Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next
author David S. Miller <davem@davemloft.net>
Mon, 9 May 2016 19:02:58 +0000 (15:02 -0400)
committer David S. Miller <davem@davemloft.net>
Mon, 9 May 2016 19:02:58 +0000 (15:02 -0400)
Pablo Neira Ayuso says:

====================
Netfilter updates for net-next

The following large patchset contains Netfilter updates for your
net-next tree. My initial intention was to send this in two batches,
but by the time I looked back it had already grown into this single
pile.

Several updates for IPVS from Marco Angaroni:

1) Allow SIP connections originating from real-servers to be load
   balanced by the SIP persistence engine as is already implemented
   in the other direction.

2) Release connections immediately for one-packet scheduling (OPS)
   in IPVS, instead of deferring this to a timer and RCU callback.

3) Skip deleting the conntrack for every packet in OPS mode, and don't
   call nf_conntrack_alter_reply() since no reply is expected.

4) Enable drop on exhaustion for OPS + SIP persistence.

Miscellaneous conntrack updates from Florian Westphal, including a fix
for hash resizing:

5) Move the conntrack generation counter out of the conntrack pernet
   structure, since it is only used by init_ns to allow hash resizing.

6) Use get_random_once() from the packet path to initialize the hash
   random seed, instead of our own open-coded seeding (a sketch follows
   this group of updates).

7) Don't disable BH from ____nf_conntrack_find() for statistics,
   use NF_CT_STAT_INC_ATOMIC() instead.

8) Fix lookup race during conntrack hash resizing.

9) Introduce clash resolution on conntrack insertion for connectionless
   protocols.
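
A minimal sketch of the idea behind 6), assuming <linux/once.h> and
<linux/jhash.h>; the file-local seed and function name are for
illustration only, not the exact layout of the patch:

    static u32 hash_conntrack_sketch(const struct nf_conntrack_tuple *tuple)
    {
            static u32 hash_rnd;    /* illustrative file-local seed */

            /* Lazily seeded on the first lookup; get_random_once()
             * performs its get_random_bytes() call exactly once. */
            get_random_once(&hash_rnd, sizeof(hash_rnd));

            /* Hashes the whole tuple, padding included, for brevity. */
            return jhash2((const u32 *)tuple,
                          sizeof(*tuple) / sizeof(u32), hash_rnd);
    }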

Then, Florian's netns rework to get rid of the per-netns conntrack
tables, so that a single table is used for all of them. There was
consensus on this change during NFWS 2015 and, on top of that, the
per-netns tables have recently been pointed out as a source of multiple
problems with unprivileged netns:

10) Use a single conntrack hashtable for all namespaces. Include netns
    in object comparisons and make it part of the hash calculation (see
    the sketch after this group). Adapt early_drop() to consider netns.

11) Use a single expectation and NAT hashtable for all namespaces.

12) Use a single slab cache for all namespaces for conntrack objects.

13) Skip full table scanning from nf_ct_iterate_cleanup() if the pernet
    conntrack counter tells us the table is empty (i.e. equals zero).
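
To illustrate 10), a hedged sketch of what a lookup comparison has to
include once the table is shared, assuming the usual nf_conntrack
headers; nf_ct_tuple_equal(), net_eq() and nf_ct_net() are existing
kernel helpers, the wrapper name is made up:

    static bool ct_key_equal_sketch(const struct nf_conn *ct,
                                    const struct nf_conntrack_tuple *tuple,
                                    const struct net *net)
    {
            /* A tuple match alone is no longer enough: two namespaces
             * may hold identical tuples in the same global table. */
            return nf_ct_tuple_equal(tuple,
                                     &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple) &&
                   net_eq(net, nf_ct_net(ct));
    }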

Fixes for nf_tables interval set element handling, support for setting
conntrack connlabels, and set names up to 32 bytes:

14) Parse element flags from the element deletion path and pass them up
    to the backend set implementation.

15) Allow adjacent intervals in the rbtree set type for dynamic interval
    updates.

16) Add support to set connlabel from nf_tables, from Florian Westphal.

17) Allow set names up to 32 bytes in nf_tables.
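
For 17), the change boils down to a larger UAPI limit plus the matching
netlink policy bound; NFT_SET_MAXNAMELEN and NFTA_SET_NAME are the names
I'd expect here, but treat the exact policy layout as an assumption:

    #define NFT_SET_MAXNAMELEN      32      /* previously 16 */

    static const struct nla_policy nft_set_policy_sketch[] = {
            [NFTA_SET_NAME] = { .type = NLA_STRING,
                                .len  = NFT_SET_MAXNAMELEN - 1 },
    };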

Several x_tables fixes and updates:

18) Fix incorrect use of IS_ERR_VALUE() in x_tables, original patch
    from Andrzej Hajda.
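
A hedged illustration of the misuse behind 18); the helper below is a
hypothetical stand-in. The point is that IS_ERR_VALUE() expects an
unsigned long in the error-pointer range, so feeding it a plain int
return code is fragile (and outright wrong for unsigned int on 64-bit,
where the value is zero-extended out of that range):

    static int some_helper(void)    /* hypothetical, returns 0 or -errno */
    {
            return -EINVAL;
    }

    static int check_sketch(void)
    {
            int ret = some_helper();

            /* before: if (IS_ERR_VALUE(ret)) ...  -- the misuse */
            if (ret < 0)            /* after: plain negative-errno check */
                    return ret;
            return 0;
    }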

And finally, miscellaneous netfilter updates such as:

19) Disable automatic helper assignment by default. Note this proc knob
    was introduced by a9006892643a ("netfilter: nf_ct_helper: allow to
    disable automatic helper assignment") 4 years ago to start moving
    towards explicit conntrack helper configuration via the iptables CT
    target.

20) Get rid of obsolete and inconsistent debugging instrumentation
    in x_tables.

21) Remove an unnecessary NULL check after ip6_route_output().
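
The rationale for 21), as a hedged sketch assuming <net/ip6_route.h>:
ip6_route_output() never returns NULL; on failure it hands back a dst
whose ->error field is set, so that is what callers should test:

    static struct dst_entry *route_sketch(struct net *net,
                                          struct flowi6 *fl6)
    {
            struct dst_entry *dst = ip6_route_output(net, NULL, fl6);

            if (dst->error) {       /* not: if (!dst) */
                    dst_release(dst);
                    return NULL;
            }
            return dst;
    }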
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
1041 files changed:
.mailmap
Documentation/accounting/getdelays.c
Documentation/devicetree/bindings/arc/archs-pct.txt
Documentation/devicetree/bindings/arc/pct.txt
Documentation/devicetree/bindings/arm/cpus.txt
Documentation/devicetree/bindings/btmrvl.txt [deleted file]
Documentation/devicetree/bindings/i2c/i2c-rk3x.txt
Documentation/devicetree/bindings/net/apm-xgene-enet.txt
Documentation/devicetree/bindings/net/cpsw.txt
Documentation/devicetree/bindings/net/hisilicon-hns-dsaf.txt
Documentation/devicetree/bindings/net/hisilicon-hns-nic.txt
Documentation/devicetree/bindings/net/marvell-bt-sd8xxx.txt [new file with mode: 0644]
Documentation/devicetree/bindings/net/microchip,enc28j60.txt [new file with mode: 0644]
Documentation/devicetree/bindings/net/wireless/marvell-sd8xxx.txt [new file with mode: 0644]
Documentation/devicetree/bindings/phy/rockchip-dp-phy.txt
Documentation/devicetree/bindings/phy/rockchip-emmc-phy.txt
Documentation/input/event-codes.txt
Documentation/networking/altera_tse.txt
Documentation/networking/bonding.txt
Documentation/networking/filter.txt
Documentation/networking/gen_stats.txt
Documentation/networking/ipvlan.txt
Documentation/networking/netdev-features.txt
Documentation/networking/netdevices.txt
Documentation/networking/pktgen.txt
Documentation/networking/vrf.txt
Documentation/networking/xfrm_sync.txt
Documentation/sysctl/vm.txt
Documentation/x86/x86_64/mm.txt
MAINTAINERS
Makefile
arch/arc/Kconfig
arch/arc/include/asm/irqflags-arcv2.h
arch/arc/kernel/entry-arcv2.S
arch/arc/kernel/entry-compact.S
arch/arc/mm/init.c
arch/arm/boot/dts/am33xx.dtsi
arch/arm/boot/dts/am4372.dtsi
arch/arm/boot/dts/am57xx-beagle-x15.dts
arch/arm/boot/dts/dm814x-clocks.dtsi
arch/arm/boot/dts/dra62x-clocks.dtsi
arch/arm/boot/dts/dra7xx-clocks.dtsi
arch/arm/boot/dts/qcom-msm8974.dtsi
arch/arm/boot/dts/r8a7791-koelsch.dts
arch/arm/boot/dts/r8a7791-porter.dts
arch/arm/boot/dts/r8a7791.dtsi
arch/arm/configs/multi_v5_defconfig
arch/arm/configs/mvebu_v7_defconfig
arch/arm/configs/orion5x_defconfig
arch/arm/mach-imx/devices/platform-sdhci-esdhc-imx.c
arch/arm/mach-omap2/clockdomains7xx_data.c
arch/arm/mach-omap2/io.c
arch/arm/mach-omap2/omap-wakeupgen.c
arch/arm/mach-omap2/pm34xx.c
arch/arm/mach-shmobile/timer.c
arch/arm64/boot/dts/apm/apm-shadowcat.dtsi
arch/arm64/boot/dts/apm/apm-storm.dtsi
arch/arm64/boot/dts/hisilicon/hip05_hns.dtsi
arch/arm64/boot/dts/socionext/uniphier-ph1-ld20-ref.dts
arch/arm64/boot/dts/socionext/uniphier-ph1-ld20.dtsi
arch/arm64/kernel/head.S
arch/arm64/kernel/smp_spin_table.c
arch/nios2/lib/memset.c
arch/powerpc/include/asm/systbl.h
arch/powerpc/include/asm/unistd.h
arch/powerpc/include/uapi/asm/cputable.h
arch/powerpc/include/uapi/asm/unistd.h
arch/powerpc/kernel/prom.c
arch/s390/include/asm/mmu.h
arch/s390/include/asm/mmu_context.h
arch/s390/include/asm/pgalloc.h
arch/s390/include/asm/processor.h
arch/s390/include/asm/tlbflush.h
arch/s390/mm/init.c
arch/s390/mm/mmap.c
arch/s390/mm/pgalloc.c
arch/s390/pci/pci_dma.c
arch/sparc/configs/sparc32_defconfig
arch/sparc/configs/sparc64_defconfig
arch/sparc/include/asm/spitfire.h
arch/sparc/include/uapi/asm/unistd.h
arch/sparc/kernel/cherrs.S
arch/sparc/kernel/cpu.c
arch/sparc/kernel/cpumap.c
arch/sparc/kernel/fpu_traps.S
arch/sparc/kernel/head_64.S
arch/sparc/kernel/misctrap.S
arch/sparc/kernel/pci.c
arch/sparc/kernel/setup_64.c
arch/sparc/kernel/spiterrs.S
arch/sparc/kernel/systbls_32.S
arch/sparc/kernel/systbls_64.S
arch/sparc/kernel/utrap.S
arch/sparc/kernel/vio.c
arch/sparc/kernel/vmlinux.lds.S
arch/sparc/kernel/winfixup.S
arch/sparc/mm/init_64.c
arch/tile/configs/tilegx_defconfig
arch/tile/configs/tilepro_defconfig
arch/um/drivers/net_kern.c
arch/x86/events/amd/core.c
arch/x86/events/intel/core.c
arch/x86/events/intel/lbr.c
arch/x86/events/intel/pt.c
arch/x86/events/intel/pt.h
arch/x86/events/intel/rapl.c
arch/x86/include/asm/hugetlb.h
arch/x86/include/asm/perf_event.h
arch/x86/kernel/apic/vector.c
arch/x86/kernel/cpu/mshyperv.c
arch/x86/kernel/head_32.S
arch/x86/kvm/vmx.c
arch/x86/mm/setup_nx.c
arch/x86/xen/spinlock.c
arch/xtensa/platforms/iss/network.c
drivers/block/rbd.c
drivers/bluetooth/ath3k.c
drivers/bluetooth/btmrvl_drv.h
drivers/bluetooth/btmrvl_main.c
drivers/bluetooth/btmrvl_sdio.c
drivers/bluetooth/btmrvl_sdio.h
drivers/bluetooth/btusb.c
drivers/bluetooth/hci_intel.c
drivers/bluetooth/hci_vhci.c
drivers/char/pcmcia/synclink_cs.c
drivers/clk/imx/clk-imx6q.c
drivers/clocksource/tango_xtal.c
drivers/cpufreq/cpufreq_governor.c
drivers/cpufreq/intel_pstate.c
drivers/crypto/talitos.c
drivers/edac/i7core_edac.c
drivers/edac/sb_edac.c
drivers/firewire/net.c
drivers/firmware/efi/vars.c
drivers/firmware/psci.c
drivers/gpio/gpio-rcar.c
drivers/gpio/gpiolib-acpi.c
drivers/gpu/drm/amd/amdgpu/amdgpu.h
drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
drivers/gpu/drm/drm_dp_mst_topology.c
drivers/gpu/drm/etnaviv/etnaviv_gpu.c
drivers/gpu/drm/i915/i915_drv.h
drivers/gpu/drm/i915/i915_gem_userptr.c
drivers/gpu/drm/i915/intel_lrc.c
drivers/gpu/drm/i915/intel_pm.c
drivers/gpu/drm/i915/intel_ringbuffer.c
drivers/gpu/drm/i915/intel_uncore.c
drivers/gpu/drm/nouveau/nouveau_connector.c
drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
drivers/gpu/drm/radeon/evergreen.c
drivers/gpu/drm/radeon/evergreen_reg.h
drivers/gpu/drm/radeon/radeon_atpx_handler.c
drivers/gpu/drm/radeon/radeon_connectors.c
drivers/gpu/drm/radeon/radeon_device.c
drivers/gpu/drm/radeon/radeon_ttm.c
drivers/gpu/drm/radeon/si_dpm.c
drivers/gpu/drm/ttm/ttm_bo.c
drivers/gpu/drm/virtio/virtgpu_display.c
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
drivers/hid/hid-ids.h
drivers/hid/usbhid/hid-quirks.c
drivers/hid/wacom_wac.c
drivers/i2c/busses/Kconfig
drivers/i2c/busses/i2c-cpm.c
drivers/i2c/busses/i2c-exynos5.c
drivers/i2c/busses/i2c-ismt.c
drivers/i2c/busses/i2c-rk3x.c
drivers/infiniband/core/cache.c
drivers/infiniband/core/ucm.c
drivers/infiniband/core/ucma.c
drivers/infiniband/core/uverbs_main.c
drivers/infiniband/core/verbs.c
drivers/infiniband/hw/cxgb3/iwch_provider.c
drivers/infiniband/hw/cxgb4/cq.c
drivers/infiniband/hw/cxgb4/provider.c
drivers/infiniband/hw/cxgb4/qp.c
drivers/infiniband/hw/i40iw/i40iw_main.c
drivers/infiniband/hw/mlx4/qp.c
drivers/infiniband/hw/mlx5/main.c
drivers/infiniband/hw/nes/nes_nic.c
drivers/infiniband/hw/qib/qib_file_ops.c
drivers/infiniband/sw/rdmavt/qp.c
drivers/infiniband/ulp/ipoib/ipoib_cm.c
drivers/infiniband/ulp/ipoib/ipoib_ib.c
drivers/infiniband/ulp/ipoib/ipoib_main.c
drivers/input/joystick/xpad.c
drivers/input/misc/arizona-haptics.c
drivers/input/misc/pmic8xxx-pwrkey.c
drivers/input/misc/twl4030-vibra.c
drivers/input/misc/twl6040-vibra.c
drivers/input/tablet/gtco.c
drivers/iommu/amd_iommu.c
drivers/iommu/arm-smmu.c
drivers/irqchip/irq-mips-gic.c
drivers/isdn/hardware/eicon/message.c
drivers/isdn/hysdn/hysdn_net.c
drivers/isdn/i4l/isdn_net.c
drivers/isdn/i4l/isdn_x25iface.c
drivers/md/md.c
drivers/md/raid0.c
drivers/md/raid5.c
drivers/media/usb/usbvision/usbvision-video.c
drivers/media/v4l2-core/videobuf2-core.c
drivers/media/v4l2-core/videobuf2-memops.c
drivers/media/v4l2-core/videobuf2-v4l2.c
drivers/message/fusion/mptlan.c
drivers/misc/cxl/context.c
drivers/misc/cxl/cxl.h
drivers/misc/cxl/irq.c
drivers/misc/cxl/native.c
drivers/mmc/host/Kconfig
drivers/mmc/host/sdhci-acpi.c
drivers/mmc/host/sunxi-mmc.c
drivers/net/Kconfig
drivers/net/appletalk/cops.c
drivers/net/can/mscan/mscan.c
drivers/net/can/usb/ems_usb.c
drivers/net/can/usb/esd_usb2.c
drivers/net/can/usb/peak_usb/pcan_usb_core.c
drivers/net/cris/eth_v10.c
drivers/net/dsa/Kconfig
drivers/net/dsa/Makefile
drivers/net/dsa/mv88e6123.c [deleted file]
drivers/net/dsa/mv88e6131.c [deleted file]
drivers/net/dsa/mv88e6171.c [deleted file]
drivers/net/dsa/mv88e6352.c [deleted file]
drivers/net/dsa/mv88e6xxx.c
drivers/net/dsa/mv88e6xxx.h
drivers/net/ethernet/3com/3c509.c
drivers/net/ethernet/3com/3c515.c
drivers/net/ethernet/3com/3c574_cs.c
drivers/net/ethernet/3com/3c589_cs.c
drivers/net/ethernet/3com/3c59x.c
drivers/net/ethernet/8390/axnet_cs.c
drivers/net/ethernet/8390/lib8390.c
drivers/net/ethernet/adaptec/starfire.c
drivers/net/ethernet/adi/bfin_mac.c
drivers/net/ethernet/agere/et131x.c
drivers/net/ethernet/allwinner/sun4i-emac.c
drivers/net/ethernet/amd/7990.c
drivers/net/ethernet/amd/a2065.c
drivers/net/ethernet/amd/atarilance.c
drivers/net/ethernet/amd/au1000_eth.c
drivers/net/ethernet/amd/declance.c
drivers/net/ethernet/amd/lance.c
drivers/net/ethernet/amd/ni65.c
drivers/net/ethernet/amd/nmclan_cs.c
drivers/net/ethernet/amd/pcnet32.c
drivers/net/ethernet/amd/sunlance.c
drivers/net/ethernet/apm/xgene/xgene_enet_cle.c
drivers/net/ethernet/apm/xgene/xgene_enet_cle.h
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c
drivers/net/ethernet/apm/xgene/xgene_enet_main.c
drivers/net/ethernet/apm/xgene/xgene_enet_main.h
drivers/net/ethernet/atheros/alx/main.c
drivers/net/ethernet/atheros/atl1c/atl1c.h
drivers/net/ethernet/atheros/atl1c/atl1c_main.c
drivers/net/ethernet/atheros/atl1e/atl1e.h
drivers/net/ethernet/atheros/atl1e/atl1e_main.c
drivers/net/ethernet/broadcom/bcmsysport.c
drivers/net/ethernet/broadcom/bnxt/bnxt.c
drivers/net/ethernet/broadcom/bnxt/bnxt.h
drivers/net/ethernet/broadcom/cnic.c
drivers/net/ethernet/broadcom/genet/bcmgenet.c
drivers/net/ethernet/broadcom/sb1250-mac.c
drivers/net/ethernet/broadcom/tg3.c
drivers/net/ethernet/cadence/macb.c
drivers/net/ethernet/cavium/liquidio/lio_main.c
drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
drivers/net/ethernet/cavium/thunder/nicvf_main.c
drivers/net/ethernet/chelsio/cxgb/sge.c
drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c
drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
drivers/net/ethernet/chelsio/cxgb4/sge.c
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
drivers/net/ethernet/chelsio/cxgb4vf/adapter.h
drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
drivers/net/ethernet/chelsio/cxgb4vf/sge.c
drivers/net/ethernet/chelsio/cxgb4vf/t4vf_common.h
drivers/net/ethernet/chelsio/cxgb4vf/t4vf_hw.c
drivers/net/ethernet/davicom/dm9000.c
drivers/net/ethernet/dec/tulip/de4x5.c
drivers/net/ethernet/dec/tulip/dmfe.c
drivers/net/ethernet/dec/tulip/pnic.c
drivers/net/ethernet/dec/tulip/tulip_core.c
drivers/net/ethernet/dec/tulip/uli526x.c
drivers/net/ethernet/dec/tulip/winbond-840.c
drivers/net/ethernet/dlink/dl2k.c
drivers/net/ethernet/dlink/sundance.c
drivers/net/ethernet/fealnx.c
drivers/net/ethernet/freescale/fec_mpc52xx.c
drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
drivers/net/ethernet/freescale/gianfar.c
drivers/net/ethernet/freescale/gianfar_ethtool.c
drivers/net/ethernet/freescale/ucc_geth_ethtool.c
drivers/net/ethernet/fujitsu/fmvj18x_cs.c
drivers/net/ethernet/hisilicon/hix5hd2_gmac.c
drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c
drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
drivers/net/ethernet/hisilicon/hns/hns_enet.c
drivers/net/ethernet/hisilicon/hns/hns_enet.h
drivers/net/ethernet/hp/hp100.c
drivers/net/ethernet/i825xx/82596.c
drivers/net/ethernet/i825xx/lib82596.c
drivers/net/ethernet/i825xx/sun3_82586.c
drivers/net/ethernet/ibm/emac/core.c
drivers/net/ethernet/ibm/emac/phy.c
drivers/net/ethernet/intel/e1000e/netdev.c
drivers/net/ethernet/intel/fm10k/fm10k_pci.c
drivers/net/ethernet/intel/i40e/i40e.h
drivers/net/ethernet/intel/i40e/i40e_adminq.c
drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
drivers/net/ethernet/intel/i40e/i40e_client.h
drivers/net/ethernet/intel/i40e/i40e_common.c
drivers/net/ethernet/intel/i40e/i40e_debugfs.c
drivers/net/ethernet/intel/i40e/i40e_devids.h
drivers/net/ethernet/intel/i40e/i40e_ethtool.c
drivers/net/ethernet/intel/i40e/i40e_main.c
drivers/net/ethernet/intel/i40e/i40e_nvm.c
drivers/net/ethernet/intel/i40e/i40e_prototype.h
drivers/net/ethernet/intel/i40e/i40e_ptp.c
drivers/net/ethernet/intel/i40e/i40e_txrx.c
drivers/net/ethernet/intel/i40e/i40e_txrx.h
drivers/net/ethernet/intel/i40e/i40e_type.h
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
drivers/net/ethernet/intel/i40evf/i40e_common.c
drivers/net/ethernet/intel/i40evf/i40e_devids.h
drivers/net/ethernet/intel/i40evf/i40e_txrx.c
drivers/net/ethernet/intel/i40evf/i40e_txrx.h
drivers/net/ethernet/intel/i40evf/i40e_type.h
drivers/net/ethernet/intel/i40evf/i40evf.h
drivers/net/ethernet/intel/i40evf/i40evf_ethtool.c
drivers/net/ethernet/intel/i40evf/i40evf_main.c
drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c
drivers/net/ethernet/intel/igb/igb_main.c
drivers/net/ethernet/intel/ixgbe/ixgbe.h
drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c
drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c
drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
drivers/net/ethernet/intel/ixgbe/ixgbe_common.h
drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c
drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82598.c
drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82599.c
drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c
drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c
drivers/net/ethernet/intel/ixgbe/ixgbe_model.h
drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
drivers/net/ethernet/intel/ixgbevf/defines.h
drivers/net/ethernet/intel/ixgbevf/ethtool.c
drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
drivers/net/ethernet/intel/ixgbevf/mbx.c
drivers/net/ethernet/intel/ixgbevf/vf.c
drivers/net/ethernet/intel/ixgbevf/vf.h
drivers/net/ethernet/korina.c
drivers/net/ethernet/lantiq_etop.c
drivers/net/ethernet/marvell/mvneta.c
drivers/net/ethernet/marvell/pxa168_eth.c
drivers/net/ethernet/marvell/sky2.c
drivers/net/ethernet/mellanox/mlx4/alloc.c
drivers/net/ethernet/mellanox/mlx4/en_cq.c
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
drivers/net/ethernet/mellanox/mlx4/en_resources.c
drivers/net/ethernet/mellanox/mlx4/en_rx.c
drivers/net/ethernet/mellanox/mlx4/en_tx.c
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
drivers/net/ethernet/mellanox/mlx5/core/Kconfig
drivers/net/ethernet/mellanox/mlx5/core/Makefile
drivers/net/ethernet/mellanox/mlx5/core/en.h
drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c [new file with mode: 0644]
drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h [new file with mode: 0644]
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.h
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
drivers/net/ethernet/mellanox/mlx5/core/main.c
drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
drivers/net/ethernet/mellanox/mlx5/core/port.c
drivers/net/ethernet/mellanox/mlx5/core/uar.c
drivers/net/ethernet/mellanox/mlx5/core/vport.c
drivers/net/ethernet/mellanox/mlx5/core/vxlan.c
drivers/net/ethernet/mellanox/mlx5/core/vxlan.h
drivers/net/ethernet/mellanox/mlxsw/spectrum.c
drivers/net/ethernet/micrel/ksz884x.c
drivers/net/ethernet/microchip/enc28j60.c
drivers/net/ethernet/microchip/encx24j600.c
drivers/net/ethernet/moxa/moxart_ether.c
drivers/net/ethernet/myricom/myri10ge/myri10ge.c
drivers/net/ethernet/natsemi/natsemi.c
drivers/net/ethernet/natsemi/sonic.c
drivers/net/ethernet/neterion/s2io.c
drivers/net/ethernet/nuvoton/w90p910_ether.c
drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe.h
drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
drivers/net/ethernet/packetengines/hamachi.c
drivers/net/ethernet/packetengines/yellowfin.c
drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
drivers/net/ethernet/qlogic/qed/Makefile
drivers/net/ethernet/qlogic/qed/qed.h
drivers/net/ethernet/qlogic/qed/qed_dev.c
drivers/net/ethernet/qlogic/qed/qed_hsi.h
drivers/net/ethernet/qlogic/qed/qed_init_fw_funcs.c
drivers/net/ethernet/qlogic/qed/qed_l2.c
drivers/net/ethernet/qlogic/qed/qed_main.c
drivers/net/ethernet/qlogic/qed/qed_mcp.c
drivers/net/ethernet/qlogic/qed/qed_mcp.h
drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
drivers/net/ethernet/qlogic/qed/qed_selftest.c [new file with mode: 0644]
drivers/net/ethernet/qlogic/qed/qed_selftest.h [new file with mode: 0644]
drivers/net/ethernet/qlogic/qed/qed_sp.h
drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
drivers/net/ethernet/qlogic/qede/qede.h
drivers/net/ethernet/qlogic/qede/qede_ethtool.c
drivers/net/ethernet/qlogic/qede/qede_main.c
drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
drivers/net/ethernet/qualcomm/qca_spi.c
drivers/net/ethernet/realtek/atp.c
drivers/net/ethernet/renesas/ravb_main.c
drivers/net/ethernet/renesas/sh_eth.c
drivers/net/ethernet/renesas/sh_eth.h
drivers/net/ethernet/seeq/sgiseeq.c
drivers/net/ethernet/sfc/ef10.c
drivers/net/ethernet/sgi/meth.c
drivers/net/ethernet/sis/sis900.c
drivers/net/ethernet/smsc/epic100.c
drivers/net/ethernet/smsc/smc911x.c
drivers/net/ethernet/smsc/smc9194.c
drivers/net/ethernet/smsc/smc91c92_cs.c
drivers/net/ethernet/smsc/smc91x.c
drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
drivers/net/ethernet/stmicro/stmmac/stmmac.h
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
drivers/net/ethernet/sun/niu.c
drivers/net/ethernet/sun/sungem.c
drivers/net/ethernet/synopsys/dwc_eth_qos.c
drivers/net/ethernet/tehuti/tehuti.c
drivers/net/ethernet/ti/cpsw.c
drivers/net/ethernet/ti/cpsw.h
drivers/net/ethernet/ti/davinci_emac.c
drivers/net/ethernet/ti/netcp_core.c
drivers/net/ethernet/ti/tlan.c
drivers/net/ethernet/tile/tilepro.c
drivers/net/ethernet/toshiba/ps3_gelic_wireless.c
drivers/net/ethernet/toshiba/spider_net.c
drivers/net/ethernet/tundra/tsi108_eth.c
drivers/net/ethernet/via/via-rhine.c
drivers/net/ethernet/wiznet/Kconfig
drivers/net/ethernet/wiznet/w5100-spi.c
drivers/net/ethernet/wiznet/w5100.c
drivers/net/ethernet/wiznet/w5100.h
drivers/net/ethernet/wiznet/w5300.c
drivers/net/ethernet/xilinx/ll_temac_main.c
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
drivers/net/ethernet/xilinx/xilinx_emaclite.c
drivers/net/ethernet/xircom/xirc2ps_cs.c
drivers/net/fjes/fjes_main.c
drivers/net/hamradio/baycom_epp.c
drivers/net/hamradio/hdlcdrv.c
drivers/net/hamradio/mkiss.c
drivers/net/hamradio/scc.c
drivers/net/hamradio/yam.c
drivers/net/ieee802154/at86rf230.c
drivers/net/ieee802154/atusb.c
drivers/net/ieee802154/mrf24j40.c
drivers/net/ifb.c
drivers/net/ipvlan/ipvlan_main.c
drivers/net/irda/ali-ircc.c
drivers/net/irda/bfin_sir.c
drivers/net/irda/irda-usb.c
drivers/net/irda/nsc-ircc.c
drivers/net/irda/smsc-ircc2.c
drivers/net/irda/stir4200.c
drivers/net/irda/via-ircc.c
drivers/net/macsec.c
drivers/net/macvlan.c
drivers/net/macvtap.c
drivers/net/phy/at803x.c
drivers/net/phy/fixed_phy.c
drivers/net/phy/mdio_bus.c
drivers/net/phy/phy_device.c
drivers/net/ppp/ppp_generic.c
drivers/net/rionet.c
drivers/net/slip/slip.c
drivers/net/tun.c
drivers/net/usb/catc.c
drivers/net/usb/kaweth.c
drivers/net/usb/lan78xx.c
drivers/net/usb/pegasus.c
drivers/net/usb/r8152.c
drivers/net/usb/rtl8150.c
drivers/net/usb/smsc75xx.c
drivers/net/usb/smsc95xx.c
drivers/net/usb/usbnet.c
drivers/net/vrf.c
drivers/net/vxlan.c
drivers/net/wan/cosa.c
drivers/net/wan/farsync.c
drivers/net/wan/lmc/lmc_main.c
drivers/net/wan/sbni.c
drivers/net/wimax/i2400m/netdev.c
drivers/net/wireless/ath/ath10k/ce.c
drivers/net/wireless/ath/ath10k/ce.h
drivers/net/wireless/ath/ath10k/core.c
drivers/net/wireless/ath/ath10k/core.h
drivers/net/wireless/ath/ath10k/debug.c
drivers/net/wireless/ath/ath10k/debug.h
drivers/net/wireless/ath/ath10k/htc.h
drivers/net/wireless/ath/ath10k/htt.c
drivers/net/wireless/ath/ath10k/htt.h
drivers/net/wireless/ath/ath10k/htt_rx.c
drivers/net/wireless/ath/ath10k/htt_tx.c
drivers/net/wireless/ath/ath10k/hw.h
drivers/net/wireless/ath/ath10k/mac.c
drivers/net/wireless/ath/ath10k/mac.h
drivers/net/wireless/ath/ath10k/pci.c
drivers/net/wireless/ath/ath10k/pci.h
drivers/net/wireless/ath/ath10k/swap.c
drivers/net/wireless/ath/ath10k/swap.h
drivers/net/wireless/ath/ath10k/targaddrs.h
drivers/net/wireless/ath/ath10k/testmode.c
drivers/net/wireless/ath/ath10k/thermal.h
drivers/net/wireless/ath/ath10k/txrx.c
drivers/net/wireless/ath/ath10k/wmi-tlv.c
drivers/net/wireless/ath/ath10k/wmi-tlv.h
drivers/net/wireless/ath/ath10k/wmi.c
drivers/net/wireless/ath/ath10k/wmi.h
drivers/net/wireless/ath/ath10k/wow.c
drivers/net/wireless/ath/ath9k/ar5008_phy.c
drivers/net/wireless/ath/ath9k/ar9002_phy.c
drivers/net/wireless/ath/ath9k/htc_drv_main.c
drivers/net/wireless/ath/ath9k/hw.c
drivers/net/wireless/ath/ath9k/init.c
drivers/net/wireless/ath/ath9k/pci.c
drivers/net/wireless/ath/wcn36xx/debug.c
drivers/net/wireless/ath/wcn36xx/hal.h
drivers/net/wireless/ath/wcn36xx/main.c
drivers/net/wireless/ath/wcn36xx/pmc.c
drivers/net/wireless/ath/wcn36xx/smd.c
drivers/net/wireless/ath/wcn36xx/smd.h
drivers/net/wireless/ath/wcn36xx/txrx.c
drivers/net/wireless/ath/wcn36xx/wcn36xx.h
drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h
drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.h
drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.h
drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.h
drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
drivers/net/wireless/cisco/airo.c
drivers/net/wireless/intel/ipw2x00/ipw2100.c
drivers/net/wireless/intel/ipw2x00/ipw2200.c
drivers/net/wireless/intel/iwlwifi/iwl-8000.c
drivers/net/wireless/intel/iwlwifi/iwl-drv.c
drivers/net/wireless/intel/iwlwifi/iwl-fw-file.h
drivers/net/wireless/intel/iwlwifi/mvm/fw-dbg.c
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
drivers/net/wireless/intersil/hostap/hostap_hw.c
drivers/net/wireless/intersil/orinoco/main.c
drivers/net/wireless/intersil/orinoco/orinoco_usb.c
drivers/net/wireless/intersil/prism54/isl_38xx.c
drivers/net/wireless/mac80211_hwsim.c
drivers/net/wireless/mac80211_hwsim.h
drivers/net/wireless/marvell/mwifiex/cfg80211.c
drivers/net/wireless/marvell/mwifiex/cmdevt.c
drivers/net/wireless/marvell/mwifiex/init.c
drivers/net/wireless/marvell/mwifiex/main.c
drivers/net/wireless/marvell/mwifiex/main.h
drivers/net/wireless/marvell/mwifiex/pcie.c
drivers/net/wireless/marvell/mwifiex/pcie.h
drivers/net/wireless/marvell/mwifiex/scan.c
drivers/net/wireless/marvell/mwifiex/sdio.c
drivers/net/wireless/marvell/mwifiex/sdio.h
drivers/net/wireless/marvell/mwifiex/sta_cmd.c
drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
drivers/net/wireless/marvell/mwifiex/txrx.c
drivers/net/wireless/marvell/mwifiex/uap_txrx.c
drivers/net/wireless/marvell/mwifiex/usb.c
drivers/net/wireless/ralink/rt2x00/rt2800lib.c
drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c
drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.c
drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_regs.h
drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
drivers/net/wireless/wl3501_cs.c
drivers/net/wireless/zydas/zd1201.c
drivers/of/of_mdio.c
drivers/perf/arm_pmu.c
drivers/phy/phy-rockchip-dp.c
drivers/phy/phy-rockchip-emmc.c
drivers/pinctrl/freescale/Kconfig
drivers/pinctrl/mediatek/pinctrl-mtk-common.c
drivers/pinctrl/pinctrl-single.c
drivers/platform/x86/toshiba_acpi.c
drivers/rapidio/devices/rio_mport_cdev.c
drivers/s390/char/sclp_ctl.c
drivers/s390/net/ctcm_main.c
drivers/s390/net/ctcm_mpc.c
drivers/s390/net/netiucv.c
drivers/s390/net/qeth_core_main.c
drivers/scsi/cxgbi/libcxgbi.c
drivers/soc/mediatek/mtk-scpsys.c
drivers/staging/media/davinci_vpfe/vpfe_video.c
drivers/staging/rdma/hfi1/TODO
drivers/staging/rdma/hfi1/file_ops.c
drivers/staging/rdma/hfi1/mmu_rb.c
drivers/staging/rdma/hfi1/mmu_rb.h
drivers/staging/rdma/hfi1/qp.c
drivers/staging/rdma/hfi1/user_exp_rcv.c
drivers/staging/rdma/hfi1/user_sdma.c
drivers/staging/rtl8192e/rtl8192e/rtl_core.c
drivers/staging/rtl8192e/rtllib_softmac.c
drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c
drivers/staging/rtl8192u/r8192U_core.c
drivers/staging/wlan-ng/p80211netdev.c
drivers/thermal/Kconfig
drivers/thermal/hisi_thermal.c
drivers/thermal/mtk_thermal.c
drivers/thermal/of-thermal.c
drivers/thermal/power_allocator.c
drivers/thermal/thermal_core.c
drivers/tty/n_gsm.c
drivers/tty/pty.c
drivers/tty/serial/8250/8250_port.c
drivers/tty/serial/8250/Kconfig
drivers/tty/serial/uartlite.c
drivers/tty/synclink.c
drivers/tty/synclink_gt.c
drivers/tty/synclinkmp.c
drivers/tty/tty_io.c
drivers/usb/dwc3/core.c
drivers/usb/dwc3/debugfs.c
drivers/usb/dwc3/dwc3-omap.c
drivers/usb/dwc3/gadget.c
drivers/usb/gadget/composite.c
drivers/usb/gadget/function/f_fs.c
drivers/usb/gadget/function/u_ether.c
fs/ceph/mds_client.c
fs/devpts/inode.c
fs/fuse/file.c
fs/ocfs2/dlm/dlmmaster.c
fs/proc/task_mmu.c
fs/quota/netlink.c
fs/udf/super.c
fs/udf/udfdecl.h
fs/udf/unicode.c
include/asm-generic/futex.h
include/drm/drm_cache.h
include/linux/bpf.h
include/linux/ceph/auth.h
include/linux/ceph/osd_client.h
include/linux/cgroup-defs.h
include/linux/cpuset.h
include/linux/devpts_fs.h
include/linux/filter.h
include/linux/hash.h
include/linux/huge_mm.h
include/linux/ieee802154.h
include/linux/if_ether.h
include/linux/lockdep.h
include/linux/mlx4/device.h
include/linux/mlx5/device.h
include/linux/mlx5/driver.h
include/linux/mlx5/fs.h
include/linux/mlx5/port.h
include/linux/mlx5/vport.h
include/linux/mm.h
include/linux/net.h
include/linux/netdevice.h
include/linux/nl802154.h
include/linux/qed/qed_if.h
include/linux/skbuff.h
include/linux/soc/qcom/smd.h
include/linux/socket.h
include/linux/thermal.h
include/linux/tty_driver.h
include/linux/u64_stats_sync.h
include/media/videobuf2-core.h
include/net/6lowpan.h
include/net/bluetooth/hci.h
include/net/codel.h
include/net/codel_impl.h [new file with mode: 0644]
include/net/codel_qdisc.h [new file with mode: 0644]
include/net/dsa.h
include/net/dst.h
include/net/fq.h [new file with mode: 0644]
include/net/fq_impl.h [new file with mode: 0644]
include/net/gen_stats.h
include/net/gre.h
include/net/icmp.h
include/net/ip.h
include/net/ip6_tunnel.h
include/net/ip_tunnels.h
include/net/ipv6.h
include/net/mac802154.h
include/net/rtnetlink.h
include/net/sctp/sctp.h
include/net/sctp/structs.h
include/net/snmp.h
include/net/sock.h
include/net/switchdev.h
include/net/tc_act/tc_mirred.h
include/net/tcp.h
include/net/transp_v6.h
include/net/udp.h
include/net/vxlan.h
include/net/xfrm.h
include/rdma/ib.h
include/sound/hda_i915.h
include/sound/hda_regmap.h
include/uapi/asm-generic/unistd.h
include/uapi/linux/bpf.h
include/uapi/linux/gen_stats.h
include/uapi/linux/if_bridge.h
include/uapi/linux/if_link.h
include/uapi/linux/if_macsec.h
include/uapi/linux/ila.h
include/uapi/linux/inet_diag.h
include/uapi/linux/ip_vs.h
include/uapi/linux/l2tp.h
include/uapi/linux/nl80211.h
include/uapi/linux/openvswitch.h
include/uapi/linux/pkt_cls.h
include/uapi/linux/pkt_sched.h
include/uapi/linux/qrtr.h [new file with mode: 0644]
include/uapi/linux/quota.h
include/uapi/linux/rtnetlink.h
include/uapi/linux/tc_act/tc_bpf.h
include/uapi/linux/tc_act/tc_connmark.h
include/uapi/linux/tc_act/tc_csum.h
include/uapi/linux/tc_act/tc_defact.h
include/uapi/linux/tc_act/tc_gact.h
include/uapi/linux/tc_act/tc_ife.h
include/uapi/linux/tc_act/tc_ipt.h
include/uapi/linux/tc_act/tc_mirred.h
include/uapi/linux/tc_act/tc_nat.h
include/uapi/linux/tc_act/tc_pedit.h
include/uapi/linux/tc_act/tc_skbedit.h
include/uapi/linux/tc_act/tc_vlan.h
include/uapi/linux/v4l2-dv-timings.h
kernel/bpf/core.c
kernel/bpf/inode.c
kernel/bpf/syscall.c
kernel/bpf/verifier.c
kernel/cgroup.c
kernel/cpu.c
kernel/cpuset.c
kernel/events/core.c
kernel/futex.c
kernel/irq/ipi.c
kernel/kcov.c
kernel/kexec_core.c
kernel/locking/lockdep.c
kernel/locking/lockdep_proc.c
kernel/locking/qspinlock_stat.h
kernel/workqueue.c
lib/stackdepot.c
mm/huge_memory.c
mm/memcontrol.c
mm/memory-failure.c
mm/memory.c
mm/migrate.c
mm/page_io.c
mm/swap.c
mm/vmscan.c
net/6lowpan/6lowpan_i.h
net/6lowpan/core.c
net/6lowpan/debugfs.c
net/6lowpan/iphc.c
net/6lowpan/nhc_udp.c
net/Kconfig
net/Makefile
net/atm/lec.c
net/batman-adv/bat_iv_ogm.c
net/batman-adv/bat_v.c
net/batman-adv/bat_v_ogm.c
net/batman-adv/bridge_loop_avoidance.c
net/batman-adv/debugfs.c
net/batman-adv/distributed-arp-table.c
net/batman-adv/fragmentation.c
net/batman-adv/hard-interface.c
net/batman-adv/icmp_socket.c
net/batman-adv/main.c
net/batman-adv/main.h
net/batman-adv/multicast.c
net/batman-adv/network-coding.c
net/batman-adv/originator.c
net/batman-adv/packet.h
net/batman-adv/routing.c
net/batman-adv/send.c
net/batman-adv/soft-interface.c
net/batman-adv/translation-table.c
net/batman-adv/types.h
net/bluetooth/6lowpan.c
net/bluetooth/bnep/netdev.c
net/bridge/br_mdb.c
net/bridge/br_multicast.c
net/bridge/br_netfilter_hooks.c
net/bridge/br_netfilter_ipv6.c
net/bridge/br_netlink.c
net/bridge/br_private.h
net/bridge/br_sysfs_br.c
net/bridge/br_vlan.c
net/ceph/auth.c
net/ceph/auth_none.c
net/ceph/auth_none.h
net/ceph/auth_x.c
net/ceph/auth_x.h
net/ceph/osd_client.c
net/core/dev.c
net/core/filter.c
net/core/gen_stats.c
net/core/neighbour.c
net/core/net-procfs.c
net/core/pktgen.c
net/core/rtnetlink.c
net/core/skbuff.c
net/core/sock.c
net/core/sock_diag.c
net/dccp/dccp.h
net/dccp/input.c
net/dccp/ipv4.c
net/dccp/ipv6.c
net/dccp/minisocks.c
net/dccp/options.c
net/dccp/timer.c
net/dsa/slave.c
net/ieee802154/6lowpan/6lowpan_i.h
net/ieee802154/6lowpan/core.c
net/ieee802154/6lowpan/tx.c
net/ieee802154/nl-mac.c
net/ieee802154/nl802154.c
net/ipv4/arp.c
net/ipv4/fib_frontend.c
net/ipv4/gre_demux.c
net/ipv4/icmp.c
net/ipv4/inet_connection_sock.c
net/ipv4/inet_diag.c
net/ipv4/inet_hashtables.c
net/ipv4/inet_timewait_sock.c
net/ipv4/ip_forward.c
net/ipv4/ip_fragment.c
net/ipv4/ip_gre.c
net/ipv4/ip_input.c
net/ipv4/ip_sockglue.c
net/ipv4/ip_tunnel.c
net/ipv4/ip_tunnel_core.c
net/ipv4/route.c
net/ipv4/syncookies.c
net/ipv4/tcp.c
net/ipv4/tcp_cdg.c
net/ipv4/tcp_cubic.c
net/ipv4/tcp_fastopen.c
net/ipv4/tcp_input.c
net/ipv4/tcp_ipv4.c
net/ipv4/tcp_minisocks.c
net/ipv4/tcp_output.c
net/ipv4/tcp_recovery.c
net/ipv4/tcp_timer.c
net/ipv4/udp.c
net/ipv6/Kconfig
net/ipv6/addrconf.c
net/ipv6/datagram.c
net/ipv6/exthdrs.c
net/ipv6/icmp.c
net/ipv6/ila/ila.h
net/ipv6/ila/ila_common.c
net/ipv6/ila/ila_lwt.c
net/ipv6/ila/ila_xlat.c
net/ipv6/inet6_hashtables.c
net/ipv6/ip6_fib.c
net/ipv6/ip6_flowlabel.c
net/ipv6/ip6_gre.c
net/ipv6/ip6_input.c
net/ipv6/ip6_output.c
net/ipv6/ip6_tunnel.c
net/ipv6/ip6mr.c
net/ipv6/ipv6_sockglue.c
net/ipv6/ping.c
net/ipv6/raw.c
net/ipv6/reassembly.c
net/ipv6/route.c
net/ipv6/syncookies.c
net/ipv6/tcp_ipv6.c
net/ipv6/udp.c
net/irda/irlan/irlan_eth.c
net/l2tp/l2tp_core.c
net/l2tp/l2tp_ip6.c
net/l2tp/l2tp_netlink.c
net/mac80211/iface.c
net/netfilter/ipvs/ip_vs_ctl.c
net/openvswitch/datapath.c
net/qrtr/Kconfig [new file with mode: 0644]
net/qrtr/Makefile [new file with mode: 0644]
net/qrtr/qrtr.c [new file with mode: 0644]
net/qrtr/qrtr.h [new file with mode: 0644]
net/qrtr/smd.c [new file with mode: 0644]
net/rds/tcp.c
net/rds/tcp.h
net/rds/tcp_connect.c
net/rds/tcp_listen.c
net/rds/tcp_recv.c
net/rxrpc/ar-input.c
net/sched/act_api.c
net/sched/act_bpf.c
net/sched/act_connmark.c
net/sched/act_csum.c
net/sched/act_gact.c
net/sched/act_ife.c
net/sched/act_ipt.c
net/sched/act_mirred.c
net/sched/act_nat.c
net/sched/act_pedit.c
net/sched/act_simple.c
net/sched/act_skbedit.c
net/sched/act_vlan.c
net/sched/cls_bpf.c
net/sched/cls_u32.c
net/sched/sch_api.c
net/sched/sch_codel.c
net/sched/sch_fq_codel.c
net/sched/sch_generic.c
net/sched/sch_htb.c
net/sched/sch_netem.c
net/sched/sch_tbf.c
net/sctp/chunk.c
net/sctp/input.c
net/sctp/inqueue.c
net/sctp/ipv6.c
net/sctp/sctp_diag.c
net/sctp/sm_sideeffect.c
net/sctp/ulpqueue.c
net/socket.c
net/sunrpc/xprtsock.c
net/switchdev/switchdev.c
net/tipc/core.c
net/tipc/msg.h
net/tipc/node.c
net/tipc/node.h
net/tipc/socket.c
net/tipc/socket.h
net/tipc/subscr.c
net/vmw_vsock/vmci_transport.c
net/wireless/nl80211.c
samples/bpf/Makefile
samples/bpf/README.rst [new file with mode: 0644]
samples/bpf/parse_ldabs.c [new file with mode: 0644]
samples/bpf/parse_simple.c [new file with mode: 0644]
samples/bpf/parse_varlen.c [new file with mode: 0644]
samples/bpf/test_cls_bpf.sh [new file with mode: 0755]
samples/bpf/test_verifier.c
samples/bpf/trace_output_kern.c
sound/hda/ext/hdac_ext_stream.c
sound/hda/hdac_device.c
sound/hda/hdac_i915.c
sound/hda/hdac_regmap.c
sound/pci/hda/hda_generic.c
sound/pci/hda/hda_intel.c
sound/pci/hda/patch_cirrus.c
sound/pci/hda/patch_hdmi.c
sound/pci/hda/patch_realtek.c
sound/pci/pcxhr/pcxhr_core.c
sound/soc/codecs/Kconfig
sound/soc/codecs/arizona.c
sound/soc/codecs/arizona.h
sound/soc/codecs/cs35l32.c
sound/soc/codecs/cs47l24.c
sound/soc/codecs/hdac_hdmi.c
sound/soc/codecs/nau8825.c
sound/soc/codecs/rt5640.c
sound/soc/codecs/rt5640.h
sound/soc/codecs/wm5102.c
sound/soc/codecs/wm5110.c
sound/soc/codecs/wm8962.c
sound/soc/codecs/wm8997.c
sound/soc/codecs/wm8998.c
sound/soc/intel/Kconfig
sound/soc/intel/haswell/sst-haswell-ipc.c
sound/soc/intel/skylake/skl-sst-dsp.c
sound/soc/intel/skylake/skl-topology.c
sound/soc/intel/skylake/skl-topology.h
sound/soc/intel/skylake/skl.c
sound/soc/soc-dapm.c
tools/objtool/Documentation/stack-validation.txt
tools/objtool/builtin-check.c
tools/perf/util/intel-pt.c

index 90c0aef..c156a8b 100644
--- a/.mailmap
+++ b/.mailmap
@@ -48,6 +48,9 @@ Felix Kuhling <fxkuehl@gmx.de>
 Felix Moeller <felix@derklecks.de>
 Filipe Lautert <filipe@icewall.org>
 Franck Bui-Huu <vagabon.xyz@gmail.com>
+Frank Rowand <frowand.list@gmail.com> <frowand@mvista.com>
+Frank Rowand <frowand.list@gmail.com> <frank.rowand@am.sony.com>
+Frank Rowand <frowand.list@gmail.com> <frank.rowand@sonymobile.com>
 Frank Zago <fzago@systemfabricworks.com>
 Greg Kroah-Hartman <greg@echidna.(none)>
 Greg Kroah-Hartman <gregkh@suse.de>
@@ -79,6 +82,7 @@ Kay Sievers <kay.sievers@vrfy.org>
 Kenneth W Chen <kenneth.w.chen@intel.com>
 Konstantin Khlebnikov <koct9i@gmail.com> <k.khlebnikov@samsung.com>
 Koushik <raghavendra.koushik@neterion.com>
+Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski.k@gmail.com>
 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
 Leonid I Ananiev <leonid.i.ananiev@intel.com>
 Linas Vepstas <linas@austin.ibm.com>
diff --git a/Documentation/accounting/getdelays.c b/Documentation/accounting/getdelays.c
index 7785fb5..b5ca536 100644
--- a/Documentation/accounting/getdelays.c
+++ b/Documentation/accounting/getdelays.c
@@ -505,6 +505,8 @@ int main(int argc, char *argv[])
                                                if (!loop)
                                                        goto done;
                                                break;
+                                       case TASKSTATS_TYPE_NULL:
+                                               break;
                                        default:
                                                fprintf(stderr, "Unknown nested"
                                                        " nla_type %d\n",
@@ -512,7 +514,8 @@ int main(int argc, char *argv[])
                                                break;
                                        }
                                        len2 += NLA_ALIGN(na->nla_len);
-                                       na = (struct nlattr *) ((char *) na + len2);
+                                       na = (struct nlattr *)((char *)na +
+                                                              NLA_ALIGN(na->nla_len));
                                }
                                break;
 
diff --git a/Documentation/devicetree/bindings/arc/archs-pct.txt b/Documentation/devicetree/bindings/arc/archs-pct.txt
index 1ae98b8..e4b9dce 100644
--- a/Documentation/devicetree/bindings/arc/archs-pct.txt
+++ b/Documentation/devicetree/bindings/arc/archs-pct.txt
@@ -2,7 +2,7 @@
 
 The ARC HS can be configured with a pipeline performance monitor for counting
 CPU and cache events like cache misses and hits. Like conventional PCT there
-are 100+ hardware conditions dynamically mapped to upto 32 counters.
+are 100+ hardware conditions dynamically mapped to up to 32 counters.
 It also supports overflow interrupts.
 
 Required properties:
diff --git a/Documentation/devicetree/bindings/arc/pct.txt b/Documentation/devicetree/bindings/arc/pct.txt
index 7b95884..4e874d9 100644
--- a/Documentation/devicetree/bindings/arc/pct.txt
+++ b/Documentation/devicetree/bindings/arc/pct.txt
@@ -2,7 +2,7 @@
 
 The ARC700 can be configured with a pipeline performance monitor for counting
 CPU and cache events like cache misses and hits. Like conventional PCT there
-are 100+ hardware conditions dynamically mapped to upto 32 counters
+are 100+ hardware conditions dynamically mapped to up to 32 counters
 
 Note that:
  * The ARC 700 PCT does not support interrupts; although HW events may be
diff --git a/Documentation/devicetree/bindings/arm/cpus.txt b/Documentation/devicetree/bindings/arm/cpus.txt
index ccc62f1..3f0cbbb 100644
--- a/Documentation/devicetree/bindings/arm/cpus.txt
+++ b/Documentation/devicetree/bindings/arm/cpus.txt
@@ -192,7 +192,6 @@ nodes to be present and contain the properties described below.
                          can be one of:
                            "allwinner,sun6i-a31"
                            "allwinner,sun8i-a23"
-                           "arm,psci"
                            "arm,realview-smp"
                            "brcm,bcm-nsp-smp"
                            "brcm,brahma-b15"
diff --git a/Documentation/devicetree/bindings/btmrvl.txt b/Documentation/devicetree/bindings/btmrvl.txt
deleted file mode 100644
index 58f964b..0000000
+++ /dev/null
@@ -1,29 +0,0 @@
-btmrvl
-------
-
-Required properties:
-
-  - compatible : must be "btmrvl,cfgdata"
-
-Optional properties:
-
-  - btmrvl,cal-data : Calibration data downloaded to the device during
-                     initialization. This is an array of 28 values(u8).
-
-  - btmrvl,gpio-gap : gpio and gap (in msecs) combination to be
-                     configured.
-
-Example:
-
-GPIO pin 13 is configured as a wakeup source and GAP is set to 100 msecs
-in below example.
-
-btmrvl {
-       compatible = "btmrvl,cfgdata";
-
-       btmrvl,cal-data = /bits/ 8 <
-               0x37 0x01 0x1c 0x00 0xff 0xff 0xff 0xff 0x01 0x7f 0x04 0x02
-               0x00 0x00 0xba 0xce 0xc0 0xc6 0x2d 0x00 0x00 0x00 0x00 0x00
-               0x00 0x00 0xf0 0x00>;
-       btmrvl,gpio-gap = <0x0d64>;
-};
diff --git a/Documentation/devicetree/bindings/i2c/i2c-rk3x.txt b/Documentation/devicetree/bindings/i2c/i2c-rk3x.txt
index f0d71bc..0b4a85f 100644
--- a/Documentation/devicetree/bindings/i2c/i2c-rk3x.txt
+++ b/Documentation/devicetree/bindings/i2c/i2c-rk3x.txt
@@ -6,8 +6,8 @@ RK3xxx SoCs.
 Required properties :
 
  - reg : Offset and length of the register set for the device
- - compatible : should be "rockchip,rk3066-i2c", "rockchip,rk3188-i2c" or
-               "rockchip,rk3288-i2c".
+ - compatible : should be "rockchip,rk3066-i2c", "rockchip,rk3188-i2c",
+               "rockchip,rk3228-i2c" or "rockchip,rk3288-i2c".
  - interrupts : interrupt number
  - clocks : parent clock
 
diff --git a/Documentation/devicetree/bindings/net/apm-xgene-enet.txt b/Documentation/devicetree/bindings/net/apm-xgene-enet.txt
index 078060a..05f705e 100644
--- a/Documentation/devicetree/bindings/net/apm-xgene-enet.txt
+++ b/Documentation/devicetree/bindings/net/apm-xgene-enet.txt
@@ -18,6 +18,8 @@ Required properties for all the ethernet interfaces:
   - First is the Rx interrupt.  This irq is mandatory.
   - Second is the Tx completion interrupt.
     This is supported only on SGMII based 1GbE and 10GbE interfaces.
+- channel: Ethernet to CPU, start channel (prefetch buffer) number
+  - Must map to the first irq and irqs must be sequential
 - port-id: Port number (0 or 1)
 - clocks: Reference to the clock entry.
 - local-mac-address: MAC address assigned to this device
diff --git a/Documentation/devicetree/bindings/net/cpsw.txt b/Documentation/devicetree/bindings/net/cpsw.txt
index 28a4781..0ae0649 100644
--- a/Documentation/devicetree/bindings/net/cpsw.txt
+++ b/Documentation/devicetree/bindings/net/cpsw.txt
@@ -45,13 +45,13 @@ Required properties:
 Optional properties:
 - dual_emac_res_vlan   : Specifies VID to be used to segregate the ports
 - mac-address          : See ethernet.txt file in the same directory
-- phy_id               : Specifies slave phy id
+- phy_id               : Specifies slave phy id (deprecated, use phy-handle)
 - phy-handle           : See ethernet.txt file in the same directory
 
 Slave sub-nodes:
 - fixed-link           : See fixed-link.txt file in the same directory
-                         Either the property phy_id, or the sub-node
-                         fixed-link can be specified
+
+Note: Exactly one of phy_id, phy-handle, or fixed-link must be specified.
 
 Note: "ti,hwmods" field is used to fetch the base address and irq
 resources from TI, omap hwmod data base during device registration.
diff --git a/Documentation/devicetree/bindings/net/hisilicon-hns-dsaf.txt b/Documentation/devicetree/bindings/net/hisilicon-hns-dsaf.txt
index ecacfa4..d4b7f2e 100644
--- a/Documentation/devicetree/bindings/net/hisilicon-hns-dsaf.txt
+++ b/Documentation/devicetree/bindings/net/hisilicon-hns-dsaf.txt
@@ -7,19 +7,45 @@ Required properties:
 - mode: dsa fabric mode string. only support one of dsaf modes like these:
                "2port-64vf",
                "6port-16rss",
-               "6port-16vf".
+               "6port-16vf",
+               "single-port".
 - interrupt-parent: the interrupt parent of this device.
 - interrupts: should contain the DSA Fabric and rcb interrupt.
 - reg: specifies base physical address(es) and size of the device registers.
-  The first region is external interface control register base and size.
-  The second region is SerDes base register and size.
+  The first region is external interface control register base and size(optional,
+  only used when subctrl-syscon does not exist). It is recommended using
+  subctrl-syscon rather than this address.
+  The second region is SerDes base register and size(optional, only used when
+  serdes-syscon in port node does not exist). It is recommended using
+  serdes-syscon rather than this address.
   The third region is the PPE register base and size.
-  The fourth region is dsa fabric base register and size.
-  The fifth region is cpld base register and size, it is not required if do not use cpld.
-- phy-handle: phy handle of physicl port, 0 if not any phy device. see ethernet.txt [1].
+  The fourth region is dsa fabric base register and size. It is not required for
+  single-port mode.
+- reg-names: may be ppe-base and(or) dsaf-base. It is used to find the
+  corresponding reg's index.
+
+- phy-handle: phy handle of physical port, 0 if not any phy device. It is optional
+  attribute. If port node exists, phy-handle in each port node will be used.
+  see ethernet.txt [1].
+- subctrl-syscon: is syscon handle for external interface control register.
+- reset-field-offset: is offset of reset field. Its value depends on the hardware
+  user manual.
 - buf-size: rx buffer size, should be 16-1024.
 - desc-num: number of description in TX and RX queue, should be 512, 1024, 2048 or 4096.
 
+- port: subnodes of dsaf. A dsaf node may contain several port nodes(Depending
+  on mode of dsaf). Port node contain some attributes listed below:
+- reg: is physical port index in one dsaf.
+- phy-handle: phy handle of physical port. It is not required if there isn't
+  phy device. see ethernet.txt [1].
+- serdes-syscon: is syscon handle for SerDes register.
+- cpld-syscon: is syscon handle + register offset pair for cpld register. It is
+  not required if there isn't cpld device.
+- port-rst-offset: is offset of reset field for each port in dsaf. Its value
+  depends on the hardware user manual.
+- port-mode-offset: is offset of port mode field for each port in dsaf. Its
+  value depends on the hardware user manual.
+
 [1] Documentation/devicetree/bindings/net/phy.txt
 
 Example:
@@ -28,11 +54,11 @@ dsaf0: dsa@c7000000 {
        compatible = "hisilicon,hns-dsaf-v1";
        mode = "6port-16rss";
        interrupt-parent = <&mbigen_dsa>;
-       reg = <0x0 0xC0000000 0x0 0x420000
-              0x0 0xC2000000 0x0 0x300000
-              0x0 0xc5000000 0x0 0x890000
+       reg = <0x0 0xc5000000 0x0 0x890000
               0x0 0xc7000000 0x0 0x60000>;
-       phy-handle = <0 0 0 0 &soc0_phy4 &soc0_phy5 0 0>;
+       reg-names = "ppe-base", "dsaf-base";
+       subctrl-syscon = <&subctrl>;
+       reset-field-offset = 0;
        interrupts = <131 4>,<132 4>, <133 4>,<134 4>,
                     <135 4>,<136 4>, <137 4>,<138 4>,
                     <139 4>,<140 4>, <141 4>,<142 4>,
@@ -43,4 +69,15 @@ dsaf0: dsa@c7000000 {
        buf-size = <4096>;
        desc-num = <1024>;
        dma-coherent;
+
+       port@0 {
+               reg = 0;
+               phy-handle = <&phy0>;
+               serdes-syscon = <&serdes>;
+       };
+
+       port@1 {
+                reg = 1;
+                serdes-syscon = <&serdes>;
+        };
 };
diff --git a/Documentation/devicetree/bindings/net/hisilicon-hns-nic.txt b/Documentation/devicetree/bindings/net/hisilicon-hns-nic.txt
index e6a9d1c..b9ff4ba 100644
--- a/Documentation/devicetree/bindings/net/hisilicon-hns-nic.txt
+++ b/Documentation/devicetree/bindings/net/hisilicon-hns-nic.txt
@@ -36,6 +36,34 @@ Required properties:
                        | | | | | |
                       external port
 
+  This attribute is remained for compatible purpose. It is not recommended to
+  use it in new code.
+
+- port-idx-in-ae: is the index of port provided by AE.
+  In NIC mode of DSAF, all 6 PHYs of service DSAF are taken as ethernet ports
+  to the CPU. The port-idx-in-ae can be 0 to 5. Here is the diagram:
+            +-----+---------------+
+            |            CPU      |
+            +-+-+-+---+-+-+-+-+-+-+
+              |    |   | | | | | |
+           debug debug   service
+           port  port     port
+           (0)   (0)     (0-5)
+
+  In Switch mode of DSAF, all 6 PHYs of service DSAF are taken as physical
+  ports connected to a LAN Switch while the CPU side assume itself have one
+  single NIC connected to this switch. In this case, the port-idx-in-ae
+  will be 0 only.
+            +-----+-----+------+------+
+            |                CPU      |
+            +-+-+-+-+-+-+-+-+-+-+-+-+-+
+              |    |     service| port(0)
+            debug debug  +------------+
+            port  port   |   switch   |
+            (0)   (0)    +-+-+-+-+-+-++
+                          | | | | | |
+                         external port
+
 - local-mac-address: mac addr of the ethernet interface
 
 Example:
@@ -43,6 +71,6 @@ Example:
        ethernet@0{
                compatible = "hisilicon,hns-nic-v1";
                ae-handle = <&dsaf0>;
-               port-id = <0>;
+               port-idx-in-ae = <0>;
                local-mac-address = [a2 14 e4 4b 56 76];
        };
diff --git a/Documentation/devicetree/bindings/net/marvell-bt-sd8xxx.txt b/Documentation/devicetree/bindings/net/marvell-bt-sd8xxx.txt
new file mode 100644
index 0000000..14aa6cf
--- /dev/null
@@ -0,0 +1,56 @@
+Marvell 8897/8997 (sd8897/sd8997) bluetooth SDIO devices
+------
+
+Required properties:
+
+  - compatible : should be one of the following:
+       * "marvell,sd8897-bt"
+       * "marvell,sd8997-bt"
+
+Optional properties:
+
+  - marvell,cal-data: Calibration data downloaded to the device during
+                     initialization. This is an array of 28 values(u8).
+
+  - marvell,wakeup-pin: It represents wakeup pin number of the bluetooth chip.
+                       firmware will use the pin to wakeup host system.
+  - marvell,wakeup-gap-ms: wakeup gap represents wakeup latency of the host
+                     platform. The value will be configured to firmware. This
+                     is needed to work chip's sleep feature as expected.
+  - interrupt-parent: phandle of the parent interrupt controller
+  - interrupts : interrupt pin number to the cpu. Driver will request an irq based
+                on this interrupt number. During system suspend, the irq will be
+                enabled so that the bluetooth chip can wakeup host platform under
+                certain condition. During system resume, the irq will be disabled
+                to make sure unnecessary interrupt is not received.
+
+Example:
+
+IRQ pin 119 is used as system wakeup source interrupt.
+wakeup pin 13 and gap 100ms are configured so that firmware can wakeup host
+using this device side pin and wakeup latency.
+calibration data is also available in below example.
+
+&mmc3 {
+       status = "okay";
+       vmmc-supply = <&wlan_en_reg>;
+       bus-width = <4>;
+       cap-power-off-card;
+       keep-power-in-suspend;
+
+       #address-cells = <1>;
+       #size-cells = <0>;
+       btmrvl: bluetooth@2 {
+               compatible = "marvell,sd8897-bt";
+               reg = <2>;
+               interrupt-parent = <&pio>;
+               interrupts = <119 IRQ_TYPE_LEVEL_LOW>;
+
+               marvell,cal-data = /bits/ 8 <
+                       0x37 0x01 0x1c 0x00 0xff 0xff 0xff 0xff 0x01 0x7f 0x04 0x02
+                       0x00 0x00 0xba 0xce 0xc0 0xc6 0x2d 0x00 0x00 0x00 0x00 0x00
+                       0x00 0x00 0xf0 0x00>;
+               marvell,wakeup-pin = <0x0d>;
+               marvell,wakeup-gap-ms = <0x64>;
+       };
+};
diff --git a/Documentation/devicetree/bindings/net/microchip,enc28j60.txt b/Documentation/devicetree/bindings/net/microchip,enc28j60.txt
new file mode 100644
index 0000000..1dc3bc7
--- /dev/null
@@ -0,0 +1,59 @@
+* Microchip ENC28J60
+
+This is a standalone 10 MBit ethernet controller with SPI interface.
+
+For each device connected to a SPI bus, define a child node within
+the SPI master node.
+
+Required properties:
+- compatible: Should be "microchip,enc28j60"
+- reg: Specify the SPI chip select the ENC28J60 is wired to
+- interrupt-parent: Specify the phandle of the source interrupt, see interrupt
+                    binding documentation for details. Usually this is the GPIO bank
+                    the interrupt line is wired to.
+- interrupts: Specify the interrupt index within the interrupt controller (referred
+              to above in interrupt-parent) and interrupt type. The ENC28J60 natively
+              generates falling edge interrupts, however, additional board logic
+              might invert the signal.
+- pinctrl-names: List of assigned state names, see pinctrl binding documentation.
+- pinctrl-0: List of phandles to configure the GPIO pin used as interrupt line,
+             see also generic and your platform specific pinctrl binding
+             documentation.
+
+Optional properties:
+- spi-max-frequency: Maximum frequency of the SPI bus when accessing the ENC28J60.
+  According to the ENC28J80 datasheet, the chip allows a maximum of 20 MHz, however,
+  board designs may need to limit this value.
+- local-mac-address: See ethernet.txt in the same directory.
+
+
+Example (for NXP i.MX28 with pin control stuff for GPIO irq):
+
+        ssp2: ssp@80014000 {
+                compatible = "fsl,imx28-spi";
+                pinctrl-names = "default";
+                pinctrl-0 = <&spi2_pins_b &spi2_sck_cfg>;
+                status = "okay";
+
+                enc28j60: ethernet@0 {
+                        compatible = "microchip,enc28j60";
+                        pinctrl-names = "default";
+                        pinctrl-0 = <&enc28j60_pins>;
+                        reg = <0>;
+                        interrupt-parent = <&gpio3>;
+                        interrupts = <3 IRQ_TYPE_EDGE_FALLING>;
+                        spi-max-frequency = <12000000>;
+                };
+        };
+
+        pinctrl@80018000 {
+                enc28j60_pins: enc28j60_pins@0 {
+                        reg = <0>;
+                        fsl,pinmux-ids = <
+                                MX28_PAD_AUART0_RTS__GPIO_3_3    /* Interrupt */
+                        >;
+                        fsl,drive-strength = <MXS_DRIVE_4mA>;
+                        fsl,voltage = <MXS_VOLTAGE_HIGH>;
+                        fsl,pull-up = <MXS_PULL_DISABLE>;
+                };
+        };
diff --git a/Documentation/devicetree/bindings/net/wireless/marvell-sd8xxx.txt b/Documentation/devicetree/bindings/net/wireless/marvell-sd8xxx.txt
new file mode 100644
index 0000000..c421aba
--- /dev/null
@@ -0,0 +1,63 @@
+Marvell 8897/8997 (sd8897/sd8997) SDIO devices
+------
+
+This node provides properties for controlling the marvell sdio wireless device.
+The node is expected to be specified as a child node to the SDIO controller that
+connects the device to the system.
+
+Required properties:
+
+  - compatible : should be one of the following:
+       * "marvell,sd8897"
+       * "marvell,sd8997"
+
+Optional properties:
+
+  - marvell,caldata* : A series of properties with the marvell,caldata prefix
+                     that represent calibration data downloaded to the device
+                     during initialization. Each is an array of unsigned 8-bit
+                     values. The properties should use the property names and
+                     corresponding array lengths below:
+       "marvell,caldata-txpwrlimit-2g" (length = 566).
+       "marvell,caldata-txpwrlimit-5g-sub0" (length = 502).
+       "marvell,caldata-txpwrlimit-5g-sub1" (length = 688).
+       "marvell,caldata-txpwrlimit-5g-sub2" (length = 750).
+       "marvell,caldata-txpwrlimit-5g-sub3" (length = 502).
+  - marvell,wakeup-pin : the device-side wakeup pin number of the wifi chip. This
+                     value is passed to the firmware, which uses the pin to wake
+                     up the host during suspend/resume.
+  - interrupt-parent: phandle of the parent interrupt controller
+  - interrupts : interrupt pin number to the CPU. The driver will request an IRQ
+                based on this number. During system suspend, the IRQ is enabled
+                so that the wifi chip can wake up the host platform under certain
+                conditions. During system resume, the IRQ is disabled to make sure
+                no unnecessary interrupts are received.
+
+Example:
+
+The example below configures Tx power limit calibration data. The
+calibration data is an array of unsigned 8-bit values whose length
+can vary between hardware versions.
+IRQ pin 38 is used as the system wakeup source interrupt, and wakeup pin 3
+is configured so that the firmware can wake up the host using this
+device-side pin.
+
+&mmc3 {
+       status = "okay";
+       vmmc-supply = <&wlan_en_reg>;
+       bus-width = <4>;
+       cap-power-off-card;
+       keep-power-in-suspend;
+
+       #address-cells = <1>;
+       #size-cells = <0>;
+       mwifiex: wifi@1 {
+               compatible = "marvell,sd8897";
+               reg = <1>;
+               interrupt-parent = <&pio>;
+               interrupts = <38 IRQ_TYPE_LEVEL_LOW>;
+
+               marvell,caldata_00_txpwrlimit_2g_cfg_set = /bits/ 8 <
+       0x01 0x00 0x06 0x00 0x08 0x02 0x89 0x01>;
+               marvell,wakeup-pin = <3>;
+       };
+};
index 50c4f9b..e3b4809 100644 (file)
@@ -8,15 +8,19 @@ Required properties:
        of memory mapped region.
 - clock-names: from common clock binding:
        Required elements: "24m"
-- rockchip,grf: phandle to the syscon managing the "general register files"
 - #phy-cells : from the generic PHY bindings, must be 0;
 
 Example:
 
-edp_phy: edp-phy {
-       compatible = "rockchip,rk3288-dp-phy";
-       rockchip,grf = <&grf>;
-       clocks = <&cru SCLK_EDP_24M>;
-       clock-names = "24m";
-       #phy-cells = <0>;
+grf: syscon@ff770000 {
+       compatible = "rockchip,rk3288-grf", "syscon", "simple-mfd";
+
+...
+
+       edp_phy: edp-phy {
+               compatible = "rockchip,rk3288-dp-phy";
+               clocks = <&cru SCLK_EDP_24M>;
+               clock-names = "24m";
+               #phy-cells = <0>;
+       };
 };
index 61916f1..555cb0f 100644 (file)
@@ -3,17 +3,23 @@ Rockchip EMMC PHY
 
 Required properties:
  - compatible: rockchip,rk3399-emmc-phy
- - rockchip,grf : phandle to the syscon managing the "general
-   register files"
  - #phy-cells: must be 0
- - reg: PHY configure reg address offset in "general
+ - reg: PHY register address offset and length in "general
    register files"
 
 Example:
 
-emmcphy: phy {
-       compatible = "rockchip,rk3399-emmc-phy";
-       rockchip,grf = <&grf>;
-       reg = <0xf780>;
-       #phy-cells = <0>;
+
+grf: syscon@ff770000 {
+       compatible = "rockchip,rk3399-grf", "syscon", "simple-mfd";
+       #address-cells = <1>;
+       #size-cells = <1>;
+
+...
+
+       emmcphy: phy@f780 {
+               compatible = "rockchip,rk3399-emmc-phy";
+               reg = <0xf780 0x20>;
+               #phy-cells = <0>;
+       };
 };
index 3f0f5ce..36ea940 100644 (file)
@@ -173,6 +173,10 @@ A few EV_ABS codes have special meanings:
     proximity of the device and while the value of the BTN_TOUCH code is 0. If
     the input device may be used freely in three dimensions, consider ABS_Z
     instead.
+  - BTN_TOOL_<name> should be set to 1 when the tool comes into detectable
+    proximity and set to 0 when the tool leaves detectable proximity.
+    BTN_TOOL_<name> signals the type of tool that is currently detected by the
+    hardware and is otherwise independent of ABS_DISTANCE and/or BTN_TOUCH.
 
 * ABS_MT_<name>:
   - Used to describe multitouch input events. Please see
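
As a minimal sketch of the BTN_TOOL_<name> rule added above: a pen-tablet
driver might report proximity as below. This is illustrative only; the helper
name, the already-registered input_dev, and the distance value are
assumptions, not part of the patch.

  #include <linux/input.h>

  /* Report a pen entering or leaving detectable proximity. */
  static void report_pen_proximity(struct input_dev *input, bool in_prox,
                                   int distance)
  {
          /* Tool type is reported independently of ABS_DISTANCE/BTN_TOUCH. */
          input_report_key(input, BTN_TOOL_PEN, in_prox ? 1 : 0);
          if (in_prox)
                  input_report_abs(input, ABS_DISTANCE, distance);
          input_sync(input);
  }
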
index 3f24df8..50b8589 100644 (file)
@@ -6,7 +6,7 @@ This is the driver for the Altera Triple-Speed Ethernet (TSE) controllers
 using the SGDMA and MSGDMA soft DMA IP components. The driver uses the
 platform bus to obtain component resources. The designs used to test this
 driver were built for a Cyclone(R) V SOC FPGA board, a Cyclone(R) V FPGA board,
-and tested with ARM and NIOS processor hosts seperately. The anticipated use
+and tested with ARM and NIOS processor hosts separately. The anticipated use
 cases are simple communications between an embedded system and an external peer
 for status and simple configuration of the embedded system.
 
@@ -65,14 +65,14 @@ Driver parameters can be also passed in command line by using:
 4.1) Transmit process
 When the driver's transmit routine is called by the kernel, it sets up a
 transmit descriptor by calling the underlying DMA transmit routine (SGDMA or
-MSGDMA), and initites a transmit operation. Once the transmit is complete, an
+MSGDMA), and initiates a transmit operation. Once the transmit is complete, an
 interrupt is driven by the transmit DMA logic. The driver handles the transmit
 completion in the context of the interrupt handling chain by recycling
 resources required to send and track the requested transmit operation.
 
 4.2) Receive process
 The driver will post receive buffers to the receive DMA logic during driver
-intialization. Receive buffers may or may not be queued depending upon the
+initialization. Receive buffers may or may not be queued depending upon the
 underlying DMA logic (MSGDMA is able to queue receive buffers, SGDMA is not able
 to queue receive buffers to the SGDMA receive logic). When a packet is
 received, the DMA logic generates an interrupt. The driver handles a receive
index 334b49e..57f52cd 100644 (file)
@@ -1880,8 +1880,8 @@ or more peers on the local network.
 
        The ARP monitor relies on the device driver itself to verify
 that traffic is flowing.  In particular, the driver must keep up to
-date the last receive time, dev->last_rx, and transmit start time,
-dev->trans_start.  If these are not updated by the driver, then the
+date the last receive time, dev->last_rx.  Drivers that use NETIF_F_LLTX
+flag must also update netdev_queue->trans_start.  If they do not, then the
 ARP monitor will immediately fail any slaves using that driver, and
 those slaves will stay down.  If network monitoring (tcpdump, etc)
 shows the ARP requests and replies on the network, then it may be that
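
A minimal sketch of the bookkeeping described above, for a NETIF_F_LLTX
driver with a single queue; the function name and queue index are
illustrative assumptions, not taken from the bonding code.

  #include <linux/netdevice.h>
  #include <linux/jiffies.h>

  static netdev_tx_t mytun_xmit(struct sk_buff *skb, struct net_device *dev)
  {
          struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

          /* LLTX drivers bypass the core path that records the TX start
           * time, so update it here or the ARP monitor will fail slaves.
           */
          txq->trans_start = jiffies;

          /* ... hand the skb to the underlying transport here ... */
          dev_kfree_skb(skb);
          return NETDEV_TX_OK;
  }
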
index 96da119..6aef0b5 100644 (file)
@@ -1095,6 +1095,87 @@ all use cases.
 
 See details of eBPF verifier in kernel/bpf/verifier.c
 
+Direct packet access
+--------------------
+In cls_bpf and act_bpf programs the verifier allows direct access to the packet
+data via skb->data and skb->data_end pointers.
+Ex:
+1:  r4 = *(u32 *)(r1 +80)  /* load skb->data_end */
+2:  r3 = *(u32 *)(r1 +76)  /* load skb->data */
+3:  r5 = r3
+4:  r5 += 14
+5:  if r5 > r4 goto pc+16
+R1=ctx R3=pkt(id=0,off=0,r=14) R4=pkt_end R5=pkt(id=0,off=14,r=14) R10=fp
+6:  r0 = *(u16 *)(r3 +12) /* access 12 and 13 bytes of the packet */
+
+This 2-byte load from the packet is safe to do, since the program author
+did check 'if (skb->data + 14 > skb->data_end) goto err' at insn #5, which
+means that in the fall-through case the register R3 (which points to skb->data)
+has at least 14 directly accessible bytes. The verifier marks it
+as R3=pkt(id=0,off=0,r=14).
+id=0 means that no additional variables were added to the register.
+off=0 means that no additional constants were added.
+r=14 is the range of safe access which means that bytes [R3, R3 + 14) are ok.
+Note that R5 is marked as R5=pkt(id=0,off=14,r=14). It also points
+to the packet data, but constant 14 was added to the register, so
+it now points to 'skb->data + 14' and the accessible range is [R5, R5 + 14 - 14)
+which is zero bytes.
+
+More complex packet access may look like:
+ R0=imm1 R1=ctx R3=pkt(id=0,off=0,r=14) R4=pkt_end R5=pkt(id=0,off=14,r=14) R10=fp
+ 6:  r0 = *(u8 *)(r3 +7) /* load 7th byte from the packet */
+ 7:  r4 = *(u8 *)(r3 +12)
+ 8:  r4 *= 14
+ 9:  r3 = *(u32 *)(r1 +76) /* load skb->data */
+10:  r3 += r4
+11:  r2 = r1
+12:  r2 <<= 48
+13:  r2 >>= 48
+14:  r3 += r2
+15:  r2 = r3
+16:  r2 += 8
+17:  r1 = *(u32 *)(r1 +80) /* load skb->data_end */
+18:  if r2 > r1 goto pc+2
+ R0=inv56 R1=pkt_end R2=pkt(id=2,off=8,r=8) R3=pkt(id=2,off=0,r=8) R4=inv52 R5=pkt(id=0,off=14,r=14) R10=fp
+19:  r1 = *(u8 *)(r3 +4)
+The state of the register R3 is R3=pkt(id=2,off=0,r=8)
+id=2 means that two 'r3 += rX' instructions were seen, so r3 points to some
+offset within a packet and since the program author did
+'if (r3 + 8 > r1) goto err' at insn #18, the safe range is [R3, R3 + 8).
+The verifier only allows 'add' operation on packet registers. Any other
+operation will set the register state to 'unknown_value' and it won't be
+available for direct packet access.
+The operation 'r3 += rX' may overflow and become less than the original
+skb->data, therefore the verifier has to prevent that. So it tracks the
+number of upper zero bits in all 'unknown_value' registers, and when it
+sees an 'r3 += rX' instruction where rX may hold more than a 16-bit value,
+it errors out with:
+"cannot add integer value with N upper zero bits to ptr_to_packet"
+Ex. after insn 'r4 = *(u8 *)(r3 +12)' (insn #7 above) the state of r4 is
+R4=inv56, which means that the upper 56 bits of the register are guaranteed
+to be zero. After insn 'r4 *= 14' the state becomes R4=inv52, since
+multiplying an 8-bit value by the constant 14 will keep the upper 52 bits
+as zero. Similarly 'r2 >>= 48' will make R2=inv48, since the shift is not
+sign extending. This logic is implemented in the evaluate_reg_alu() function.
+
+The end result is that a bpf program author can access the packet directly
+using normal C code such as:
+  void *data = (void *)(long)skb->data;
+  void *data_end = (void *)(long)skb->data_end;
+  struct eth_hdr *eth = data;
+  struct iphdr *iph = data + sizeof(*eth);
+  struct udphdr *udp = data + sizeof(*eth) + sizeof(*iph);
+
+  if (data + sizeof(*eth) + sizeof(*iph) + sizeof(*udp) > data_end)
+          return 0;
+  if (eth->h_proto != htons(ETH_P_IP))
+          return 0;
+  if (iph->protocol != IPPROTO_UDP || iph->ihl != 5)
+          return 0;
+  if (udp->dest == htons(53) || udp->source == htons(9))
+          ...;
+which makes such programs easier to write compared to the LD_ABS insn
+and significantly faster.
+
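
For reference, the fragment above can be fleshed out into a self-contained
cls_bpf program. The sketch below is illustrative only: it assumes an LLVM
BPF target, the uapi headers named in the includes, and __constant_htons
from asm/byteorder.h; the section and function names are arbitrary.

  #include <linux/bpf.h>
  #include <linux/pkt_cls.h>
  #include <linux/if_ether.h>
  #include <linux/ip.h>
  #include <linux/udp.h>
  #include <linux/in.h>
  #include <asm/byteorder.h>

  __attribute__((section("classifier"), used))
  int drop_dns(struct __sk_buff *skb)
  {
          void *data = (void *)(long)skb->data;
          void *data_end = (void *)(long)skb->data_end;
          struct ethhdr *eth = data;
          struct iphdr *iph = data + sizeof(*eth);
          struct udphdr *udp = data + sizeof(*eth) + sizeof(*iph);

          /* The verifier rejects any packet access before this check. */
          if (data + sizeof(*eth) + sizeof(*iph) + sizeof(*udp) > data_end)
                  return TC_ACT_UNSPEC;
          if (eth->h_proto != __constant_htons(ETH_P_IP))
                  return TC_ACT_UNSPEC;
          if (iph->protocol != IPPROTO_UDP || iph->ihl != 5)
                  return TC_ACT_UNSPEC;
          if (udp->dest == __constant_htons(53))
                  return TC_ACT_SHOT;     /* drop DNS queries, as an example */
          return TC_ACT_UNSPEC;
  }
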
 eBPF maps
 ---------
 'maps' is a generic storage of different types for sharing data between kernel
@@ -1293,5 +1374,5 @@ to give potential BPF hackers or security auditors a better overview of
 the underlying architecture.
 
 Jay Schulist <jschlst@samba.org>
-Daniel Borkmann <dborkman@redhat.com>
-Alexei Starovoitov <ast@plumgrid.com>
+Daniel Borkmann <daniel@iogearbox.net>
+Alexei Starovoitov <ast@kernel.org>
index 70e6275..ff630a8 100644 (file)
@@ -33,7 +33,8 @@ my_dumping_routine(struct sk_buff *skb, ...)
 {
        struct gnet_dump dump;
 
-       if (gnet_stats_start_copy(skb, TCA_STATS2, &mystruct->lock, &dump) < 0)
+       if (gnet_stats_start_copy(skb, TCA_STATS2, &mystruct->lock, &dump,
+                                 TCA_PAD) < 0)
                goto rtattr_failure;
 
        if (gnet_stats_copy_basic(&dump, &mystruct->bstats) < 0 ||
@@ -56,7 +57,8 @@ existing TLV types.
 my_dumping_routine(struct sk_buff *skb, ...)
 {
     if (gnet_stats_start_copy_compat(skb, TCA_STATS2, TCA_STATS,
-               TCA_XSTATS, &mystruct->lock, &dump) < 0)
+                                    TCA_XSTATS, &mystruct->lock, &dump,
+                                    TCA_PAD) < 0)
                goto rtattr_failure;
        ...
 }
index cf99639..14422f8 100644 (file)
@@ -8,7 +8,7 @@ Initial Release:
        This is conceptually very similar to the macvlan driver with one major
 exception of using L3 for mux-ing /demux-ing among slaves. This property makes
 the master device share the L2 with its slave devices. I have developed this
-driver in conjuntion with network namespaces and not sure if there is use case
+driver in conjunction with network namespaces and am not sure if there is a use case
 outside of it.
 
 
@@ -42,7 +42,7 @@ out. In this mode the slaves will RX/TX multicast and broadcast (if applicable)
 as well.
 
 4.2 L3 mode:
-       In this mode TX processing upto L3 happens on the stack instance attached
+       In this mode TX processing up to L3 happens on the stack instance attached
 to the slave device and packets are switched to the stack instance of the
 master device for the L2 processing and routing from that instance will be
 used before packets are queued on the outbound device. In this mode the slaves
@@ -56,7 +56,7 @@ situations defines your use case then you can choose to use ipvlan -
        (a) The Linux host that is connected to the external switch / router has
 policy configured that allows only one mac per port.
        (b) No of virtual devices created on a master exceed the mac capacity and
-puts the NIC in promiscous mode and degraded performance is a concern.
+puts the NIC in promiscuous mode and degraded performance is a concern.
        (c) If the slave device is to be put into the hostile / untrusted network
 namespace where L2 on the slave could be changed / misused.
 
index f310ede..7413eb0 100644 (file)
@@ -131,13 +131,11 @@ stack. Driver should not change behaviour based on them.
 
  * LLTX driver (deprecated for hardware drivers)
 
-NETIF_F_LLTX should be set in drivers that implement their own locking in
-transmit path or don't need locking at all (e.g. software tunnels).
-In ndo_start_xmit, it is recommended to use a try_lock and return
-NETDEV_TX_LOCKED when the spin lock fails.  The locking should also properly
-protect against other callbacks (the rules you need to find out).
+NETIF_F_LLTX is meant to be used by drivers that don't need locking at all,
+e.g. software tunnels.
 
-Don't use it for new drivers.
+This is also used in a few legacy drivers that implement their
+own locking; don't use it for new (hardware) drivers.
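
A one-line sketch of the intended use, assuming a software tunnel's setup
callback (the function name is an illustrative assumption):

  #include <linux/netdevice.h>

  static void mytun_setup(struct net_device *dev)
  {
          /* The tunnel needs no TX lock, so opt out of netif_tx_lock. */
          dev->features |= NETIF_F_LLTX;
  }
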
 
  * netns-local device
 
index 0b1cf6b..7fec206 100644 (file)
@@ -69,10 +69,9 @@ ndo_start_xmit:
 
        When the driver sets NETIF_F_LLTX in dev->features this will be
        called without holding netif_tx_lock. In this case the driver
-       has to lock by itself when needed. It is recommended to use a try lock
-       for this and return NETDEV_TX_LOCKED when the spin lock fails.
-       The locking there should also properly protect against 
-       set_rx_mode. Note that the use of NETIF_F_LLTX is deprecated.
+       has to lock by itself when needed.
+       The locking there should also properly protect against
+       set_rx_mode. WARNING: use of NETIF_F_LLTX is deprecated.
        Don't use it for new drivers.
 
        Context: Process with BHs disabled or BH (timer),
@@ -83,8 +82,6 @@ ndo_start_xmit:
        o NETDEV_TX_BUSY Cannot transmit packet, try later 
          Usually a bug, means queue start/stop flow control is broken in
          the driver. Note: the driver must NOT put the skb in its DMA ring.
-       o NETDEV_TX_LOCKED Locking failed, please retry quickly.
-         Only valid when NETIF_F_LLTX is set.
 
 ndo_tx_timeout:
        Synchronization: netif_tx_lock spinlock; all TX queues frozen.
index f4be85e..2c4e335 100644 (file)
@@ -67,12 +67,12 @@ The two basic thread commands are:
  * add_device DEVICE@NAME -- adds a single device
  * rem_device_all         -- remove all associated devices
 
-When adding a device to a thread, a corrosponding procfile is created
+When adding a device to a thread, a corresponding procfile is created
 which is used for configuring this device. Thus, device names need to
 be unique.
 
 To support adding the same device to multiple threads, which is useful
-with multi queue NICs, the device naming scheme is extended with "@":
+with multi queue NICs, the device naming scheme is extended with "@":
  device@something
 
 The part after "@" can be anything, but it is custom to use the thread
@@ -221,7 +221,7 @@ Sample scripts
 
 A collection of tutorial scripts and helpers for pktgen is in the
 samples/pktgen directory. The helper parameters.sh file support easy
-and consistant parameter parsing across the sample scripts.
+and consistent parameter parsing across the sample scripts.
 
 Usage example and help:
  ./pktgen_sample01_simple.sh -i eth4 -m 00:1B:21:3C:9D:F8 -d 192.168.8.2
index d52aa10..5da679c 100644 (file)
@@ -41,7 +41,7 @@ using an rx_handler which gives the impression that packets flow through
 the VRF device. Similarly on egress routing rules are used to send packets
 to the VRF device driver before getting sent out the actual interface. This
 allows tcpdump on a VRF device to capture all packets into and out of the
-VRF as a whole.[1] Similiarly, netfilter [2] and tc rules can be applied
+VRF as a whole.[1] Similarly, netfilter [2] and tc rules can be applied
 using the VRF device to specify rules that apply to the VRF domain as a whole.
 
 [1] Packets in the forwarded state do not flow through the device, so those
index d7aac9d..8d88e0f 100644 (file)
@@ -4,7 +4,7 @@ Krisztian <hidden@balabit.hu> and others and additional patches
 from Jamal <hadi@cyberus.ca>.
 
 The end goal for syncing is to be able to insert attributes + generate
-events so that the an SA can be safely moved from one machine to another
+events so that the SA can be safely moved from one machine to another
 for HA purposes.
 The idea is to synchronize the SA so that the takeover machine can do
 the processing of the SA as accurately as possible if it has access to it.
@@ -13,7 +13,7 @@ We already have the ability to generate SA add/del/upd events.
 These patches add ability to sync and have accurate lifetime byte (to
 ensure proper decay of SAs) and replay counters to avoid replay attacks
 with as minimal loss at failover time.
-This way a backup stays as closely uptodate as an active member.
+This way a backup stays nearly as up-to-date as an active member.
 
 Because the above items change for every packet the SA receives,
 it is possible for a lot of the events to be generated.
@@ -163,7 +163,7 @@ If you have an SA that is getting hit by traffic in bursts such that
 there is a period where the timer threshold expires with no packets
 seen, then an odd behavior is seen as follows:
 The first packet arrival after a timer expiry will trigger a timeout
-aevent; i.e we dont wait for a timeout period or a packet threshold
+event; i.e. we don't wait for a timeout period or a packet threshold
 to be reached. This is done for simplicity and efficiency reasons.
 
 -JHS
index cb03684..34a5fec 100644 (file)
@@ -581,15 +581,16 @@ Specify "[Nn]ode" for node order
 "Zone Order" orders the zonelists by zone type, then by node within each
 zone.  Specify "[Zz]one" for zone order.
 
-Specify "[Dd]efault" to request automatic configuration.  Autoconfiguration
-will select "node" order in following case.
-(1) if the DMA zone does not exist or
-(2) if the DMA zone comprises greater than 50% of the available memory or
-(3) if any node's DMA zone comprises greater than 70% of its local memory and
-    the amount of local memory is big enough.
-
-Otherwise, "zone" order will be selected. Default order is recommended unless
-this is causing problems for your system/application.
+Specify "[Dd]efault" to request automatic configuration.
+
+On 32-bit, the Normal zone needs to be preserved for allocations accessible
+by the kernel, so "zone" order will be selected.
+
+On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
+order will be selected.
+
+Default order is recommended unless this is causing problems for your
+system/application.
 
 ==============================================================
 
index c518dce..5aa7383 100644 (file)
@@ -19,7 +19,7 @@ ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
 ffffffef00000000 - ffffffff00000000 (=64 GB) EFI region mapping space
 ... unused hole ...
 ffffffff80000000 - ffffffffa0000000 (=512 MB)  kernel text mapping, from phys 0
-ffffffffa0000000 - ffffffffff5fffff (=1525 MB) module mapping space
+ffffffffa0000000 - ffffffffff5fffff (=1526 MB) module mapping space
 ffffffffff600000 - ffffffffffdfffff (=8 MB) vsyscalls
 ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole
 
@@ -31,8 +31,8 @@ vmalloc space is lazily synchronized into the different PML4 pages of
 the processes using the page fault handler, with init_level4_pgt as
 reference.
 
-Current X86-64 implementations only support 40 bits of address space,
-but we support up to 46 bits. This expands into MBZ space in the page tables.
+Current X86-64 implementations support up to 46 bits of address space (64 TB),
+which is our current limit. This expands into MBZ space in the page tables.
 
 We map EFI runtime services in the 'efi_pgd' PGD in a 64Gb large virtual
 memory window (this size is arbitrary, it can be raised later if needed).
index 37691ab..e425912 100644 (file)
@@ -2203,10 +2203,13 @@ BATMAN ADVANCED
 M:     Marek Lindner <mareklindner@neomailbox.ch>
 M:     Simon Wunderlich <sw@simonwunderlich.de>
 M:     Antonio Quartulli <a@unstable.cc>
-L:     b.a.t.m.a.n@lists.open-mesh.org
+L:     b.a.t.m.a.n@lists.open-mesh.org (moderated for non-subscribers)
 W:     https://www.open-mesh.org/
 Q:     https://patchwork.open-mesh.org/project/batman/list/
 S:     Maintained
+F:     Documentation/ABI/testing/sysfs-class-net-batman-adv
+F:     Documentation/ABI/testing/sysfs-class-net-mesh
+F:     Documentation/networking/batman-adv.txt
 F:     net/batman-adv/
 
 BAYCOM/HDLCDRV DRIVERS FOR AX.25
@@ -4745,7 +4748,7 @@ F:        drivers/platform/x86/fujitsu-tablet.c
 
 FUSE: FILESYSTEM IN USERSPACE
 M:     Miklos Szeredi <miklos@szeredi.hu>
-L:     fuse-devel@lists.sourceforge.net
+L:     linux-fsdevel@vger.kernel.org
 W:     http://fuse.sourceforge.net/
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse.git
 S:     Maintained
@@ -4904,7 +4907,7 @@ F:        net/ipv4/gre_offload.c
 F:     include/net/gre.h
 
 GRETH 10/100/1G Ethernet MAC device driver
-M:     Kristoffer Glembo <kristoffer@gaisler.com>
+M:     Andreas Larsson <andreas@gaisler.com>
 L:     netdev@vger.kernel.org
 S:     Maintained
 F:     drivers/net/ethernet/aeroflex/
@@ -5745,13 +5748,6 @@ F:       drivers/char/hw_random/ixp4xx-rng.c
 
 INTEL ETHERNET DRIVERS
 M:     Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-R:     Jesse Brandeburg <jesse.brandeburg@intel.com>
-R:     Shannon Nelson <shannon.nelson@intel.com>
-R:     Carolyn Wyborny <carolyn.wyborny@intel.com>
-R:     Don Skidmore <donald.c.skidmore@intel.com>
-R:     Bruce Allan <bruce.w.allan@intel.com>
-R:     John Ronciak <john.ronciak@intel.com>
-R:     Mitch Williams <mitch.a.williams@intel.com>
 L:     intel-wired-lan@lists.osuosl.org (moderated for non-subscribers)
 W:     http://www.intel.com/support/feedback.htm
 W:     http://e1000.sourceforge.net/
@@ -6028,7 +6024,7 @@ F:        include/scsi/*iscsi*
 
 ISCSI EXTENSIONS FOR RDMA (ISER) INITIATOR
 M:     Or Gerlitz <ogerlitz@mellanox.com>
-M:     Sagi Grimberg <sagig@mellanox.com>
+M:     Sagi Grimberg <sagi@grimberg.me>
 M:     Roi Dayan <roid@mellanox.com>
 L:     linux-rdma@vger.kernel.org
 S:     Supported
@@ -6038,7 +6034,7 @@ Q:        http://patchwork.kernel.org/project/linux-rdma/list/
 F:     drivers/infiniband/ulp/iser/
 
 ISCSI EXTENSIONS FOR RDMA (ISER) TARGET
-M:     Sagi Grimberg <sagig@mellanox.com>
+M:     Sagi Grimberg <sagi@grimberg.me>
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git master
 L:     linux-rdma@vger.kernel.org
 L:     target-devel@vger.kernel.org
@@ -6401,7 +6397,7 @@ F:        mm/kmemleak.c
 F:     mm/kmemleak-test.c
 
 KPROBES
-M:     Ananth N Mavinakayanahalli <ananth@in.ibm.com>
+M:     Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
 M:     Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
 M:     "David S. Miller" <davem@davemloft.net>
 M:     Masami Hiramatsu <mhiramat@kernel.org>
@@ -9490,7 +9486,7 @@ F:        drivers/net/wireless/realtek/rtlwifi/rtl8192ce/
 RTL8XXXU WIRELESS DRIVER (rtl8xxxu)
 M:     Jes Sorensen <Jes.Sorensen@redhat.com>
 L:     linux-wireless@vger.kernel.org
-T:     git git://git.kernel.org/pub/scm/linux/kernel/git/jes/linux.git rtl8723au-mac80211
+T:     git git://git.kernel.org/pub/scm/linux/kernel/git/jes/linux.git rtl8xxxu-devel
 S:     Maintained
 F:     drivers/net/wireless/realtek/rtl8xxxu/
 
@@ -10015,7 +10011,8 @@ F:      drivers/infiniband/hw/ocrdma/
 
 SFC NETWORK DRIVER
 M:     Solarflare linux maintainers <linux-net-drivers@solarflare.com>
-M:     Shradha Shah <sshah@solarflare.com>
+M:     Edward Cree <ecree@solarflare.com>
+M:     Bert Kenward <bkenward@solarflare.com>
 L:     netdev@vger.kernel.org
 S:     Supported
 F:     drivers/net/ethernet/sfc/
@@ -11072,6 +11069,15 @@ S:     Maintained
 F:     drivers/clk/ti/
 F:     include/linux/clk/ti.h
 
+TI ETHERNET SWITCH DRIVER (CPSW)
+M:     Mugunthan V N <mugunthanvnm@ti.com>
+R:     Grygorii Strashko <grygorii.strashko@ti.com>
+L:     linux-omap@vger.kernel.org
+L:     netdev@vger.kernel.org
+S:     Maintained
+F:     drivers/net/ethernet/ti/cpsw*
+F:     drivers/net/ethernet/ti/davinci*
+
 TI FLASH MEDIA INTERFACE DRIVER
 M:     Alex Dubov <oakad@yahoo.com>
 S:     Maintained
index 8734118..7466de6 100644 (file)
--- a/Makefile
+++ b/Makefile
@@ -1,8 +1,8 @@
 VERSION = 4
 PATCHLEVEL = 6
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
-NAME = Blurry Fish Butt
+EXTRAVERSION = -rc6
+NAME = Charred Weasel
 
 # *DOCUMENTATION*
 # To see a list of typical targets execute "make help"
@@ -1008,7 +1008,8 @@ prepare0: archprepare FORCE
 prepare: prepare0 prepare-objtool
 
 ifdef CONFIG_STACK_VALIDATION
-  has_libelf := $(shell echo "int main() {}" | $(HOSTCC) -xc -o /dev/null -lelf - &> /dev/null && echo 1 || echo 0)
+  has_libelf := $(call try-run,\
+               echo "int main() {}" | $(HOSTCC) -xc -o /dev/null -lelf -,1,0)
   ifeq ($(has_libelf),1)
     objtool_target := tools/objtool FORCE
   else
index 12d0284..ec4791e 100644 (file)
@@ -35,8 +35,10 @@ config ARC
        select NO_BOOTMEM
        select OF
        select OF_EARLY_FLATTREE
+       select OF_RESERVED_MEM
        select PERF_USE_VMALLOC
        select HAVE_DEBUG_STACKOVERFLOW
+       select HAVE_GENERIC_DMA_COHERENT
 
 config MIGHT_HAVE_PCI
        bool
index 37c2f75..d1ec7f6 100644 (file)
 #define STATUS_AD_MASK         (1<<STATUS_AD_BIT)
 #define STATUS_IE_MASK         (1<<STATUS_IE_BIT)
 
+/* status32 Bits as encoded/expected by CLRI/SETI */
+#define CLRI_STATUS_IE_BIT     4
+
+#define CLRI_STATUS_E_MASK     0xF
+#define CLRI_STATUS_IE_MASK    (1 << CLRI_STATUS_IE_BIT)
+
 #define AUX_USER_SP            0x00D
 #define AUX_IRQ_CTRL           0x00E
 #define AUX_IRQ_ACT            0x043   /* Active Intr across all levels */
@@ -100,6 +106,13 @@ static inline long arch_local_save_flags(void)
        :
        : "memory");
 
+       /* To be compatible with irq_save()/irq_restore()
+        * encode the irq bits as expected by CLRI/SETI
+        * (this was needed to make CONFIG_TRACE_IRQFLAGS work)
+        */
+       temp = (1 << 5) |
+               ((!!(temp & STATUS_IE_MASK)) << CLRI_STATUS_IE_BIT) |
+               (temp & CLRI_STATUS_E_MASK);
        return temp;
 }
 
@@ -108,7 +121,7 @@ static inline long arch_local_save_flags(void)
  */
 static inline int arch_irqs_disabled_flags(unsigned long flags)
 {
-       return !(flags & (STATUS_IE_MASK));
+       return !(flags & CLRI_STATUS_IE_MASK);
 }
 
 static inline int arch_irqs_disabled(void)
@@ -128,11 +141,32 @@ static inline void arc_softirq_clear(int irq)
 
 #else
 
+#ifdef CONFIG_TRACE_IRQFLAGS
+
+.macro TRACE_ASM_IRQ_DISABLE
+       bl      trace_hardirqs_off
+.endm
+
+.macro TRACE_ASM_IRQ_ENABLE
+       bl      trace_hardirqs_on
+.endm
+
+#else
+
+.macro TRACE_ASM_IRQ_DISABLE
+.endm
+
+.macro TRACE_ASM_IRQ_ENABLE
+.endm
+
+#endif
 .macro IRQ_DISABLE  scratch
        clri
+       TRACE_ASM_IRQ_DISABLE
 .endm
 
 .macro IRQ_ENABLE  scratch
+       TRACE_ASM_IRQ_ENABLE
        seti
 .endm
 
index c126460..7a1c124 100644 (file)
@@ -69,8 +69,11 @@ ENTRY(handle_interrupt)
 
        clri            ; To make status32.IE agree with CPU internal state
 
-       lr  r0, [ICAUSE]
+#ifdef CONFIG_TRACE_IRQFLAGS
+       TRACE_ASM_IRQ_DISABLE
+#endif
 
+       lr  r0, [ICAUSE]
        mov   blink, ret_from_exception
 
        b.d  arch_do_IRQ
@@ -169,6 +172,11 @@ END(EV_TLBProtV)
 
 .Lrestore_regs:
 
+       # Interrupts are actually disabled from this point on, but will get
+       # reenabled after we return from interrupt/exception.
+       # But irq tracer needs to be told now...
+       TRACE_ASM_IRQ_ENABLE
+
        ld      r0, [sp, PT_status32]   ; U/K mode at time of entry
        lr      r10, [AUX_IRQ_ACT]
 
index 4314339..0cb0aba 100644 (file)
@@ -341,6 +341,9 @@ END(call_do_page_fault)
 
 .Lrestore_regs:
 
+       # Interrupts are actually disabled from this point on, but will get
+       # reenabled after we return from interrupt/exception.
+       # But irq tracer needs to be told now...
        TRACE_ASM_IRQ_ENABLE
 
        lr      r10, [status32]
index 7d2c4fb..5487d0b 100644 (file)
@@ -13,6 +13,7 @@
 #ifdef CONFIG_BLK_DEV_INITRD
 #include <linux/initrd.h>
 #endif
+#include <linux/of_fdt.h>
 #include <linux/swap.h>
 #include <linux/module.h>
 #include <linux/highmem.h>
@@ -136,6 +137,9 @@ void __init setup_arch_memory(void)
                memblock_reserve(__pa(initrd_start), initrd_end - initrd_start);
 #endif
 
+       early_init_fdt_reserve_self();
+       early_init_fdt_scan_reserved_mem();
+
        memblock_dump_all();
 
        /*----------------- node/zones setup --------------------------*/
index 55ca9c7..0467846 100644 (file)
                        ti,no-idle-on-init;
                        reg = <0x50000000 0x2000>;
                        interrupts = <100>;
-                       dmas = <&edma 52>;
+                       dmas = <&edma 52 0>;
                        dma-names = "rxtx";
                        gpmc,num-cs = <7>;
                        gpmc,num-waitpins = <2>;
index 344b861..ba580a9 100644 (file)
                gpmc: gpmc@50000000 {
                        compatible = "ti,am3352-gpmc";
                        ti,hwmods = "gpmc";
-                       dmas = <&edma 52>;
+                       dmas = <&edma 52 0>;
                        dma-names = "rxtx";
                        clocks = <&l3s_gclk>;
                        clock-names = "fck";
index 0a5fc5d..4168eb9 100644 (file)
                #cooling-cells = <2>;
        };
 
-       extcon_usb1: extcon_usb1 {
-               compatible = "linux,extcon-usb-gpio";
-               id-gpio = <&gpio7 25 GPIO_ACTIVE_HIGH>;
-               pinctrl-names = "default";
-               pinctrl-0 = <&extcon_usb1_pins>;
-       };
-
        hdmi0: connector {
                compatible = "hdmi-connector";
                label = "hdmi";
                >;
        };
 
-       extcon_usb1_pins: extcon_usb1_pins {
-               pinctrl-single,pins = <
-                       DRA7XX_CORE_IOPAD(0x37ec, PIN_INPUT_PULLUP | MUX_MODE14) /* uart1_rtsn.gpio7_25 */
-               >;
-       };
-
        tpd12s015_pins: pinmux_tpd12s015_pins {
                pinctrl-single,pins = <
                        DRA7XX_CORE_IOPAD(0x37b0, PIN_OUTPUT | MUX_MODE14)              /* gpio7_10 CT_CP_HPD */
        pinctrl-0 = <&usb1_pins>;
 };
 
-&omap_dwc3_1 {
-       extcon = <&extcon_usb1>;
-};
-
 &omap_dwc3_2 {
        extcon = <&extcon_usb2>;
 };
index e0ea6a9..792a64e 100644 (file)
@@ -4,6 +4,157 @@
  * published by the Free Software Foundation.
  */
 
+&pllss {
+       /*
+        * See TRM "2.6.10 Connected Outputs of DPLLS" and
+        * "2.6.11 Connected Outputs of DPLLJ". Only clkout is
+        * connected except for hdmi and usb.
+        */
+       adpll_mpu_ck: adpll@40 {
+               #clock-cells = <1>;
+               compatible = "ti,dm814-adpll-s-clock";
+               reg = <0x40 0x40>;
+               clocks = <&devosc_ck &devosc_ck &devosc_ck>;
+               clock-names = "clkinp", "clkinpulow", "clkinphif";
+               clock-output-names = "481c5040.adpll.dcoclkldo",
+                                    "481c5040.adpll.clkout",
+                                    "481c5040.adpll.clkoutx2",
+                                    "481c5040.adpll.clkouthif";
+       };
+
+       adpll_dsp_ck: adpll@80 {
+               #clock-cells = <1>;
+               compatible = "ti,dm814-adpll-lj-clock";
+               reg = <0x80 0x30>;
+               clocks = <&devosc_ck &devosc_ck>;
+               clock-names = "clkinp", "clkinpulow";
+               clock-output-names = "481c5080.adpll.dcoclkldo",
+                                    "481c5080.adpll.clkout",
+                                    "481c5080.adpll.clkoutldo";
+       };
+
+       adpll_sgx_ck: adpll@b0 {
+               #clock-cells = <1>;
+               compatible = "ti,dm814-adpll-lj-clock";
+               reg = <0xb0 0x30>;
+               clocks = <&devosc_ck &devosc_ck>;
+               clock-names = "clkinp", "clkinpulow";
+               clock-output-names = "481c50b0.adpll.dcoclkldo",
+                                    "481c50b0.adpll.clkout",
+                                    "481c50b0.adpll.clkoutldo";
+       };
+
+       adpll_hdvic_ck: adpll@e0 {
+               #clock-cells = <1>;
+               compatible = "ti,dm814-adpll-lj-clock";
+               reg = <0xe0 0x30>;
+               clocks = <&devosc_ck &devosc_ck>;
+               clock-names = "clkinp", "clkinpulow";
+               clock-output-names = "481c50e0.adpll.dcoclkldo",
+                                    "481c50e0.adpll.clkout",
+                                    "481c50e0.adpll.clkoutldo";
+       };
+
+       adpll_l3_ck: adpll@110 {
+               #clock-cells = <1>;
+               compatible = "ti,dm814-adpll-lj-clock";
+               reg = <0x110 0x30>;
+               clocks = <&devosc_ck &devosc_ck>;
+               clock-names = "clkinp", "clkinpulow";
+               clock-output-names = "481c5110.adpll.dcoclkldo",
+                                    "481c5110.adpll.clkout",
+                                    "481c5110.adpll.clkoutldo";
+       };
+
+       adpll_isp_ck: adpll@140 {
+               #clock-cells = <1>;
+               compatible = "ti,dm814-adpll-lj-clock";
+               reg = <0x140 0x30>;
+               clocks = <&devosc_ck &devosc_ck>;
+               clock-names = "clkinp", "clkinpulow";
+               clock-output-names = "481c5140.adpll.dcoclkldo",
+                                    "481c5140.adpll.clkout",
+                                    "481c5140.adpll.clkoutldo";
+       };
+
+       adpll_dss_ck: adpll@170 {
+               #clock-cells = <1>;
+               compatible = "ti,dm814-adpll-lj-clock";
+               reg = <0x170 0x30>;
+               clocks = <&devosc_ck &devosc_ck>;
+               clock-names = "clkinp", "clkinpulow";
+               clock-output-names = "481c5170.adpll.dcoclkldo",
+                                    "481c5170.adpll.clkout",
+                                    "481c5170.adpll.clkoutldo";
+       };
+
+       adpll_video0_ck: adpll@1a0 {
+               #clock-cells = <1>;
+               compatible = "ti,dm814-adpll-lj-clock";
+               reg = <0x1a0 0x30>;
+               clocks = <&devosc_ck &devosc_ck>;
+               clock-names = "clkinp", "clkinpulow";
+               clock-output-names = "481c51a0.adpll.dcoclkldo",
+                                    "481c51a0.adpll.clkout",
+                                    "481c51a0.adpll.clkoutldo";
+       };
+
+       adpll_video1_ck: adpll@1d0 {
+               #clock-cells = <1>;
+               compatible = "ti,dm814-adpll-lj-clock";
+               reg = <0x1d0 0x30>;
+               clocks = <&devosc_ck &devosc_ck>;
+               clock-names = "clkinp", "clkinpulow";
+               clock-output-names = "481c51d0.adpll.dcoclkldo",
+                                    "481c51d0.adpll.clkout",
+                                    "481c51d0.adpll.clkoutldo";
+       };
+
+       adpll_hdmi_ck: adpll@200 {
+               #clock-cells = <1>;
+               compatible = "ti,dm814-adpll-lj-clock";
+               reg = <0x200 0x30>;
+               clocks = <&devosc_ck &devosc_ck>;
+               clock-names = "clkinp", "clkinpulow";
+               clock-output-names = "481c5200.adpll.dcoclkldo",
+                                    "481c5200.adpll.clkout",
+                                    "481c5200.adpll.clkoutldo";
+       };
+
+       adpll_audio_ck: adpll@230 {
+               #clock-cells = <1>;
+               compatible = "ti,dm814-adpll-lj-clock";
+               reg = <0x230 0x30>;
+               clocks = <&devosc_ck &devosc_ck>;
+               clock-names = "clkinp", "clkinpulow";
+               clock-output-names = "481c5230.adpll.dcoclkldo",
+                                    "481c5230.adpll.clkout",
+                                    "481c5230.adpll.clkoutldo";
+       };
+
+       adpll_usb_ck: adpll@260 {
+               #clock-cells = <1>;
+               compatible = "ti,dm814-adpll-lj-clock";
+               reg = <0x260 0x30>;
+               clocks = <&devosc_ck &devosc_ck>;
+               clock-names = "clkinp", "clkinpulow";
+               clock-output-names = "481c5260.adpll.dcoclkldo",
+                                    "481c5260.adpll.clkout",
+                                    "481c5260.adpll.clkoutldo";
+       };
+
+       adpll_ddr_ck: adpll@290 {
+               #clock-cells = <1>;
+               compatible = "ti,dm814-adpll-lj-clock";
+               reg = <0x290 0x30>;
+               clocks = <&devosc_ck &devosc_ck>;
+               clock-names = "clkinp", "clkinpulow";
+               clock-output-names = "481c5290.adpll.dcoclkldo",
+                                    "481c5290.adpll.clkout",
+                                    "481c5290.adpll.clkoutldo";
+       };
+};
+
 &pllss_clocks {
        timer1_fck: timer1_fck {
                #clock-cells = <0>;
                reg = <0x2e0>;
        };
 
+       /* CPTS_RFT_CLK in RMII_REFCLK_SRC, usually sourced from the audio PLL */
+       cpsw_cpts_rft_clk: cpsw_cpts_rft_clk {
+               #clock-cells = <0>;
+               compatible = "ti,mux-clock";
+               clocks = <&adpll_video0_ck 1
+                         &adpll_video1_ck 1
+                         &adpll_audio_ck 1>;
+               ti,bit-shift = <1>;
+               reg = <0x2e8>;
+       };
+
+       /* REVISIT: Set up with a proper mux using RMII_REFCLK_SRC */
+       cpsw_125mhz_gclk: cpsw_125mhz_gclk {
+               #clock-cells = <0>;
+               compatible = "fixed-clock";
+               clock-frequency = <125000000>;
+       };
+
        sysclk18_ck: sysclk18_ck {
                #clock-cells = <0>;
                compatible = "ti,mux-clock";
                compatible = "fixed-clock";
                clock-frequency = <1000000000>;
        };
-
-       sysclk4_ck: sysclk4_ck {
-               #clock-cells = <0>;
-               compatible = "fixed-clock";
-               clock-frequency = <222000000>;
-       };
-
-       sysclk6_ck: sysclk6_ck {
-               #clock-cells = <0>;
-               compatible = "fixed-clock";
-               clock-frequency = <100000000>;
-       };
-
-       sysclk10_ck: sysclk10_ck {
-               #clock-cells = <0>;
-               compatible = "fixed-clock";
-               clock-frequency = <48000000>;
-       };
-
-        cpsw_125mhz_gclk: cpsw_125mhz_gclk {
-               #clock-cells = <0>;
-               compatible = "fixed-clock";
-               clock-frequency = <125000000>;
-       };
-
-       cpsw_cpts_rft_clk: cpsw_cpts_rft_clk {
-               #clock-cells = <0>;
-               compatible = "fixed-clock";
-               clock-frequency = <250000000>;
-       };
-
 };
 
 &prcm_clocks {
                clock-div = <78125>;
        };
 
+       /* L4_HS 220 MHz */
+       sysclk4_ck: sysclk4_ck {
+               #clock-cells = <0>;
+               compatible = "ti,fixed-factor-clock";
+               clocks = <&adpll_l3_ck 1>;
+               ti,clock-mult = <1>;
+               ti,clock-div = <1>;
+       };
+
+       /* L4_FWCFG */
+       sysclk5_ck: sysclk5_ck {
+               #clock-cells = <0>;
+               compatible = "ti,fixed-factor-clock";
+               clocks = <&adpll_l3_ck 1>;
+               ti,clock-mult = <1>;
+               ti,clock-div = <2>;
+       };
+
+       /* L4_LS 110 MHz */
+       sysclk6_ck: sysclk6_ck {
+               #clock-cells = <0>;
+               compatible = "ti,fixed-factor-clock";
+               clocks = <&adpll_l3_ck 1>;
+               ti,clock-mult = <1>;
+               ti,clock-div = <2>;
+       };
+
+       sysclk8_ck: sysclk8_ck {
+               #clock-cells = <0>;
+               compatible = "ti,fixed-factor-clock";
+               clocks = <&adpll_usb_ck 1>;
+               ti,clock-mult = <1>;
+               ti,clock-div = <1>;
+       };
+
+       sysclk10_ck: sysclk10_ck {
+               compatible = "ti,divider-clock";
+               reg = <0x324>;
+               ti,max-div = <7>;
+               #clock-cells = <0>;
+               clocks = <&adpll_usb_ck 1>;
+       };
+
        aud_clkin0_ck: aud_clkin0_ck {
                #clock-cells = <0>;
                compatible = "fixed-clock";
index 6f98dc8..0e49741 100644 (file)
@@ -6,6 +6,32 @@
 
 #include "dm814x-clocks.dtsi"
 
+/* Compared to dm814x, dra62x does not have hdvic, l3 or dss PLLs */
+&adpll_hdvic_ck {
+       status = "disabled";
+};
+
+&adpll_l3_ck {
+       status = "disabled";
+};
+
+&adpll_dss_ck {
+       status = "disabled";
+};
+
+/* Compared to dm814x, dra62x has interconnect clocks on isp PLL */
+&sysclk4_ck {
+       clocks = <&adpll_isp_ck 1>;
+};
+
+&sysclk5_ck {
+       clocks = <&adpll_isp_ck 1>;
+};
+
+&sysclk6_ck {
+       clocks = <&adpll_isp_ck 1>;
+};
+
 /*
  * Compared to dm814x, dra62x has different shifts and more mux options.
 * Please add the extra options for sysclk_14 and 16 if really needed.
index d0bae06..ef2164a 100644 (file)
                clock-frequency = <32768>;
        };
 
-       sys_32k_ck: sys_32k_ck {
+       sys_clk32_crystal_ck: sys_clk32_crystal_ck {
                #clock-cells = <0>;
                compatible = "fixed-clock";
                clock-frequency = <32768>;
        };
 
+       sys_clk32_pseudo_ck: sys_clk32_pseudo_ck {
+               #clock-cells = <0>;
+               compatible = "fixed-factor-clock";
+               clocks = <&sys_clkin1>;
+               clock-mult = <1>;
+               clock-div = <610>;
+       };
+
        virt_12000000_ck: virt_12000000_ck {
                #clock-cells = <0>;
                compatible = "fixed-clock";
                ti,bit-shift = <22>;
                reg = <0x0558>;
        };
+
+       sys_32k_ck: sys_32k_ck {
+               #clock-cells = <0>;
+               compatible = "ti,mux-clock";
+               clocks = <&sys_clk32_crystal_ck>, <&sys_clk32_pseudo_ck>, <&sys_clk32_pseudo_ck>, <&sys_clk32_pseudo_ck>;
+               ti,bit-shift = <8>;
+               reg = <0x6c4>;
+       };
 };
index ef53305..8193139 100644 (file)
@@ -1,6 +1,6 @@
 /dts-v1/;
 
-#include <dt-bindings/interrupt-controller/arm-gic.h>
+#include <dt-bindings/interrupt-controller/irq.h>
 #include <dt-bindings/clock/qcom,gcc-msm8974.h>
 #include "skeleton.dtsi"
 
                        clock-names = "core", "iface";
                        #address-cells = <1>;
                        #size-cells = <0>;
-                       dmas = <&blsp2_dma 20>, <&blsp2_dma 21>;
-                       dma-names = "tx", "rx";
                };
 
                spmi_bus: spmi@fc4cf000 {
                        interrupt-controller;
                        #interrupt-cells = <4>;
                };
-
-               blsp2_dma: dma-controller@f9944000 {
-                       compatible = "qcom,bam-v1.4.0";
-                       reg = <0xf9944000 0x19000>;
-                       interrupts = <GIC_SPI 239 IRQ_TYPE_LEVEL_HIGH>;
-                       clocks = <&gcc GCC_BLSP2_AHB_CLK>;
-                       clock-names = "bam_clk";
-                       #dma-cells = <1>;
-                       qcom,ee = <0>;
-               };
        };
 
        smd {
index 0ad71b8..cc6e28f 100644 (file)
 };
 
 &pcie_bus_clk {
+       clock-frequency = <100000000>;
        status = "okay";
 };
 
index 6c08314..a9285d9 100644 (file)
 };
 
 &pfc {
-       pinctrl-0 = <&scif_clk_pins>;
-       pinctrl-names = "default";
-
        scif0_pins: serial0 {
                renesas,groups = "scif0_data_d";
                renesas,function = "scif0";
        };
 
-       scif_clk_pins: scif_clk {
-               renesas,groups = "scif_clk";
-               renesas,function = "scif_clk";
-       };
-
        ether_pins: ether {
                renesas,groups = "eth_link", "eth_mdio", "eth_rmii";
                renesas,function = "eth";
        status = "okay";
 };
 
-&scif_clk {
-       clock-frequency = <14745600>;
-       status = "okay";
-};
-
 &ether {
        pinctrl-0 = <&ether_pins &phy1_pins>;
        pinctrl-names = "default";
 };
 
 &pcie_bus_clk {
+       clock-frequency = <100000000>;
        status = "okay";
 };
 
index 6439f05..1cd1b6a 100644 (file)
                pcie_bus_clk: pcie_bus_clk {
                        compatible = "fixed-clock";
                        #clock-cells = <0>;
-                       clock-frequency = <100000000>;
+                       clock-frequency = <0>;
                        clock-output-names = "pcie_bus";
-                       status = "disabled";
                };
 
                /* External SCIF clock */
                        #clock-cells = <0>;
                        /* This value must be overridden by the board. */
                        clock-frequency = <0>;
-                       status = "disabled";
                };
 
                /* External USB clock - can be overridden by the board */
                        /* This value must be overridden by the board. */
                        clock-frequency = <0>;
                        clock-output-names = "can_clk";
-                       status = "disabled";
                };
 
                /* Special CPG clocks */
index e11d99d..690352d 100644 (file)
@@ -91,10 +91,7 @@ CONFIG_SATA_AHCI=y
 CONFIG_SATA_MV=y
 CONFIG_NETDEVICES=y
 CONFIG_NET_DSA_MV88E6060=y
-CONFIG_NET_DSA_MV88E6131=y
-CONFIG_NET_DSA_MV88E6123=y
-CONFIG_NET_DSA_MV88E6171=y
-CONFIG_NET_DSA_MV88E6352=y
+CONFIG_NET_DSA_MV88E6XXX=y
 CONFIG_MV643XX_ETH=y
 CONFIG_R8169=y
 CONFIG_MARVELL_PHY=y
index dc5797a..6492407 100644 (file)
@@ -66,7 +66,7 @@ CONFIG_SATA_AHCI=y
 CONFIG_AHCI_MVEBU=y
 CONFIG_SATA_MV=y
 CONFIG_NETDEVICES=y
-CONFIG_NET_DSA_MV88E6171=y
+CONFIG_NET_DSA_MV88E6XXX=y
 CONFIG_MV643XX_ETH=y
 CONFIG_MVNETA=y
 CONFIG_MVPP2=y
index 6a5bc27..27a70a7 100644 (file)
@@ -85,8 +85,7 @@ CONFIG_ATA=y
 CONFIG_SATA_MV=y
 CONFIG_NETDEVICES=y
 CONFIG_MII=y
-CONFIG_NET_DSA_MV88E6131=y
-CONFIG_NET_DSA_MV88E6123=y
+CONFIG_NET_DSA_MV88E6XXX=y
 CONFIG_MV643XX_ETH=y
 CONFIG_MARVELL_PHY=y
 # CONFIG_INPUT_MOUSEDEV is not set
index a5edd7d..3d039ef 100644 (file)
@@ -71,6 +71,7 @@ struct platform_device *__init imx_add_sdhci_esdhc_imx(
        if (!pdata)
                pdata = &default_esdhc_pdata;
 
-       return imx_add_platform_device(data->devid, data->id, res,
-                       ARRAY_SIZE(res), pdata, sizeof(*pdata));
+       return imx_add_platform_device_dmamask(data->devid, data->id, res,
+                       ARRAY_SIZE(res), pdata, sizeof(*pdata),
+                       DMA_BIT_MASK(32));
 }
index 7581e03..ef9ed36 100644 (file)
@@ -461,7 +461,7 @@ static struct clockdomain ipu_7xx_clkdm = {
        .cm_inst          = DRA7XX_CM_CORE_AON_IPU_INST,
        .clkdm_offs       = DRA7XX_CM_CORE_AON_IPU_IPU_CDOFFS,
        .dep_bit          = DRA7XX_IPU_STATDEP_SHIFT,
-       .flags            = CLKDM_CAN_HWSUP_SWSUP,
+       .flags            = CLKDM_CAN_SWSUP,
 };
 
 static struct clockdomain mpu1_7xx_clkdm = {
index 9821be6..49de4dd 100644 (file)
@@ -737,7 +737,8 @@ void __init omap5_init_late(void)
 #ifdef CONFIG_SOC_DRA7XX
 void __init dra7xx_init_early(void)
 {
-       omap2_set_globals_tap(-1, OMAP2_L4_IO_ADDRESS(DRA7XX_TAP_BASE));
+       omap2_set_globals_tap(DRA7XX_CLASS,
+                             OMAP2_L4_IO_ADDRESS(DRA7XX_TAP_BASE));
        omap2_set_globals_prcm_mpu(OMAP2_L4_IO_ADDRESS(OMAP54XX_PRCM_MPU_BASE));
        omap2_control_base_init();
        omap4_pm_init_early();
index f397bd6..2c04f27 100644 (file)
@@ -274,6 +274,10 @@ static inline void omap5_irq_save_context(void)
  */
 static void irq_save_context(void)
 {
+       /* DRA7 has no SAR to save */
+       if (soc_is_dra7xx())
+               return;
+
        if (!sar_base)
                sar_base = omap4_get_sar_ram_base();
 
@@ -290,6 +294,9 @@ static void irq_sar_clear(void)
 {
        u32 val;
        u32 offset = SAR_BACKUP_STATUS_OFFSET;
+       /* DRA7 has no SAR to save */
+       if (soc_is_dra7xx())
+               return;
 
        if (soc_is_omap54xx())
                offset = OMAP5_SAR_BACKUP_STATUS_OFFSET;
index 2dbd378..d44e0e2 100644 (file)
@@ -198,7 +198,6 @@ void omap_sram_idle(void)
        int per_next_state = PWRDM_POWER_ON;
        int core_next_state = PWRDM_POWER_ON;
        int per_going_off;
-       int core_prev_state;
        u32 sdrc_pwr = 0;
 
        mpu_next_state = pwrdm_read_next_pwrst(mpu_pwrdm);
@@ -278,16 +277,20 @@ void omap_sram_idle(void)
                sdrc_write_reg(sdrc_pwr, SDRC_POWER);
 
        /* CORE */
-       if (core_next_state < PWRDM_POWER_ON) {
-               core_prev_state = pwrdm_read_prev_pwrst(core_pwrdm);
-               if (core_prev_state == PWRDM_POWER_OFF) {
-                       omap3_core_restore_context();
-                       omap3_cm_restore_context();
-                       omap3_sram_restore_context();
-                       omap2_sms_restore_context();
-               }
+       if (core_next_state < PWRDM_POWER_ON &&
+           pwrdm_read_prev_pwrst(core_pwrdm) == PWRDM_POWER_OFF) {
+               omap3_core_restore_context();
+               omap3_cm_restore_context();
+               omap3_sram_restore_context();
+               omap2_sms_restore_context();
+       } else {
+               /*
+                * In off-mode resume path above, omap3_core_restore_context
+                * also handles the INTC autoidle restore done here so limit
+                * this to non-off mode resume paths so we don't do it twice.
+                */
+               omap3_intc_resume_idle();
        }
-       omap3_intc_resume_idle();
 
        pwrdm_post_transition(NULL);
 
index ad008e4..67d79f9 100644 (file)
@@ -40,8 +40,7 @@ static void __init shmobile_setup_delay_hz(unsigned int max_cpu_core_hz,
 void __init shmobile_init_delay(void)
 {
        struct device_node *np, *cpus;
-       bool is_a7_a8_a9 = false;
-       bool is_a15 = false;
+       unsigned int div = 0;
        bool has_arch_timer = false;
        u32 max_freq = 0;
 
@@ -55,27 +54,22 @@ void __init shmobile_init_delay(void)
                if (!of_property_read_u32(np, "clock-frequency", &freq))
                        max_freq = max(max_freq, freq);
 
-               if (of_device_is_compatible(np, "arm,cortex-a8") ||
-                   of_device_is_compatible(np, "arm,cortex-a9")) {
-                       is_a7_a8_a9 = true;
-               } else if (of_device_is_compatible(np, "arm,cortex-a7")) {
-                       is_a7_a8_a9 = true;
-                       has_arch_timer = true;
-               } else if (of_device_is_compatible(np, "arm,cortex-a15")) {
-                       is_a15 = true;
+               if (of_device_is_compatible(np, "arm,cortex-a8")) {
+                       div = 2;
+               } else if (of_device_is_compatible(np, "arm,cortex-a9")) {
+                       div = 1;
+               } else if (of_device_is_compatible(np, "arm,cortex-a7") ||
+                        of_device_is_compatible(np, "arm,cortex-a15")) {
+                       div = 1;
                        has_arch_timer = true;
                }
        }
 
        of_node_put(cpus);
 
-       if (!max_freq)
+       if (!max_freq || !div)
                return;
 
-       if (!has_arch_timer || !IS_ENABLED(CONFIG_ARM_ARCH_TIMER)) {
-               if (is_a7_a8_a9)
-                       shmobile_setup_delay_hz(max_freq, 1, 3);
-               else if (is_a15)
-                       shmobile_setup_delay_hz(max_freq, 2, 4);
-       }
+       if (!has_arch_timer || !IS_ENABLED(CONFIG_ARM_ARCH_TIMER))
+               shmobile_setup_delay_hz(max_freq, 1, div);
 }
index a055a5d..ba04877 100644 (file)
                                     <0 113 4>,
                                     <0 114 4>,
                                     <0 115 4>;
+                       channel = <12>;
                        port-id = <1>;
                        dma-coherent;
                        clocks = <&xge1clk 0>;
index ae4a173..5147d76 100644 (file)
                                     <0x0 0x65 0x4>,
                                     <0x0 0x66 0x4>,
                                     <0x0 0x67 0x4>;
+                       channel = <0>;
                        dma-coherent;
                        clocks = <&xge0clk 0>;
                        /* mac address will be overwritten by the bootloader */
index 933cba3..b6a130c 100644 (file)
@@ -24,17 +24,19 @@ soc0: soc@000000000 {
        };
 
        dsaf0: dsa@c7000000 {
+               #address-cells = <1>;
+               #size-cells = <0>;
                compatible = "hisilicon,hns-dsaf-v1";
                mode = "6port-16rss";
                interrupt-parent = <&mbigen_dsa>;
 
-               reg = <0x0 0xC0000000 0x0 0x420000
-                      0x0 0xC2000000 0x0 0x300000
-                      0x0 0xc5000000 0x0 0x890000
+               reg = <0x0 0xc5000000 0x0 0x890000
                       0x0 0xc7000000 0x0 0x60000
                       >;
 
-               phy-handle = <0 0 0 0 &soc0_phy0 &soc0_phy1 0 0>;
+               reg-names = "ppe-base","dsaf-base";
+               subctrl-syscon = <&dsaf_subctrl>;
+               reset-field-offset = <0>;
                interrupts = <
                        /* [14] ge fifo err 8 / xge 6**/
                        149 0x4 150 0x4 151 0x4 152 0x4
@@ -122,12 +124,31 @@ soc0: soc@000000000 {
                buf-size = <4096>;
                desc-num = <1024>;
                dma-coherent;
+
+               port@0 {
+                       reg = <0>;
+                       serdes-syscon = <&serdes_ctrl0>;
+               };
+               port@1 {
+                       reg = <1>;
+                       serdes-syscon = <&serdes_ctrl0>;
+               };
+               port@4 {
+                       reg = <4>;
+                       phy-handle = <&soc0_phy0>;
+                       serdes-syscon = <&serdes_ctrl1>;
+               };
+               port@5 {
+                       reg = <5>;
+                       phy-handle = <&soc0_phy1>;
+                       serdes-syscon = <&serdes_ctrl1>;
+               };
        };
 
        eth0: ethernet@0{
                compatible = "hisilicon,hns-nic-v1";
                ae-handle = <&dsaf0>;
-               port-id = <0>;
+               port-idx-in-ae = <0>;
                local-mac-address = [00 00 00 01 00 58];
                status = "disabled";
                dma-coherent;
@@ -135,56 +156,25 @@ soc0: soc@000000000 {
        eth1: ethernet@1{
                compatible = "hisilicon,hns-nic-v1";
                ae-handle = <&dsaf0>;
-               port-id = <1>;
+               port-idx-in-ae = <1>;
+               local-mac-address = [00 00 00 01 00 59];
                status = "disabled";
                dma-coherent;
        };
-       eth2: ethernet@2{
+       eth2: ethernet@4{
                compatible = "hisilicon,hns-nic-v1";
                ae-handle = <&dsaf0>;
-               port-id = <2>;
+               port-idx-in-ae = <4>;
                local-mac-address = [00 00 00 01 00 5a];
                status = "disabled";
                dma-coherent;
        };
-       eth3: ethernet@3{
+       eth3: ethernet@5{
                compatible = "hisilicon,hns-nic-v1";
                ae-handle = <&dsaf0>;
-               port-id = <3>;
+               port-idx-in-ae = <5>;
                local-mac-address = [00 00 00 01 00 5b];
                status = "disabled";
                dma-coherent;
        };
-       eth4: ethernet@4{
-               compatible = "hisilicon,hns-nic-v1";
-               ae-handle = <&dsaf0>;
-               port-id = <4>;
-               local-mac-address = [00 00 00 01 00 5c];
-               status = "disabled";
-               dma-coherent;
-       };
-       eth5: ethernet@5{
-               compatible = "hisilicon,hns-nic-v1";
-               ae-handle = <&dsaf0>;
-               port-id = <5>;
-               local-mac-address = [00 00 00 01 00 5d];
-               status = "disabled";
-               dma-coherent;
-       };
-       eth6: ethernet@6{
-               compatible = "hisilicon,hns-nic-v1";
-               ae-handle = <&dsaf0>;
-               port-id = <6>;
-               local-mac-address = [00 00 00 01 00 5e];
-               status = "disabled";
-               dma-coherent;
-       };
-       eth7: ethernet@7{
-               compatible = "hisilicon,hns-nic-v1";
-               ae-handle = <&dsaf0>;
-               port-id = <7>;
-               local-mac-address = [00 00 00 01 00 5f];
-               status = "disabled";
-               dma-coherent;
-       };
 };
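
The hunk above renames the NIC's "port-id" property to "port-idx-in-ae", so ports are now numbered within their application engine (note eth2/eth3 moving to indices 4 and 5) rather than across the whole DSAF. A minimal sketch of how a driver could parse the renamed property with the standard DT accessor -- the node pointer np and the fallback to the old name are illustrative assumptions:

    u32 port_idx;
    int err;

    /* prefer the new per-AE index, fall back to the legacy name */
    err = of_property_read_u32(np, "port-idx-in-ae", &port_idx);
    if (err)
            err = of_property_read_u32(np, "port-id", &port_idx);
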
index 727ae5f..b0ed443 100644 (file)
@@ -70,7 +70,6 @@
                i2c3 = &i2c3;
                i2c4 = &i2c4;
                i2c5 = &i2c5;
-               i2c6 = &i2c6;
        };
 };
 
index e682a3f..651c9d9 100644 (file)
 
                i2c2: i2c@58782000 {
                        compatible = "socionext,uniphier-fi2c";
-                       status = "disabled";
                        reg = <0x58782000 0x80>;
                        #address-cells = <1>;
                        #size-cells = <0>;
                        interrupts = <0 43 4>;
-                       pinctrl-names = "default";
-                       pinctrl-0 = <&pinctrl_i2c2>;
                        clocks = <&i2c_clk>;
-                       clock-frequency = <100000>;
+                       clock-frequency = <400000>;
                };
 
                i2c3: i2c@58783000 {
 
                i2c4: i2c@58784000 {
                        compatible = "socionext,uniphier-fi2c";
+                       status = "disabled";
                        reg = <0x58784000 0x80>;
                        #address-cells = <1>;
                        #size-cells = <0>;
                        interrupts = <0 45 4>;
+                       pinctrl-names = "default";
+                       pinctrl-0 = <&pinctrl_i2c4>;
                        clocks = <&i2c_clk>;
-                       clock-frequency = <400000>;
+                       clock-frequency = <100000>;
                };
 
                i2c5: i2c@58785000 {
                        clock-frequency = <400000>;
                };
 
-               i2c6: i2c@58786000 {
-                       compatible = "socionext,uniphier-fi2c";
-                       reg = <0x58786000 0x80>;
-                       #address-cells = <1>;
-                       #size-cells = <0>;
-                       interrupts = <0 26 4>;
-                       clocks = <&i2c_clk>;
-                       clock-frequency = <400000>;
-               };
-
                system_bus: system-bus@58c00000 {
                        compatible = "socionext,uniphier-system-bus";
                        status = "disabled";
index 4203d5f..85da0f5 100644 (file)
@@ -588,6 +588,15 @@ set_hcr:
        msr     vpidr_el2, x0
        msr     vmpidr_el2, x1
 
+       /*
+        * When VHE is not in use, early init of EL2 and EL1 needs to be
+        * done here.
+        * When VHE _is_ in use, EL1 will not be used in the host and
+        * requires no configuration, and all non-hyp-specific EL2 setup
+        * will be done via the _EL1 system register aliases in __cpu_setup.
+        */
+       cbnz    x2, 1f
+
        /* sctlr_el1 */
        mov     x0, #0x0800                     // Set/clear RES{1,0} bits
 CPU_BE(        movk    x0, #0x33d0, lsl #16    )       // Set EE and E0E on BE systems
@@ -597,6 +606,7 @@ CPU_LE(     movk    x0, #0x30d0, lsl #16    )       // Clear EE and E0E on LE systems
        /* Coprocessor traps. */
        mov     x0, #0x33ff
        msr     cptr_el2, x0                    // Disable copro. traps to EL2
+1:
 
 #ifdef CONFIG_COMPAT
        msr     hstr_el2, xzr                   // Disable CP15 traps to EL2
@@ -734,7 +744,8 @@ ENDPROC(__secondary_switched)
 
        .macro  update_early_cpu_boot_status status, tmp1, tmp2
        mov     \tmp2, #\status
-       str_l   \tmp2, __early_cpu_boot_status, \tmp1
+       adr_l   \tmp1, __early_cpu_boot_status
+       str     \tmp2, [\tmp1]
        dmb     sy
        dc      ivac, \tmp1                     // Invalidate potentially stale cache line
        .endm
index aef3605..18a71bc 100644 (file)
@@ -52,6 +52,7 @@ static void write_pen_release(u64 val)
 static int smp_spin_table_cpu_init(unsigned int cpu)
 {
        struct device_node *dn;
+       int ret;
 
        dn = of_get_cpu_node(cpu, NULL);
        if (!dn)
@@ -60,15 +61,15 @@ static int smp_spin_table_cpu_init(unsigned int cpu)
        /*
         * Determine the address from which the CPU is polling.
         */
-       if (of_property_read_u64(dn, "cpu-release-addr",
-                                &cpu_release_addr[cpu])) {
+       ret = of_property_read_u64(dn, "cpu-release-addr",
+                                  &cpu_release_addr[cpu]);
+       if (ret)
                pr_err("CPU %d: missing or invalid cpu-release-addr property\n",
                       cpu);
 
-               return -1;
-       }
+       of_node_put(dn);
 
-       return 0;
+       return ret;
 }
 
 static int smp_spin_table_cpu_prepare(unsigned int cpu)
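
The rework above also fixes a device_node reference leak: of_get_cpu_node() returns the node with its refcount raised, so every exit path must pair it with of_node_put(). The idiom in isolation, a sketch with the error value chosen for illustration:

    struct device_node *dn = of_get_cpu_node(cpu, NULL);
    u64 addr;
    int ret;

    if (!dn)
            return -ENODEV;                 /* illustrative error code */
    ret = of_property_read_u64(dn, "cpu-release-addr", &addr);
    of_node_put(dn);                        /* drop of_get_cpu_node()'s ref */
    return ret;
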
index c2cfcb1..2fcefe7 100644 (file)
@@ -68,7 +68,7 @@ void *memset(void *s, int c, size_t count)
                  "=r" (charcnt),       /* %1  Output */
                  "=r" (dwordcnt),      /* %2  Output */
                  "=r" (fill8reg),      /* %3  Output */
-                 "=r" (wrkrega)        /* %4  Output */
+                 "=&r" (wrkrega)       /* %4  Output only */
                : "r" (c),              /* %5  Input */
                  "0" (s),              /* %0  Input/Output */
                  "1" (count)           /* %1  Input/Output */
index 3fa9df7..2fc5d4d 100644 (file)
@@ -384,3 +384,5 @@ SYSCALL(ni_syscall)
 SYSCALL(ni_syscall)
 SYSCALL(mlock2)
 SYSCALL(copy_file_range)
+COMPAT_SYS_SPU(preadv2)
+COMPAT_SYS_SPU(pwritev2)
index 1f2594d..cf12c58 100644 (file)
@@ -12,7 +12,7 @@
 #include <uapi/asm/unistd.h>
 
 
-#define NR_syscalls            380
+#define NR_syscalls            382
 
 #define __NR__exit __NR_exit
 
index 8dde199..f63c96c 100644 (file)
@@ -31,6 +31,7 @@
 #define PPC_FEATURE_PSERIES_PERFMON_COMPAT \
                                        0x00000040
 
+/* Reserved - do not use               0x00000004 */
 #define PPC_FEATURE_TRUE_LE            0x00000002
 #define PPC_FEATURE_PPC_LE             0x00000001
 
index 940290d..e9f5f41 100644 (file)
 #define __NR_membarrier                365
 #define __NR_mlock2            378
 #define __NR_copy_file_range   379
+#define __NR_preadv2           380
+#define __NR_pwritev2          381
 
 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
index 7030b03..a15fe1d 100644 (file)
@@ -148,23 +148,25 @@ static struct ibm_pa_feature {
        unsigned long   cpu_features;   /* CPU_FTR_xxx bit */
        unsigned long   mmu_features;   /* MMU_FTR_xxx bit */
        unsigned int    cpu_user_ftrs;  /* PPC_FEATURE_xxx bit */
+       unsigned int    cpu_user_ftrs2; /* PPC_FEATURE2_xxx bit */
        unsigned char   pabyte;         /* byte number in ibm,pa-features */
        unsigned char   pabit;          /* bit number (big-endian) */
        unsigned char   invert;         /* if 1, pa bit set => clear feature */
 } ibm_pa_features[] __initdata = {
-       {0, 0, PPC_FEATURE_HAS_MMU,     0, 0, 0},
-       {0, 0, PPC_FEATURE_HAS_FPU,     0, 1, 0},
-       {CPU_FTR_CTRL, 0, 0,            0, 3, 0},
-       {CPU_FTR_NOEXECUTE, 0, 0,       0, 6, 0},
-       {CPU_FTR_NODSISRALIGN, 0, 0,    1, 1, 1},
-       {0, MMU_FTR_CI_LARGE_PAGE, 0,   1, 2, 0},
-       {CPU_FTR_REAL_LE, PPC_FEATURE_TRUE_LE, 5, 0, 0},
+       {0, 0, PPC_FEATURE_HAS_MMU, 0,          0, 0, 0},
+       {0, 0, PPC_FEATURE_HAS_FPU, 0,          0, 1, 0},
+       {CPU_FTR_CTRL, 0, 0, 0,                 0, 3, 0},
+       {CPU_FTR_NOEXECUTE, 0, 0, 0,            0, 6, 0},
+       {CPU_FTR_NODSISRALIGN, 0, 0, 0,         1, 1, 1},
+       {0, MMU_FTR_CI_LARGE_PAGE, 0, 0,                1, 2, 0},
+       {CPU_FTR_REAL_LE, 0, PPC_FEATURE_TRUE_LE, 0, 5, 0, 0},
        /*
-        * If the kernel doesn't support TM (ie. CONFIG_PPC_TRANSACTIONAL_MEM=n),
-        * we don't want to turn on CPU_FTR_TM here, so we use CPU_FTR_TM_COMP
-        * which is 0 if the kernel doesn't support TM.
+        * If the kernel doesn't support TM (ie CONFIG_PPC_TRANSACTIONAL_MEM=n),
+        * we don't want to turn on TM here, so we use the *_COMP versions
+        * which are 0 if the kernel doesn't support TM.
         */
-       {CPU_FTR_TM_COMP, 0, 0,         22, 0, 0},
+       {CPU_FTR_TM_COMP, 0, 0,
+        PPC_FEATURE2_HTM_COMP|PPC_FEATURE2_HTM_NOSC_COMP, 22, 0, 0},
 };
 
 static void __init scan_features(unsigned long node, const unsigned char *ftrs,
@@ -195,10 +197,12 @@ static void __init scan_features(unsigned long node, const unsigned char *ftrs,
                if (bit ^ fp->invert) {
                        cur_cpu_spec->cpu_features |= fp->cpu_features;
                        cur_cpu_spec->cpu_user_features |= fp->cpu_user_ftrs;
+                       cur_cpu_spec->cpu_user_features2 |= fp->cpu_user_ftrs2;
                        cur_cpu_spec->mmu_features |= fp->mmu_features;
                } else {
                        cur_cpu_spec->cpu_features &= ~fp->cpu_features;
                        cur_cpu_spec->cpu_user_features &= ~fp->cpu_user_ftrs;
+                       cur_cpu_spec->cpu_user_features2 &= ~fp->cpu_user_ftrs2;
                        cur_cpu_spec->mmu_features &= ~fp->mmu_features;
                }
        }
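
Each table row names a byte and (big-endian) bit inside the firmware's ibm,pa-features property; scan_features() tests that bit and, per the branch shown above, sets or clears the matching feature words, now including the new cpu_user_ftrs2 mask. Ignoring the property's length header, the test amounts to this sketch (bit order assumed from the struct's "bit number (big-endian)" comment):

    unsigned char byte = ftrs[fp->pabyte];
    int bit = (byte >> (7 - fp->pabit)) & 1;    /* big-endian bit numbering */

    if (bit ^ fp->invert)
            /* set cpu_features / cpu_user_ftrs / cpu_user_ftrs2 */ ;
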
index d29ad95..081b2ad 100644 (file)
@@ -11,7 +11,7 @@ typedef struct {
        spinlock_t list_lock;
        struct list_head pgtable_list;
        struct list_head gmap_list;
-       unsigned long asce_bits;
+       unsigned long asce;
        unsigned long asce_limit;
        unsigned long vdso_base;
        /* The mmu context allocates 4K page tables. */
index d321469..c837b79 100644 (file)
@@ -26,12 +26,28 @@ static inline int init_new_context(struct task_struct *tsk,
        mm->context.has_pgste = 0;
        mm->context.use_skey = 0;
 #endif
-       if (mm->context.asce_limit == 0) {
+       switch (mm->context.asce_limit) {
+       case 1UL << 42:
+               /*
+                * forked 3-level task, fall through to set new asce with new
+                * mm->pgd
+                */
+       case 0:
                /* context created by exec, set asce limit to 4TB */
-               mm->context.asce_bits = _ASCE_TABLE_LENGTH |
-                       _ASCE_USER_BITS | _ASCE_TYPE_REGION3;
                mm->context.asce_limit = STACK_TOP_MAX;
-       } else if (mm->context.asce_limit == (1UL << 31)) {
+               mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
+                                  _ASCE_USER_BITS | _ASCE_TYPE_REGION3;
+               break;
+       case 1UL << 53:
+               /* forked 4-level task, set new asce with new mm->pgd */
+               mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
+                                  _ASCE_USER_BITS | _ASCE_TYPE_REGION2;
+               break;
+       case 1UL << 31:
+               /* forked 2-level compat task, set new asce with new mm->pgd */
+               mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
+                                  _ASCE_USER_BITS | _ASCE_TYPE_SEGMENT;
+               /* pgd_alloc() did not increase mm->nr_pmds */
                mm_inc_nr_pmds(mm);
        }
        crst_table_init((unsigned long *) mm->pgd, pgd_entry_type(mm));
@@ -42,7 +58,7 @@ static inline int init_new_context(struct task_struct *tsk,
 
 static inline void set_user_asce(struct mm_struct *mm)
 {
-       S390_lowcore.user_asce = mm->context.asce_bits | __pa(mm->pgd);
+       S390_lowcore.user_asce = mm->context.asce;
        if (current->thread.mm_segment.ar4)
                __ctl_load(S390_lowcore.user_asce, 7, 7);
        set_cpu_flag(CIF_ASCE);
@@ -71,7 +87,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 {
        int cpu = smp_processor_id();
 
-       S390_lowcore.user_asce = next->context.asce_bits | __pa(next->pgd);
+       S390_lowcore.user_asce = next->context.asce;
        if (prev == next)
                return;
        if (MACHINE_HAS_TLB_LC)
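
The s390 rework replaces the split asce_bits + __pa(pgd) representation with one precomputed context.asce, so set_user_asce() and switch_mm() become plain loads. Every case above builds the value the same way; a hedged helper capturing the pattern (names mirror the macros in the diff):

    /* sketch: a complete ASCE is the pgd origin plus length/user/type bits */
    static unsigned long build_asce(struct mm_struct *mm, unsigned long type)
    {
            return __pa(mm->pgd) | _ASCE_TABLE_LENGTH | _ASCE_USER_BITS | type;
    }

    /* e.g. mm->context.asce = build_asce(mm, _ASCE_TYPE_REGION3); */
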
index 9b3d9b6..da34cb6 100644 (file)
@@ -52,8 +52,8 @@ static inline unsigned long pgd_entry_type(struct mm_struct *mm)
        return _REGION2_ENTRY_EMPTY;
 }
 
-int crst_table_upgrade(struct mm_struct *, unsigned long limit);
-void crst_table_downgrade(struct mm_struct *, unsigned long limit);
+int crst_table_upgrade(struct mm_struct *);
+void crst_table_downgrade(struct mm_struct *);
 
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
 {
index d6fd22e..18cdede 100644 (file)
@@ -175,7 +175,7 @@ extern __vector128 init_task_fpu_regs[__NUM_VXRS];
        regs->psw.mask  = PSW_USER_BITS | PSW_MASK_BA;                  \
        regs->psw.addr  = new_psw;                                      \
        regs->gprs[15]  = new_stackp;                                   \
-       crst_table_downgrade(current->mm, 1UL << 31);                   \
+       crst_table_downgrade(current->mm);                              \
        execve_tail();                                                  \
 } while (0)
 
index ca148f7..a2e6ef3 100644 (file)
@@ -110,8 +110,7 @@ static inline void __tlb_flush_asce(struct mm_struct *mm, unsigned long asce)
 static inline void __tlb_flush_kernel(void)
 {
        if (MACHINE_HAS_IDTE)
-               __tlb_flush_idte((unsigned long) init_mm.pgd |
-                                init_mm.context.asce_bits);
+               __tlb_flush_idte(init_mm.context.asce);
        else
                __tlb_flush_global();
 }
@@ -133,8 +132,7 @@ static inline void __tlb_flush_asce(struct mm_struct *mm, unsigned long asce)
 static inline void __tlb_flush_kernel(void)
 {
        if (MACHINE_HAS_TLB_LC)
-               __tlb_flush_idte_local((unsigned long) init_mm.pgd |
-                                      init_mm.context.asce_bits);
+               __tlb_flush_idte_local(init_mm.context.asce);
        else
                __tlb_flush_local();
 }
@@ -148,8 +146,7 @@ static inline void __tlb_flush_mm(struct mm_struct * mm)
         * only ran on the local cpu.
         */
        if (MACHINE_HAS_IDTE && list_empty(&mm->context.gmap_list))
-               __tlb_flush_asce(mm, (unsigned long) mm->pgd |
-                                mm->context.asce_bits);
+               __tlb_flush_asce(mm, mm->context.asce);
        else
                __tlb_flush_full(mm);
 }
index c7b0451..2489b2e 100644 (file)
@@ -89,7 +89,8 @@ void __init paging_init(void)
                asce_bits = _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH;
                pgd_type = _REGION3_ENTRY_EMPTY;
        }
-       S390_lowcore.kernel_asce = (__pa(init_mm.pgd) & PAGE_MASK) | asce_bits;
+       init_mm.context.asce = (__pa(init_mm.pgd) & PAGE_MASK) | asce_bits;
+       S390_lowcore.kernel_asce = init_mm.context.asce;
        clear_table((unsigned long *) init_mm.pgd, pgd_type,
                    sizeof(unsigned long)*2048);
        vmem_map_init();
index 45c4daa..89cf09e 100644 (file)
@@ -174,7 +174,7 @@ int s390_mmap_check(unsigned long addr, unsigned long len, unsigned long flags)
        if (!(flags & MAP_FIXED))
                addr = 0;
        if ((addr + len) >= TASK_SIZE)
-               return crst_table_upgrade(current->mm, TASK_MAX_SIZE);
+               return crst_table_upgrade(current->mm);
        return 0;
 }
 
@@ -191,7 +191,7 @@ s390_get_unmapped_area(struct file *filp, unsigned long addr,
                return area;
        if (area == -ENOMEM && !is_compat_task() && TASK_SIZE < TASK_MAX_SIZE) {
                /* Upgrade the page table to 4 levels and retry. */
-               rc = crst_table_upgrade(mm, TASK_MAX_SIZE);
+               rc = crst_table_upgrade(mm);
                if (rc)
                        return (unsigned long) rc;
                area = arch_get_unmapped_area(filp, addr, len, pgoff, flags);
@@ -213,7 +213,7 @@ s390_get_unmapped_area_topdown(struct file *filp, const unsigned long addr,
                return area;
        if (area == -ENOMEM && !is_compat_task() && TASK_SIZE < TASK_MAX_SIZE) {
                /* Upgrade the page table to 4 levels and retry. */
-               rc = crst_table_upgrade(mm, TASK_MAX_SIZE);
+               rc = crst_table_upgrade(mm);
                if (rc)
                        return (unsigned long) rc;
                area = arch_get_unmapped_area_topdown(filp, addr, len,
index f6c3de2..e8b5962 100644 (file)
@@ -76,81 +76,52 @@ static void __crst_table_upgrade(void *arg)
        __tlb_flush_local();
 }
 
-int crst_table_upgrade(struct mm_struct *mm, unsigned long limit)
+int crst_table_upgrade(struct mm_struct *mm)
 {
        unsigned long *table, *pgd;
-       unsigned long entry;
-       int flush;
 
-       BUG_ON(limit > TASK_MAX_SIZE);
-       flush = 0;
-repeat:
+       /* upgrade should only happen from 3 to 4 levels */
+       BUG_ON(mm->context.asce_limit != (1UL << 42));
+
        table = crst_table_alloc(mm);
        if (!table)
                return -ENOMEM;
+
        spin_lock_bh(&mm->page_table_lock);
-       if (mm->context.asce_limit < limit) {
-               pgd = (unsigned long *) mm->pgd;
-               if (mm->context.asce_limit <= (1UL << 31)) {
-                       entry = _REGION3_ENTRY_EMPTY;
-                       mm->context.asce_limit = 1UL << 42;
-                       mm->context.asce_bits = _ASCE_TABLE_LENGTH |
-                                               _ASCE_USER_BITS |
-                                               _ASCE_TYPE_REGION3;
-               } else {
-                       entry = _REGION2_ENTRY_EMPTY;
-                       mm->context.asce_limit = 1UL << 53;
-                       mm->context.asce_bits = _ASCE_TABLE_LENGTH |
-                                               _ASCE_USER_BITS |
-                                               _ASCE_TYPE_REGION2;
-               }
-               crst_table_init(table, entry);
-               pgd_populate(mm, (pgd_t *) table, (pud_t *) pgd);
-               mm->pgd = (pgd_t *) table;
-               mm->task_size = mm->context.asce_limit;
-               table = NULL;
-               flush = 1;
-       }
+       pgd = (unsigned long *) mm->pgd;
+       crst_table_init(table, _REGION2_ENTRY_EMPTY);
+       pgd_populate(mm, (pgd_t *) table, (pud_t *) pgd);
+       mm->pgd = (pgd_t *) table;
+       mm->context.asce_limit = 1UL << 53;
+       mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
+                          _ASCE_USER_BITS | _ASCE_TYPE_REGION2;
+       mm->task_size = mm->context.asce_limit;
        spin_unlock_bh(&mm->page_table_lock);
-       if (table)
-               crst_table_free(mm, table);
-       if (mm->context.asce_limit < limit)
-               goto repeat;
-       if (flush)
-               on_each_cpu(__crst_table_upgrade, mm, 0);
+
+       on_each_cpu(__crst_table_upgrade, mm, 0);
        return 0;
 }
 
-void crst_table_downgrade(struct mm_struct *mm, unsigned long limit)
+void crst_table_downgrade(struct mm_struct *mm)
 {
        pgd_t *pgd;
 
+       /* downgrade should only happen from 3 to 2 levels (compat only) */
+       BUG_ON(mm->context.asce_limit != (1UL << 42));
+
        if (current->active_mm == mm) {
                clear_user_asce();
                __tlb_flush_mm(mm);
        }
-       while (mm->context.asce_limit > limit) {
-               pgd = mm->pgd;
-               switch (pgd_val(*pgd) & _REGION_ENTRY_TYPE_MASK) {
-               case _REGION_ENTRY_TYPE_R2:
-                       mm->context.asce_limit = 1UL << 42;
-                       mm->context.asce_bits = _ASCE_TABLE_LENGTH |
-                                               _ASCE_USER_BITS |
-                                               _ASCE_TYPE_REGION3;
-                       break;
-               case _REGION_ENTRY_TYPE_R3:
-                       mm->context.asce_limit = 1UL << 31;
-                       mm->context.asce_bits = _ASCE_TABLE_LENGTH |
-                                               _ASCE_USER_BITS |
-                                               _ASCE_TYPE_SEGMENT;
-                       break;
-               default:
-                       BUG();
-               }
-               mm->pgd = (pgd_t *) (pgd_val(*pgd) & _REGION_ENTRY_ORIGIN);
-               mm->task_size = mm->context.asce_limit;
-               crst_table_free(mm, (unsigned long *) pgd);
-       }
+
+       pgd = mm->pgd;
+       mm->pgd = (pgd_t *) (pgd_val(*pgd) & _REGION_ENTRY_ORIGIN);
+       mm->context.asce_limit = 1UL << 31;
+       mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
+                          _ASCE_USER_BITS | _ASCE_TYPE_SEGMENT;
+       mm->task_size = mm->context.asce_limit;
+       crst_table_free(mm, (unsigned long *) pgd);
+
        if (current->active_mm == mm)
                set_user_asce(mm);
 }
index e595e89..1ea8c07 100644 (file)
@@ -457,7 +457,7 @@ int zpci_dma_init_device(struct zpci_dev *zdev)
        zdev->dma_table = dma_alloc_cpu_table();
        if (!zdev->dma_table) {
                rc = -ENOMEM;
-               goto out_clean;
+               goto out;
        }
 
        /*
@@ -477,18 +477,22 @@ int zpci_dma_init_device(struct zpci_dev *zdev)
        zdev->iommu_bitmap = vzalloc(zdev->iommu_pages / 8);
        if (!zdev->iommu_bitmap) {
                rc = -ENOMEM;
-               goto out_reg;
+               goto free_dma_table;
        }
 
        rc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
                                (u64) zdev->dma_table);
        if (rc)
-               goto out_reg;
-       return 0;
+               goto free_bitmap;
 
-out_reg:
+       return 0;
+free_bitmap:
+       vfree(zdev->iommu_bitmap);
+       zdev->iommu_bitmap = NULL;
+free_dma_table:
        dma_free_cpu_table(zdev->dma_table);
-out_clean:
+       zdev->dma_table = NULL;
+out:
        return rc;
 }
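
The relabelled exits give zpci_dma_init_device() the canonical kernel unwind shape: one label per acquired resource, placed in reverse order of acquisition, so a failure frees exactly what was obtained so far. The idiom in miniature, as standalone C with placeholder allocations:

    #include <stdlib.h>

    int setup(void **pa, void **pb)
    {
            int rc = -1;
            void *a, *b;

            a = malloc(16);
            if (!a)
                    goto out;
            b = malloc(16);
            if (!b)
                    goto free_a;
            *pa = a;
            *pb = b;
            return 0;
    free_a:
            free(a);                /* undo only what succeeded */
    out:
            return rc;
    }
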
 
index fb23fd6..c74d370 100644 (file)
@@ -24,7 +24,6 @@ CONFIG_INET_AH=y
 CONFIG_INET_ESP=y
 CONFIG_INET_IPCOMP=y
 # CONFIG_INET_LRO is not set
-CONFIG_IPV6_PRIVACY=y
 CONFIG_INET6_AH=m
 CONFIG_INET6_ESP=m
 CONFIG_INET6_IPCOMP=m
index 04920ab..3583d67 100644 (file)
@@ -48,7 +48,6 @@ CONFIG_SYN_COOKIES=y
 CONFIG_INET_AH=y
 CONFIG_INET_ESP=y
 CONFIG_INET_IPCOMP=y
-CONFIG_IPV6_PRIVACY=y
 CONFIG_IPV6_ROUTER_PREF=y
 CONFIG_IPV6_ROUTE_INFO=y
 CONFIG_IPV6_OPTIMISTIC_DAD=y
index 56f9338..1d8321c 100644 (file)
@@ -48,6 +48,7 @@
 #define SUN4V_CHIP_SPARC_M6    0x06
 #define SUN4V_CHIP_SPARC_M7    0x07
 #define SUN4V_CHIP_SPARC64X    0x8a
+#define SUN4V_CHIP_SPARC_SN    0x8b
 #define SUN4V_CHIP_UNKNOWN     0xff
 
 #ifndef __ASSEMBLY__
index b6de8b1..36eee81 100644 (file)
 #define __NR_setsockopt                355
 #define __NR_mlock2            356
 #define __NR_copy_file_range   357
+#define __NR_preadv2           358
+#define __NR_pwritev2          359
 
-#define NR_syscalls            358
+#define NR_syscalls            360
 
 /* Bitmask values returned from kern_features system call.  */
 #define KERN_FEATURE_MIXED_MODE_STACK  0x00000001
index 4ee1ad4..655628d 100644 (file)
@@ -214,8 +214,7 @@ do_dcpe_tl1_nonfatal:       /* Ok we may use interrupt globals safely. */
        subcc           %g1, %g2, %g1           ! Next cacheline
        bge,pt          %icc, 1b
         nop
-       ba,pt           %xcc, dcpe_icpe_tl1_common
-        nop
+       ba,a,pt         %xcc, dcpe_icpe_tl1_common
 
 do_dcpe_tl1_fatal:
        sethi           %hi(1f), %g7
@@ -224,8 +223,7 @@ do_dcpe_tl1_fatal:
        mov             0x2, %o0
        call            cheetah_plus_parity_error
         add            %sp, PTREGS_OFF, %o1
-       ba,pt           %xcc, rtrap
-        nop
+       ba,a,pt         %xcc, rtrap
        .size           do_dcpe_tl1,.-do_dcpe_tl1
 
        .globl          do_icpe_tl1
@@ -259,8 +257,7 @@ do_icpe_tl1_nonfatal:       /* Ok we may use interrupt globals safely. */
        subcc           %g1, %g2, %g1
        bge,pt          %icc, 1b
         nop
-       ba,pt           %xcc, dcpe_icpe_tl1_common
-        nop
+       ba,a,pt         %xcc, dcpe_icpe_tl1_common
 
 do_icpe_tl1_fatal:
        sethi           %hi(1f), %g7
@@ -269,8 +266,7 @@ do_icpe_tl1_fatal:
        mov             0x3, %o0
        call            cheetah_plus_parity_error
         add            %sp, PTREGS_OFF, %o1
-       ba,pt           %xcc, rtrap
-        nop
+       ba,a,pt         %xcc, rtrap
        .size           do_icpe_tl1,.-do_icpe_tl1
        
        .type           dcpe_icpe_tl1_common,#function
@@ -456,7 +452,7 @@ __cheetah_log_error:
         cmp            %g2, 0x63
        be              c_cee
         nop
-       ba,pt           %xcc, c_deferred
+       ba,a,pt         %xcc, c_deferred
        .size           __cheetah_log_error,.-__cheetah_log_error
 
        /* Cheetah FECC trap handling, we get here from tl{0,1}_fecc
index dfad8b1..493e023 100644 (file)
@@ -506,6 +506,12 @@ static void __init sun4v_cpu_probe(void)
                sparc_pmu_type = "sparc-m7";
                break;
 
+       case SUN4V_CHIP_SPARC_SN:
+               sparc_cpu_type = "SPARC-SN";
+               sparc_fpu_type = "SPARC-SN integrated FPU";
+               sparc_pmu_type = "sparc-sn";
+               break;
+
        case SUN4V_CHIP_SPARC64X:
                sparc_cpu_type = "SPARC64-X";
                sparc_fpu_type = "SPARC64-X integrated FPU";
index e69ec0e..45c820e 100644 (file)
@@ -328,6 +328,7 @@ static int iterate_cpu(struct cpuinfo_tree *t, unsigned int root_index)
        case SUN4V_CHIP_NIAGARA5:
        case SUN4V_CHIP_SPARC_M6:
        case SUN4V_CHIP_SPARC_M7:
+       case SUN4V_CHIP_SPARC_SN:
        case SUN4V_CHIP_SPARC64X:
                rover_inc_table = niagara_iterate_method;
                break;
index a686482..336d275 100644 (file)
@@ -100,8 +100,8 @@ do_fpdis:
        fmuld           %f0, %f2, %f26
        faddd           %f0, %f2, %f28
        fmuld           %f0, %f2, %f30
-       b,pt            %xcc, fpdis_exit
-        nop
+       ba,a,pt         %xcc, fpdis_exit
+
 2:     andcc           %g5, FPRS_DU, %g0
        bne,pt          %icc, 3f
         fzero          %f32
@@ -144,8 +144,8 @@ do_fpdis:
        fmuld           %f32, %f34, %f58
        faddd           %f32, %f34, %f60
        fmuld           %f32, %f34, %f62
-       ba,pt           %xcc, fpdis_exit
-        nop
+       ba,a,pt         %xcc, fpdis_exit
+
 3:     mov             SECONDARY_CONTEXT, %g3
        add             %g6, TI_FPREGS, %g1
 
@@ -197,8 +197,7 @@ fpdis_exit2:
 fp_other_bounce:
        call            do_fpother
         add            %sp, PTREGS_OFF, %o0
-       ba,pt           %xcc, rtrap
-        nop
+       ba,a,pt         %xcc, rtrap
        .size           fp_other_bounce,.-fp_other_bounce
 
        .align          32
index cd1f592..a076b42 100644 (file)
@@ -414,6 +414,8 @@ sun4v_chip_type:
        cmp     %g2, 'T'
        be,pt   %xcc, 70f
         cmp    %g2, 'M'
+       be,pt   %xcc, 70f
+        cmp    %g2, 'S'
        bne,pn  %xcc, 49f
         nop
 
@@ -433,6 +435,9 @@ sun4v_chip_type:
        cmp     %g2, '7'
        be,pt   %xcc, 5f
         mov    SUN4V_CHIP_SPARC_M7, %g4
+       cmp     %g2, 'N'
+       be,pt   %xcc, 5f
+        mov    SUN4V_CHIP_SPARC_SN, %g4
        ba,pt   %xcc, 49f
         nop
 
@@ -461,9 +466,8 @@ sun4v_chip_type:
        subcc   %g3, 1, %g3
        bne,pt  %xcc, 41b
        add     %g1, 1, %g1
-       mov     SUN4V_CHIP_SPARC64X, %g4
        ba,pt   %xcc, 5f
-       nop
+        mov    SUN4V_CHIP_SPARC64X, %g4
 
 49:
        mov     SUN4V_CHIP_UNKNOWN, %g4
@@ -548,8 +552,7 @@ sun4u_init:
        stxa            %g0, [%g7] ASI_DMMU
        membar  #Sync
 
-       ba,pt           %xcc, sun4u_continue
-        nop
+       ba,a,pt         %xcc, sun4u_continue
 
 sun4v_init:
        /* Set ctx 0 */
@@ -560,14 +563,12 @@ sun4v_init:
        mov             SECONDARY_CONTEXT, %g7
        stxa            %g0, [%g7] ASI_MMU
        membar          #Sync
-       ba,pt           %xcc, niagara_tlb_fixup
-        nop
+       ba,a,pt         %xcc, niagara_tlb_fixup
 
 sun4u_continue:
        BRANCH_IF_ANY_CHEETAH(g1, g7, cheetah_tlb_fixup)
 
-       ba,pt   %xcc, spitfire_tlb_fixup
-        nop
+       ba,a,pt %xcc, spitfire_tlb_fixup
 
 niagara_tlb_fixup:
        mov     3, %g2          /* Set TLB type to hypervisor. */
@@ -595,6 +596,9 @@ niagara_tlb_fixup:
        be,pt   %xcc, niagara4_patch
         nop
        cmp     %g1, SUN4V_CHIP_SPARC_M7
+       be,pt   %xcc, niagara4_patch
+        nop
+       cmp     %g1, SUN4V_CHIP_SPARC_SN
        be,pt   %xcc, niagara4_patch
         nop
 
@@ -639,8 +643,7 @@ niagara_patch:
        call    hypervisor_patch_cachetlbops
         nop
 
-       ba,pt   %xcc, tlb_fixup_done
-        nop
+       ba,a,pt %xcc, tlb_fixup_done
 
 cheetah_tlb_fixup:
        mov     2, %g2          /* Set TLB type to cheetah+. */
@@ -659,8 +662,7 @@ cheetah_tlb_fixup:
        call    cheetah_patch_cachetlbops
         nop
 
-       ba,pt   %xcc, tlb_fixup_done
-        nop
+       ba,a,pt %xcc, tlb_fixup_done
 
 spitfire_tlb_fixup:
        /* Set TLB type to spitfire. */
@@ -774,8 +776,7 @@ setup_trap_table:
        call    %o1
         add    %sp, (2047 + 128), %o0
 
-       ba,pt   %xcc, 2f
-        nop
+       ba,a,pt %xcc, 2f
 
 1:     sethi   %hi(sparc64_ttable_tl0), %o0
        set     prom_set_trap_table_name, %g2
@@ -814,8 +815,7 @@ setup_trap_table:
 
        BRANCH_IF_ANY_CHEETAH(o2, o3, 1f)
 
-       ba,pt   %xcc, 2f
-        nop
+       ba,a,pt %xcc, 2f
 
        /* Disable STICK_INT interrupts. */
 1:
index 753b4f0..34b4933 100644 (file)
@@ -18,8 +18,7 @@ __do_privact:
 109:   or              %g7, %lo(109b), %g7
        call            do_privact
         add            %sp, PTREGS_OFF, %o0
-       ba,pt           %xcc, rtrap
-        nop
+       ba,a,pt         %xcc, rtrap
        .size           __do_privact,.-__do_privact
 
        .type           do_mna,#function
@@ -46,8 +45,7 @@ do_mna:
        mov             %l5, %o2
        call            mem_address_unaligned
         add            %sp, PTREGS_OFF, %o0
-       ba,pt           %xcc, rtrap
-        nop
+       ba,a,pt         %xcc, rtrap
        .size           do_mna,.-do_mna
 
        .type           do_lddfmna,#function
@@ -65,8 +63,7 @@ do_lddfmna:
        mov             %l5, %o2
        call            handle_lddfmna
         add            %sp, PTREGS_OFF, %o0
-       ba,pt           %xcc, rtrap
-        nop
+       ba,a,pt         %xcc, rtrap
        .size           do_lddfmna,.-do_lddfmna
 
        .type           do_stdfmna,#function
@@ -84,8 +81,7 @@ do_stdfmna:
        mov             %l5, %o2
        call            handle_stdfmna
         add            %sp, PTREGS_OFF, %o0
-       ba,pt           %xcc, rtrap
-        nop
+       ba,a,pt         %xcc, rtrap
        .size           do_stdfmna,.-do_stdfmna
 
        .type           breakpoint_trap,#function
index badf095..c2b202d 100644 (file)
@@ -245,6 +245,18 @@ static void pci_parse_of_addrs(struct platform_device *op,
        }
 }
 
+static void pci_init_dev_archdata(struct dev_archdata *sd, void *iommu,
+                                 void *stc, void *host_controller,
+                                 struct platform_device  *op,
+                                 int numa_node)
+{
+       sd->iommu = iommu;
+       sd->stc = stc;
+       sd->host_controller = host_controller;
+       sd->op = op;
+       sd->numa_node = numa_node;
+}
+
 static struct pci_dev *of_create_pci_dev(struct pci_pbm_info *pbm,
                                         struct device_node *node,
                                         struct pci_bus *bus, int devfn)
@@ -259,13 +271,10 @@ static struct pci_dev *of_create_pci_dev(struct pci_pbm_info *pbm,
        if (!dev)
                return NULL;
 
+       op = of_find_device_by_node(node);
        sd = &dev->dev.archdata;
-       sd->iommu = pbm->iommu;
-       sd->stc = &pbm->stc;
-       sd->host_controller = pbm;
-       sd->op = op = of_find_device_by_node(node);
-       sd->numa_node = pbm->numa_node;
-
+       pci_init_dev_archdata(sd, pbm->iommu, &pbm->stc, pbm, op,
+                             pbm->numa_node);
        sd = &op->dev.archdata;
        sd->iommu = pbm->iommu;
        sd->stc = &pbm->stc;
@@ -994,6 +1003,27 @@ void pcibios_set_master(struct pci_dev *dev)
        /* No special bus mastering setup handling */
 }
 
+#ifdef CONFIG_PCI_IOV
+int pcibios_add_device(struct pci_dev *dev)
+{
+       struct pci_dev *pdev;
+
+       /* Add sriov arch specific initialization here.
+        * Copy dev_archdata from PF to VF
+        */
+       if (dev->is_virtfn) {
+               struct dev_archdata *psd;
+
+               pdev = dev->physfn;
+               psd = &pdev->dev.archdata;
+               pci_init_dev_archdata(&dev->dev.archdata, psd->iommu,
+                                     psd->stc, psd->host_controller, NULL,
+                                     psd->numa_node);
+       }
+       return 0;
+}
+#endif /* CONFIG_PCI_IOV */
+
 static int __init pcibios_init(void)
 {
        pci_dfl_cache_line_size = 64 >> 2;
index 26db95b..599f120 100644 (file)
@@ -285,7 +285,8 @@ static void __init sun4v_patch(void)
 
        sun4v_patch_2insn_range(&__sun4v_2insn_patch,
                                &__sun4v_2insn_patch_end);
-       if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7)
+       if (sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
+           sun4v_chip_type == SUN4V_CHIP_SPARC_SN)
                sun_m7_patch_2insn_range(&__sun_m7_2insn_patch,
                                         &__sun_m7_2insn_patch_end);
 
@@ -524,6 +525,7 @@ static void __init init_sparc64_elf_hwcap(void)
                    sun4v_chip_type == SUN4V_CHIP_NIAGARA5 ||
                    sun4v_chip_type == SUN4V_CHIP_SPARC_M6 ||
                    sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
+                   sun4v_chip_type == SUN4V_CHIP_SPARC_SN ||
                    sun4v_chip_type == SUN4V_CHIP_SPARC64X)
                        cap |= HWCAP_SPARC_BLKINIT;
                if (sun4v_chip_type == SUN4V_CHIP_NIAGARA2 ||
@@ -532,6 +534,7 @@ static void __init init_sparc64_elf_hwcap(void)
                    sun4v_chip_type == SUN4V_CHIP_NIAGARA5 ||
                    sun4v_chip_type == SUN4V_CHIP_SPARC_M6 ||
                    sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
+                   sun4v_chip_type == SUN4V_CHIP_SPARC_SN ||
                    sun4v_chip_type == SUN4V_CHIP_SPARC64X)
                        cap |= HWCAP_SPARC_N2;
        }
@@ -561,6 +564,7 @@ static void __init init_sparc64_elf_hwcap(void)
                            sun4v_chip_type == SUN4V_CHIP_NIAGARA5 ||
                            sun4v_chip_type == SUN4V_CHIP_SPARC_M6 ||
                            sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
+                           sun4v_chip_type == SUN4V_CHIP_SPARC_SN ||
                            sun4v_chip_type == SUN4V_CHIP_SPARC64X)
                                cap |= (AV_SPARC_VIS | AV_SPARC_VIS2 |
                                        AV_SPARC_ASI_BLK_INIT |
@@ -570,6 +574,7 @@ static void __init init_sparc64_elf_hwcap(void)
                            sun4v_chip_type == SUN4V_CHIP_NIAGARA5 ||
                            sun4v_chip_type == SUN4V_CHIP_SPARC_M6 ||
                            sun4v_chip_type == SUN4V_CHIP_SPARC_M7 ||
+                           sun4v_chip_type == SUN4V_CHIP_SPARC_SN ||
                            sun4v_chip_type == SUN4V_CHIP_SPARC64X)
                                cap |= (AV_SPARC_VIS3 | AV_SPARC_HPC |
                                        AV_SPARC_FMAF);
index c357e40..4a73009 100644 (file)
@@ -85,8 +85,7 @@ __spitfire_cee_trap_continue:
        ba,pt           %xcc, etraptl1
         rd             %pc, %g7
 
-       ba,pt           %xcc, 2f
-        nop
+       ba,a,pt         %xcc, 2f
 
 1:     ba,pt           %xcc, etrap_irq
         rd             %pc, %g7
@@ -100,8 +99,7 @@ __spitfire_cee_trap_continue:
        mov             %l5, %o2
        call            spitfire_access_error
         add            %sp, PTREGS_OFF, %o0
-       ba,pt           %xcc, rtrap
-        nop
+       ba,a,pt         %xcc, rtrap
        .size           __spitfire_access_error,.-__spitfire_access_error
 
        /* This is the trap handler entry point for ECC correctable
@@ -179,8 +177,7 @@ __spitfire_data_access_exception_tl1:
        mov             %l5, %o2
        call            spitfire_data_access_exception_tl1
         add            %sp, PTREGS_OFF, %o0
-       ba,pt           %xcc, rtrap
-        nop
+       ba,a,pt         %xcc, rtrap
        .size           __spitfire_data_access_exception_tl1,.-__spitfire_data_access_exception_tl1
 
        .type           __spitfire_data_access_exception,#function
@@ -200,8 +197,7 @@ __spitfire_data_access_exception:
        mov             %l5, %o2
        call            spitfire_data_access_exception
         add            %sp, PTREGS_OFF, %o0
-       ba,pt           %xcc, rtrap
-        nop
+       ba,a,pt         %xcc, rtrap
        .size           __spitfire_data_access_exception,.-__spitfire_data_access_exception
 
        .type           __spitfire_insn_access_exception_tl1,#function
@@ -220,8 +216,7 @@ __spitfire_insn_access_exception_tl1:
        mov             %l5, %o2
        call            spitfire_insn_access_exception_tl1
         add            %sp, PTREGS_OFF, %o0
-       ba,pt           %xcc, rtrap
-        nop
+       ba,a,pt         %xcc, rtrap
        .size           __spitfire_insn_access_exception_tl1,.-__spitfire_insn_access_exception_tl1
 
        .type           __spitfire_insn_access_exception,#function
@@ -240,6 +235,5 @@ __spitfire_insn_access_exception:
        mov             %l5, %o2
        call            spitfire_insn_access_exception
         add            %sp, PTREGS_OFF, %o0
-       ba,pt           %xcc, rtrap
-        nop
+       ba,a,pt         %xcc, rtrap
        .size           __spitfire_insn_access_exception,.-__spitfire_insn_access_exception
index 6c3dd6c..eac7f0d 100644 (file)
@@ -88,4 +88,4 @@ sys_call_table:
 /*340*/        .long sys_ni_syscall, sys_kcmp, sys_finit_module, sys_sched_setattr, sys_sched_getattr
 /*345*/        .long sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf
 /*350*/        .long sys_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen
-/*355*/        .long sys_setsockopt, sys_mlock2, sys_copy_file_range
+/*355*/        .long sys_setsockopt, sys_mlock2, sys_copy_file_range, sys_preadv2, sys_pwritev2
index 12b524c..b0f17ff 100644 (file)
@@ -89,7 +89,7 @@ sys_call_table32:
 /*340*/        .word sys_kern_features, sys_kcmp, sys_finit_module, sys_sched_setattr, sys_sched_getattr
        .word sys32_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf
 /*350*/        .word sys32_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen
-       .word compat_sys_setsockopt, sys_mlock2, sys_copy_file_range
+       .word compat_sys_setsockopt, sys_mlock2, sys_copy_file_range, compat_sys_preadv2, compat_sys_pwritev2
 
 #endif /* CONFIG_COMPAT */
 
@@ -170,4 +170,4 @@ sys_call_table:
 /*340*/        .word sys_kern_features, sys_kcmp, sys_finit_module, sys_sched_setattr, sys_sched_getattr
        .word sys_renameat2, sys_seccomp, sys_getrandom, sys_memfd_create, sys_bpf
 /*350*/        .word sys64_execveat, sys_membarrier, sys_userfaultfd, sys_bind, sys_listen
-       .word sys_setsockopt, sys_mlock2, sys_copy_file_range
+       .word sys_setsockopt, sys_mlock2, sys_copy_file_range, sys_preadv2, sys_pwritev2
index b7f0f3f..c731e80 100644 (file)
@@ -11,8 +11,7 @@ utrap_trap:           /* %g3=handler,%g4=level */
        mov             %l4, %o1
         call           bad_trap
         add            %sp, PTREGS_OFF, %o0
-       ba,pt           %xcc, rtrap
-        nop
+       ba,a,pt         %xcc, rtrap
 
 invoke_utrap:
        sllx            %g3, 3, %g3
index cb5789c..f6bb857 100644 (file)
@@ -45,6 +45,14 @@ static const struct vio_device_id *vio_match_device(
        return NULL;
 }
 
+static int vio_hotplug(struct device *dev, struct kobj_uevent_env *env)
+{
+       const struct vio_dev *vio_dev = to_vio_dev(dev);
+
+       add_uevent_var(env, "MODALIAS=vio:T%sS%s", vio_dev->type, vio_dev->compat);
+       return 0;
+}
+
 static int vio_bus_match(struct device *dev, struct device_driver *drv)
 {
        struct vio_dev *vio_dev = to_vio_dev(dev);
@@ -105,15 +113,25 @@ static ssize_t type_show(struct device *dev,
        return sprintf(buf, "%s\n", vdev->type);
 }
 
+static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
+                            char *buf)
+{
+       const struct vio_dev *vdev = to_vio_dev(dev);
+
+       return sprintf(buf, "vio:T%sS%s\n", vdev->type, vdev->compat);
+}
+
 static struct device_attribute vio_dev_attrs[] = {
        __ATTR_RO(devspec),
        __ATTR_RO(type),
+       __ATTR_RO(modalias),
        __ATTR_NULL
 };
 
 static struct bus_type vio_bus_type = {
        .name           = "vio",
        .dev_attrs      = vio_dev_attrs,
+       .uevent         = vio_hotplug,
        .match          = vio_bus_match,
        .probe          = vio_device_probe,
        .remove         = vio_device_remove,
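
Since the uevent callback and the new modalias attribute emit the same "vio:T%sS%s" string, module autoloading can now key on vio device type/compat pairs. For a hypothetical device with vdev->type = "vnet-port" and vdev->compat = "SUNW,sunvnet", the advertised alias would read:

    MODALIAS=vio:Tvnet-portSSUNW,sunvnet
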
index aadd321..7d02b1f 100644 (file)
@@ -33,6 +33,10 @@ ENTRY(_start)
 jiffies = jiffies_64;
 #endif
 
+#ifdef CONFIG_SPARC64
+ASSERT((swapper_tsb == 0x0000000000408000), "Error: sparc64 early assembler too large")
+#endif
+
 SECTIONS
 {
 #ifdef CONFIG_SPARC64
index 1e67ce9..855019a 100644 (file)
@@ -32,8 +32,7 @@ fill_fixup:
         rd     %pc, %g7
        call    do_sparc64_fault
         add    %sp, PTREGS_OFF, %o0
-       ba,pt   %xcc, rtrap
-        nop
+       ba,a,pt %xcc, rtrap
 
        /* Be very careful about usage of the trap globals here.
         * You cannot touch %g5 as that has the fault information.
index 1cfe6aa..09e8388 100644 (file)
@@ -1769,6 +1769,7 @@ static void __init setup_page_offset(void)
                        max_phys_bits = 47;
                        break;
                case SUN4V_CHIP_SPARC_M7:
+               case SUN4V_CHIP_SPARC_SN:
                default:
                        /* M7 and later support 52-bit virtual addresses.  */
                        sparc64_va_hole_top =    0xfff8000000000000UL;
@@ -1986,6 +1987,7 @@ static void __init sun4v_linear_pte_xor_finalize(void)
         */
        switch (sun4v_chip_type) {
        case SUN4V_CHIP_SPARC_M7:
+       case SUN4V_CHIP_SPARC_SN:
                pagecv_flag = 0x00;
                break;
        default:
@@ -2138,6 +2140,7 @@ void __init paging_init(void)
         */
        switch (sun4v_chip_type) {
        case SUN4V_CHIP_SPARC_M7:
+       case SUN4V_CHIP_SPARC_SN:
                page_cache4v_flag = _PAGE_CP_4V;
                break;
        default:
index 3f3dfb8..7189055 100644 (file)
@@ -221,8 +221,7 @@ CONFIG_NETCONSOLE_DYNAMIC=y
 CONFIG_TUN=y
 CONFIG_VETH=m
 CONFIG_NET_DSA_MV88E6060=y
-CONFIG_NET_DSA_MV88E6131=y
-CONFIG_NET_DSA_MV88E6123=y
+CONFIG_NET_DSA_MV88E6XXX=y
 CONFIG_SKY2=y
 CONFIG_PTP_1588_CLOCK_TILEGX=y
 # CONFIG_WLAN is not set
index ef9e27e..dc85468 100644 (file)
@@ -340,8 +340,7 @@ CONFIG_NETCONSOLE_DYNAMIC=y
 CONFIG_TUN=y
 CONFIG_VETH=m
 CONFIG_NET_DSA_MV88E6060=y
-CONFIG_NET_DSA_MV88E6131=y
-CONFIG_NET_DSA_MV88E6123=y
+CONFIG_NET_DSA_MV88E6XXX=y
 # CONFIG_NET_VENDOR_3COM is not set
 CONFIG_E1000E=y
 # CONFIG_WLAN is not set
index 9ef669d..2cd5b68 100644 (file)
@@ -223,7 +223,7 @@ static int uml_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
        if (len == skb->len) {
                dev->stats.tx_packets++;
                dev->stats.tx_bytes += skb->len;
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
                netif_start_queue(dev);
 
                /* this is normally done in the interrupt when tx finishes */
@@ -252,7 +252,7 @@ static void uml_net_set_multicast_list(struct net_device *dev)
 
 static void uml_net_tx_timeout(struct net_device *dev)
 {
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        netif_wake_queue(dev);
 }
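
netif_trans_update() is the tree-wide replacement for open-coding dev->trans_start = jiffies: it stamps the transmit time on the device's queue 0 so the netdev watchdog sees recent activity. Its typical placement in an xmit handler, sketched (my_xmit and the hardware hand-off are illustrative; the helper comes from linux/netdevice.h):

    static netdev_tx_t my_xmit(struct sk_buff *skb, struct net_device *dev)
    {
            /* ... hand skb to the hardware ... */
            netif_trans_update(dev);    /* was: dev->trans_start = jiffies; */
            return NETDEV_TX_OK;
    }
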
 
index 86a9bec..bd3e842 100644 (file)
@@ -115,7 +115,7 @@ static __initconst const u64 amd_hw_cache_event_ids
 /*
  * AMD Performance Monitor K7 and later.
  */
-static const u64 amd_perfmon_event_map[] =
+static const u64 amd_perfmon_event_map[PERF_COUNT_HW_MAX] =
 {
   [PERF_COUNT_HW_CPU_CYCLES]                   = 0x0076,
   [PERF_COUNT_HW_INSTRUCTIONS]                 = 0x00c0,
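
Sizing the table by PERF_COUNT_HW_MAX matters because it is filled with designated initializers: any generic event index that lacks an explicit entry now reads a guaranteed zero instead of running past the end of the array. The idiom in isolation:

    enum { EV_A, EV_B, EV_C, EV_MAX };

    /* unlisted entries (EV_B here) are implicitly zero */
    static const unsigned long long ev_map[EV_MAX] = {
            [EV_A] = 0x76,
            [EV_C] = 0xc0,
    };
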
index 68fa55b..aff7988 100644 (file)
@@ -3639,6 +3639,7 @@ __init int intel_pmu_init(void)
 
        case 78: /* 14nm Skylake Mobile */
        case 94: /* 14nm Skylake Desktop */
+       case 85: /* 14nm Skylake Server */
                x86_pmu.late_ack = true;
                memcpy(hw_cache_event_ids, skl_hw_cache_event_ids, sizeof(hw_cache_event_ids));
                memcpy(hw_cache_extra_regs, skl_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
index 6c3b7c1..1ca5d1e 100644 (file)
@@ -63,7 +63,7 @@ static enum {
 
 #define LBR_PLM (LBR_KERNEL | LBR_USER)
 
-#define LBR_SEL_MASK   0x1ff   /* valid bits in LBR_SELECT */
+#define LBR_SEL_MASK   0x3ff   /* valid bits in LBR_SELECT */
 #define LBR_NOT_SUPP   -1      /* LBR filter not supported */
 #define LBR_IGN                0       /* ignored */
 
@@ -610,8 +610,10 @@ static int intel_pmu_setup_hw_lbr_filter(struct perf_event *event)
         * The first 9 bits (LBR_SEL_MASK) in LBR_SELECT operate
         * in suppress mode. So LBR_SELECT should be set to
         * (~mask & LBR_SEL_MASK) | (mask & ~LBR_SEL_MASK)
+        * But the 10th bit LBR_CALL_STACK does not operate
+        * in suppress mode.
         */
-       reg->config = mask ^ x86_pmu.lbr_sel_mask;
+       reg->config = mask ^ (x86_pmu.lbr_sel_mask & ~LBR_CALL_STACK);
 
        if ((br_type & PERF_SAMPLE_BRANCH_NO_CYCLES) &&
            (br_type & PERF_SAMPLE_BRANCH_NO_FLAGS) &&
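
Per the comment above, bits 0-8 of LBR_SELECT are suppress-mode (setting one filters its class out) while bit 9, LBR_CALL_STACK, is a plain enable, so only the low nine bits may be inverted. A worked example with a hypothetical request mask of 0x201 (one suppress-class bit plus call stack):

    reg->config = 0x201 ^ (0x3ff & ~0x200)      /* == 0x201 ^ 0x1ff */
                = 0x3fe;                        /* low 9 bits flipped,
                                                   bit 9 passed through */
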
index 6af7cf7..09a77db 100644 (file)
@@ -136,9 +136,21 @@ static int __init pt_pmu_hw_init(void)
        struct dev_ext_attribute *de_attrs;
        struct attribute **attrs;
        size_t size;
+       u64 reg;
        int ret;
        long i;
 
+       if (boot_cpu_has(X86_FEATURE_VMX)) {
+               /*
+                * Intel SDM, 36.5 "Tracing post-VMXON" says that
+                * "IA32_VMX_MISC[bit 14]" being 1 means PT can trace
+                * post-VMXON.
+                */
+               rdmsrl(MSR_IA32_VMX_MISC, reg);
+               if (reg & BIT(14))
+                       pt_pmu.vmx = true;
+       }
+
        attrs = NULL;
 
        for (i = 0; i < PT_CPUID_LEAVES; i++) {
@@ -269,20 +281,23 @@ static void pt_config(struct perf_event *event)
 
        reg |= (event->attr.config & PT_CONFIG_MASK);
 
+       event->hw.config = reg;
        wrmsrl(MSR_IA32_RTIT_CTL, reg);
 }
 
-static void pt_config_start(bool start)
+static void pt_config_stop(struct perf_event *event)
 {
-       u64 ctl;
+       u64 ctl = READ_ONCE(event->hw.config);
+
+       /* may be already stopped by a PMI */
+       if (!(ctl & RTIT_CTL_TRACEEN))
+               return;
 
-       rdmsrl(MSR_IA32_RTIT_CTL, ctl);
-       if (start)
-               ctl |= RTIT_CTL_TRACEEN;
-       else
-               ctl &= ~RTIT_CTL_TRACEEN;
+       ctl &= ~RTIT_CTL_TRACEEN;
        wrmsrl(MSR_IA32_RTIT_CTL, ctl);
 
+       WRITE_ONCE(event->hw.config, ctl);
+
        /*
         * A wrmsr that disables trace generation serializes other PT
         * registers and causes all data packets to be written to memory,
@@ -291,8 +306,7 @@ static void pt_config_start(bool start)
         * The below WMB, separating data store and aux_head store matches
         * the consumer's RMB that separates aux_head load and data load.
         */
-       if (!start)
-               wmb();
+       wmb();
 }
 
 static void pt_config_buffer(void *buf, unsigned int topa_idx,
@@ -942,11 +956,17 @@ void intel_pt_interrupt(void)
        if (!ACCESS_ONCE(pt->handle_nmi))
                return;
 
-       pt_config_start(false);
+       /*
+        * If VMX is on and PT does not support it, don't touch anything.
+        */
+       if (READ_ONCE(pt->vmx_on))
+               return;
 
        if (!event)
                return;
 
+       pt_config_stop(event);
+
        buf = perf_get_aux(&pt->handle);
        if (!buf)
                return;
@@ -983,6 +1003,35 @@ void intel_pt_interrupt(void)
        }
 }
 
+void intel_pt_handle_vmx(int on)
+{
+       struct pt *pt = this_cpu_ptr(&pt_ctx);
+       struct perf_event *event;
+       unsigned long flags;
+
+       /* PT plays nice with VMX, do nothing */
+       if (pt_pmu.vmx)
+               return;
+
+       /*
+        * VMXON will clear RTIT_CTL.TraceEn; we need to make
+        * sure to not try to set it while VMX is on. Disable
+        * interrupts to avoid racing with pmu callbacks;
+        * concurrent PMI should be handled fine.
+        */
+       local_irq_save(flags);
+       WRITE_ONCE(pt->vmx_on, on);
+
+       if (on) {
+               /* prevent pt_config_stop() from writing RTIT_CTL */
+               event = pt->handle.event;
+               if (event)
+                       event->hw.config = 0;
+       }
+       local_irq_restore(flags);
+}
+EXPORT_SYMBOL_GPL(intel_pt_handle_vmx);
+
 /*
  * PMU callbacks
  */
@@ -992,6 +1041,9 @@ static void pt_event_start(struct perf_event *event, int mode)
        struct pt *pt = this_cpu_ptr(&pt_ctx);
        struct pt_buffer *buf = perf_get_aux(&pt->handle);
 
+       if (READ_ONCE(pt->vmx_on))
+               return;
+
        if (!buf || pt_buffer_is_full(buf, pt)) {
                event->hw.state = PERF_HES_STOPPED;
                return;
@@ -1014,7 +1066,8 @@ static void pt_event_stop(struct perf_event *event, int mode)
         * see comment in intel_pt_interrupt().
         */
        ACCESS_ONCE(pt->handle_nmi) = 0;
-       pt_config_start(false);
+
+       pt_config_stop(event);
 
        if (event->hw.state == PERF_HES_STOPPED)
                return;
index 336878a..3abb5f5 100644 (file)
@@ -65,6 +65,7 @@ enum pt_capabilities {
 struct pt_pmu {
        struct pmu              pmu;
        u32                     caps[PT_CPUID_REGS_NUM * PT_CPUID_LEAVES];
+       bool                    vmx;
 };
 
 /**
@@ -107,10 +108,12 @@ struct pt_buffer {
  * struct pt - per-cpu pt context
  * @handle:    perf output handle
  * @handle_nmi:        do handle PT PMI on this cpu, there's an active event
+ * @vmx_on:    1 if VMX is ON on this cpu
  */
 struct pt {
        struct perf_output_handle handle;
        int                     handle_nmi;
+       int                     vmx_on;
 };
 
 #endif /* __INTEL_PT_H__ */
index 70c93f9..1705c9d 100644 (file)
@@ -718,6 +718,7 @@ static int __init rapl_pmu_init(void)
                break;
        case 60: /* Haswell */
        case 69: /* Haswell-Celeron */
+       case 70: /* Haswell GT3e */
        case 61: /* Broadwell */
        case 71: /* Broadwell-H */
                rapl_cntr_mask = RAPL_IDX_HSW;
index f8a29d2..e6a8613 100644 (file)
@@ -4,6 +4,7 @@
 #include <asm/page.h>
 #include <asm-generic/hugetlb.h>
 
+#define hugepages_supported() cpu_has_pse
 
 static inline int is_hugepage_only_range(struct mm_struct *mm,
                                         unsigned long addr,
index 5a2ed3e..f353061 100644 (file)
@@ -285,6 +285,10 @@ static inline void perf_events_lapic_init(void)    { }
 static inline void perf_check_microcode(void) { }
 #endif
 
+#ifdef CONFIG_CPU_SUP_INTEL
+ extern void intel_pt_handle_vmx(int on);
+#endif
+
 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_AMD)
  extern void amd_pmu_enable_virt(void);
  extern void amd_pmu_disable_virt(void);
index ad59d70..ef49551 100644 (file)
@@ -256,7 +256,8 @@ static void clear_irq_vector(int irq, struct apic_chip_data *data)
        struct irq_desc *desc;
        int cpu, vector;
 
-       BUG_ON(!data->cfg.vector);
+       if (!data->cfg.vector)
+               return;
 
        vector = data->cfg.vector;
        for_each_cpu_and(cpu, data->domain, cpu_online_mask)
index 4e7c693..10c11b4 100644 (file)
@@ -152,6 +152,11 @@ static struct clocksource hyperv_cs = {
        .flags          = CLOCK_SOURCE_IS_CONTINUOUS,
 };
 
+static unsigned char hv_get_nmi_reason(void)
+{
+       return 0;
+}
+
 static void __init ms_hyperv_init_platform(void)
 {
        /*
@@ -191,6 +196,13 @@ static void __init ms_hyperv_init_platform(void)
        machine_ops.crash_shutdown = hv_machine_crash_shutdown;
 #endif
        mark_tsc_unstable("running on Hyper-V");
+
+       /*
+        * Generation 2 instances don't support reading the NMI status from
+        * 0x61 port.
+        */
+       if (efi_enabled(EFI_BOOT))
+               x86_platform.get_nmi_reason = hv_get_nmi_reason;
 }
 
 const __refconst struct hypervisor_x86 x86_hyper_ms_hyperv = {
index 54cdbd2..af11129 100644 (file)
@@ -389,12 +389,6 @@ default_entry:
        /* Make changes effective */
        wrmsr
 
-       /*
-        * And make sure that all the mappings we set up have NX set from
-        * the beginning.
-        */
-       orl $(1 << (_PAGE_BIT_NX - 32)), pa(__supported_pte_mask + 4)
-
 enable_paging:
 
 /*
index ee1c8a9..133679d 100644 (file)
@@ -3103,6 +3103,8 @@ static __init int vmx_disabled_by_bios(void)
 
 static void kvm_cpu_vmxon(u64 addr)
 {
+       intel_pt_handle_vmx(1);
+
        asm volatile (ASM_VMX_VMXON_RAX
                        : : "a"(&addr), "m"(addr)
                        : "memory", "cc");
@@ -3172,6 +3174,8 @@ static void vmclear_local_loaded_vmcss(void)
 static void kvm_cpu_vmxoff(void)
 {
        asm volatile (__ex(ASM_VMX_VMXOFF) : : : "cc");
+
+       intel_pt_handle_vmx(0);
 }
 
 static void hardware_disable(void)
index 8bea847..f65a33f 100644 (file)
@@ -32,8 +32,9 @@ early_param("noexec", noexec_setup);
 
 void x86_configure_nx(void)
 {
-       /* If disable_nx is set, clear NX on all new mappings going forward. */
-       if (disable_nx)
+       if (boot_cpu_has(X86_FEATURE_NX) && !disable_nx)
+               __supported_pte_mask |= _PAGE_NX;
+       else
                __supported_pte_mask &= ~_PAGE_NX;
 }
 
index 9e2ba5c..f42e78d 100644 (file)
@@ -27,6 +27,12 @@ static bool xen_pvspin = true;
 
 static void xen_qlock_kick(int cpu)
 {
+       int irq = per_cpu(lock_kicker_irq, cpu);
+
+       /* Don't kick if the target's kicker interrupt is not initialized. */
+       if (irq == -1)
+               return;
+
        xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 }
 
index 976a385..66a5d15 100644 (file)
@@ -428,7 +428,7 @@ static int iss_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
        if (len == skb->len) {
                lp->stats.tx_packets++;
                lp->stats.tx_bytes += skb->len;
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
                netif_start_queue(dev);
 
                /* this is normally done in the interrupt when tx finishes */
index 94a1843..0ede6d7 100644 (file)
@@ -538,7 +538,6 @@ static int _rbd_dev_v2_snap_size(struct rbd_device *rbd_dev, u64 snap_id,
                                u8 *order, u64 *snap_size);
 static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id,
                u64 *snap_features);
-static u64 rbd_snap_id_by_name(struct rbd_device *rbd_dev, const char *name);
 
 static int rbd_open(struct block_device *bdev, fmode_t mode)
 {
@@ -3127,9 +3126,6 @@ static void rbd_watch_cb(u64 ver, u64 notify_id, u8 opcode, void *data)
        struct rbd_device *rbd_dev = (struct rbd_device *)data;
        int ret;
 
-       if (!rbd_dev)
-               return;
-
        dout("%s: \"%s\" notify_id %llu opcode %u\n", __func__,
                rbd_dev->header_name, (unsigned long long)notify_id,
                (unsigned int)opcode);
@@ -3263,6 +3259,9 @@ static void rbd_dev_header_unwatch_sync(struct rbd_device *rbd_dev)
 
        ceph_osdc_cancel_event(rbd_dev->watch_event);
        rbd_dev->watch_event = NULL;
+
+       dout("%s flushing notifies\n", __func__);
+       ceph_osdc_flush_notifies(&rbd_dev->rbd_client->client->osdc);
 }
 
 /*
@@ -3642,21 +3641,14 @@ static void rbd_exists_validate(struct rbd_device *rbd_dev)
 static void rbd_dev_update_size(struct rbd_device *rbd_dev)
 {
        sector_t size;
-       bool removing;
 
        /*
-        * Don't hold the lock while doing disk operations,
-        * or lock ordering will conflict with the bdev mutex via:
-        * rbd_add() -> blkdev_get() -> rbd_open()
+        * If EXISTS is not set, rbd_dev->disk may be NULL, so don't
+        * try to update its size.  If REMOVING is set, updating size
+        * is just useless work since the device can't be opened.
         */
-       spin_lock_irq(&rbd_dev->lock);
-       removing = test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags);
-       spin_unlock_irq(&rbd_dev->lock);
-       /*
-        * If the device is being removed, rbd_dev->disk has
-        * been destroyed, so don't try to update its size
-        */
-       if (!removing) {
+       if (test_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags) &&
+           !test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags)) {
                size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE;
                dout("setting size to %llu sectors", (unsigned long long)size);
                set_capacity(rbd_dev->disk, size);
@@ -4191,7 +4183,7 @@ static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id,
                __le64 features;
                __le64 incompat;
        } __attribute__ ((packed)) features_buf = { 0 };
-       u64 incompat;
+       u64 unsup;
        int ret;
 
        ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_name,
@@ -4204,9 +4196,12 @@ static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id,
        if (ret < sizeof (features_buf))
                return -ERANGE;
 
-       incompat = le64_to_cpu(features_buf.incompat);
-       if (incompat & ~RBD_FEATURES_SUPPORTED)
+       unsup = le64_to_cpu(features_buf.incompat) & ~RBD_FEATURES_SUPPORTED;
+       if (unsup) {
+               rbd_warn(rbd_dev, "image uses unsupported features: 0x%llx",
+                        unsup);
                return -ENXIO;
+       }
 
        *snap_features = le64_to_cpu(features_buf.features);
 
@@ -5187,6 +5182,10 @@ out_err:
        return ret;
 }
 
+/*
+ * rbd_dev->header_rwsem must be locked for write and will be unlocked
+ * upon return.
+ */
 static int rbd_dev_device_setup(struct rbd_device *rbd_dev)
 {
        int ret;
@@ -5195,7 +5194,7 @@ static int rbd_dev_device_setup(struct rbd_device *rbd_dev)
 
        ret = rbd_dev_id_get(rbd_dev);
        if (ret)
-               return ret;
+               goto err_out_unlock;
 
        BUILD_BUG_ON(DEV_NAME_LEN
                        < sizeof (RBD_DRV_NAME) + MAX_INT_FORMAT_WIDTH);
@@ -5236,8 +5235,9 @@ static int rbd_dev_device_setup(struct rbd_device *rbd_dev)
        /* Everything's ready.  Announce the disk to the world. */
 
        set_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags);
-       add_disk(rbd_dev->disk);
+       up_write(&rbd_dev->header_rwsem);
 
+       add_disk(rbd_dev->disk);
        pr_info("%s: added with size 0x%llx\n", rbd_dev->disk->disk_name,
                (unsigned long long) rbd_dev->mapping.size);
 
@@ -5252,6 +5252,8 @@ err_out_blkdev:
                unregister_blkdev(rbd_dev->major, rbd_dev->name);
 err_out_id:
        rbd_dev_id_put(rbd_dev);
+err_out_unlock:
+       up_write(&rbd_dev->header_rwsem);
        return ret;
 }
 
@@ -5442,6 +5444,7 @@ static ssize_t do_rbd_add(struct bus_type *bus,
        spec = NULL;            /* rbd_dev now owns this */
        rbd_opts = NULL;        /* rbd_dev now owns this */
 
+       down_write(&rbd_dev->header_rwsem);
        rc = rbd_dev_image_probe(rbd_dev, 0);
        if (rc < 0)
                goto err_out_rbd_dev;
@@ -5471,6 +5474,7 @@ out:
        return rc;
 
 err_out_rbd_dev:
+       up_write(&rbd_dev->header_rwsem);
        rbd_dev_destroy(rbd_dev);
 err_out_client:
        rbd_put_client(rbdc);
@@ -5577,12 +5581,6 @@ static ssize_t do_rbd_remove(struct bus_type *bus,
                return ret;
 
        rbd_dev_header_unwatch_sync(rbd_dev);
-       /*
-        * flush remaining watch callbacks - these must be complete
-        * before the osd_client is shutdown
-        */
-       dout("%s: flushing notifies", __func__);
-       ceph_osdc_flush_notifies(&rbd_dev->rbd_client->client->osdc);
 
        /*
         * Don't free anything from rbd_dev->disk until after all
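Condensed from the hunks above, the resulting lock hand-off between do_rbd_add()
and rbd_dev_device_setup() is (a paraphrase of the diff, not a new interface):

    down_write(&rbd_dev->header_rwsem);
    rc = rbd_dev_image_probe(rbd_dev, 0);       /* runs with the lock held */
    if (rc < 0)
            goto err_out_rbd_dev;               /* error path drops the lock */
    rc = rbd_dev_device_setup(rbd_dev);         /* drops the lock on every
                                                 * path, then calls add_disk()
                                                 * unlocked, so rbd_open() via
                                                 * blkdev_get() cannot deadlock
                                                 * on header_rwsem */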
index 47ca4b3..641c2d1 100644 (file)
@@ -206,7 +206,8 @@ static int ath3k_load_firmware(struct usb_device *udev,
                                const struct firmware *firmware)
 {
        u8 *send_buf;
-       int err, pipe, len, size, sent = 0;
+       int len = 0;
+       int err, pipe, size, sent = 0;
        int count = firmware->size;
 
        BT_DBG("udev %p", udev);
@@ -302,7 +303,8 @@ static int ath3k_load_fwfile(struct usb_device *udev,
                const struct firmware *firmware)
 {
        u8 *send_buf;
-       int err, pipe, len, size, count, sent = 0;
+       int len = 0;
+       int err, pipe, size, count, sent = 0;
        int ret;
 
        count = firmware->size;
index 0590473..f742384 100644 (file)
 #include <linux/bitops.h>
 #include <linux/slab.h>
 #include <net/bluetooth/bluetooth.h>
+#include <linux/err.h>
+#include <linux/gpio.h>
+#include <linux/gfp.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/of_gpio.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/of_irq.h>
 
 #define BTM_HEADER_LEN                 4
 #define BTM_UPLD_SIZE                  2312
index f25a825..7ad8d61 100644 (file)
@@ -510,34 +510,39 @@ static int btmrvl_download_cal_data(struct btmrvl_private *priv,
 static int btmrvl_check_device_tree(struct btmrvl_private *priv)
 {
        struct device_node *dt_node;
+       struct btmrvl_sdio_card *card = priv->btmrvl_dev.card;
        u8 cal_data[BT_CAL_HDR_LEN + BT_CAL_DATA_SIZE];
-       int ret;
-       u32 val;
+       int ret = 0;
+       u16 gpio, gap;
+
+       if (card->plt_of_node) {
+               dt_node = card->plt_of_node;
+               ret = of_property_read_u16(dt_node, "marvell,wakeup-pin",
+                                          &gpio);
+               if (ret)
+                       gpio = (priv->btmrvl_dev.gpio_gap & 0xff00) >> 8;
+
+               ret = of_property_read_u16(dt_node, "marvell,wakeup-gap-ms",
+                                          &gap);
+               if (ret)
+                       gap = (u8)(priv->btmrvl_dev.gpio_gap & 0x00ff);
 
-       for_each_compatible_node(dt_node, NULL, "btmrvl,cfgdata") {
-               ret = of_property_read_u32(dt_node, "btmrvl,gpio-gap", &val);
-               if (!ret)
-                       priv->btmrvl_dev.gpio_gap = val;
+               priv->btmrvl_dev.gpio_gap = (gpio << 8) + gap;
 
-               ret = of_property_read_u8_array(dt_node, "btmrvl,cal-data",
+               ret = of_property_read_u8_array(dt_node, "marvell,cal-data",
                                                cal_data + BT_CAL_HDR_LEN,
                                                BT_CAL_DATA_SIZE);
-               if (ret) {
-                       of_node_put(dt_node);
+               if (ret)
                        return ret;
-               }
 
                BT_DBG("Use cal data from device tree");
                ret = btmrvl_download_cal_data(priv, cal_data,
                                               BT_CAL_DATA_SIZE);
-               if (ret) {
+               if (ret)
                        BT_ERR("Fail to download calibrate data");
-                       of_node_put(dt_node);
-                       return ret;
-               }
        }
 
-       return 0;
+       return ret;
 }
 
 static int btmrvl_setup(struct hci_dev *hdev)
index c6ef248..f425ddf 100644 (file)
@@ -52,6 +52,68 @@ static struct memory_type_mapping mem_type_mapping_tbl[] = {
        {"EXTLAST", NULL, 0, 0xFE},
 };
 
+static const struct of_device_id btmrvl_sdio_of_match_table[] = {
+       { .compatible = "marvell,sd8897-bt" },
+       { .compatible = "marvell,sd8997-bt" },
+       { }
+};
+
+static irqreturn_t btmrvl_wake_irq_bt(int irq, void *priv)
+{
+       struct btmrvl_plt_wake_cfg *cfg = priv;
+
+       if (cfg->irq_bt >= 0) {
+               pr_info("%s: wake by bt", __func__);
+               cfg->wake_by_bt = true;
+               disable_irq_nosync(irq);
+       }
+
+       return IRQ_HANDLED;
+}
+
+/* This function parses the device tree node using the mmc subnode
+ * devicetree API.  The device node is saved in card->plt_of_node.
+ * If the device tree node exists and includes an interrupts attribute,
+ * this function will request the platform-specific wakeup interrupt.
+ */
+static int btmrvl_sdio_probe_of(struct device *dev,
+                               struct btmrvl_sdio_card *card)
+{
+       struct btmrvl_plt_wake_cfg *cfg;
+       int ret;
+
+       if (!dev->of_node ||
+           !of_match_node(btmrvl_sdio_of_match_table, dev->of_node)) {
+               pr_err("sdio platform data not available");
+               return -1;
+       }
+
+       card->plt_of_node = dev->of_node;
+
+       card->plt_wake_cfg = devm_kzalloc(dev, sizeof(*card->plt_wake_cfg),
+                                         GFP_KERNEL);
+       cfg = card->plt_wake_cfg;
+       if (cfg && card->plt_of_node) {
+               cfg->irq_bt = irq_of_parse_and_map(card->plt_of_node, 0);
+               if (!cfg->irq_bt) {
+                       dev_err(dev, "fail to parse irq_bt from device tree");
+               } else {
+                       ret = devm_request_irq(dev, cfg->irq_bt,
+                                              btmrvl_wake_irq_bt,
+                                              IRQF_TRIGGER_LOW,
+                                              "bt_wake", cfg);
+                       if (ret) {
+                               dev_err(dev,
+                                       "Failed to request irq_bt %d (%d)\n",
+                                       cfg->irq_bt, ret);
+                       }
+                       disable_irq(cfg->irq_bt);
+               }
+       }
+
+       return 0;
+}
+
 /* The btmrvl_sdio_remove() callback function is called
  * when user removes this module from kernel space or ejects
  * the card from the slot. The driver handles these 2 cases
@@ -1464,6 +1526,9 @@ static int btmrvl_sdio_probe(struct sdio_func *func,
 
        btmrvl_sdio_enable_host_int(card);
 
+       /* Device tree node parsing and platform-specific configuration */
+       btmrvl_sdio_probe_of(&func->dev, card);
+
        priv = btmrvl_add_card(card);
        if (!priv) {
                BT_ERR("Initializing card failed!");
@@ -1544,6 +1609,13 @@ static int btmrvl_sdio_suspend(struct device *dev)
                return 0;
        }
 
+       /* Enable platform specific wakeup interrupt */
+       if (card->plt_wake_cfg && card->plt_wake_cfg->irq_bt >= 0) {
+               card->plt_wake_cfg->wake_by_bt = false;
+               enable_irq(card->plt_wake_cfg->irq_bt);
+               enable_irq_wake(card->plt_wake_cfg->irq_bt);
+       }
+
        priv = card->priv;
        priv->adapter->is_suspending = true;
        hcidev = priv->btmrvl_dev.hcidev;
@@ -1606,6 +1678,13 @@ static int btmrvl_sdio_resume(struct device *dev)
        BT_DBG("%s: SDIO resume", hcidev->name);
        hci_resume_dev(hcidev);
 
+       /* Disable platform specific wakeup interrupt */
+       if (card->plt_wake_cfg && card->plt_wake_cfg->irq_bt >= 0) {
+               disable_irq_wake(card->plt_wake_cfg->irq_bt);
+               if (!card->plt_wake_cfg->wake_by_bt)
+                       disable_irq(card->plt_wake_cfg->irq_bt);
+       }
+
        return 0;
 }
 
index 1a3bd06..3a522d2 100644 (file)
 
 #define FIRMWARE_READY                         0xfedc
 
+struct btmrvl_plt_wake_cfg {
+       int irq_bt;
+       bool wake_by_bt;
+};
 
 struct btmrvl_sdio_card_reg {
        u8 cfg;
@@ -97,6 +101,8 @@ struct btmrvl_sdio_card {
        u16 sd_blksz_fw_dl;
        u8 rx_unit;
        struct btmrvl_private *priv;
+       struct device_node *plt_of_node;
+       struct btmrvl_plt_wake_cfg *plt_wake_cfg;
 };
 
 struct btmrvl_sdio_device {
index 0d4e372..6aae959 100644 (file)
@@ -2001,12 +2001,13 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
                return -EINVAL;
        }
 
-       /* At the moment only the hardware variant iBT 3.0 (LnP/SfP) is
-        * supported by this firmware loading method. This check has been
-        * put in place to ensure correct forward compatibility options
-        * when newer hardware variants come along.
+       /* At the moment the iBT 3.0 hardware variants 0x0b (LnP/SfP)
+        * and 0x0c (WsP) are supported by this firmware loading method.
+        *
+        * This check has been put in place to ensure correct forward
+        * compatibility options when newer hardware variants come along.
         */
-       if (ver.hw_variant != 0x0b) {
+       if (ver.hw_variant != 0x0b && ver.hw_variant != 0x0c) {
                BT_ERR("%s: Unsupported Intel hardware variant (%u)",
                       hdev->name, ver.hw_variant);
                return -EINVAL;
index 91d6051..f6f2b01 100644 (file)
@@ -1210,8 +1210,7 @@ static int intel_probe(struct platform_device *pdev)
 
        idev->pdev = pdev;
 
-       idev->reset = devm_gpiod_get_optional(&pdev->dev, "reset",
-                                             GPIOD_OUT_LOW);
+       idev->reset = devm_gpiod_get(&pdev->dev, "reset", GPIOD_OUT_LOW);
        if (IS_ERR(idev->reset)) {
                dev_err(&pdev->dev, "Unable to retrieve gpio\n");
                return PTR_ERR(idev->reset);
@@ -1223,8 +1222,7 @@ static int intel_probe(struct platform_device *pdev)
 
                dev_err(&pdev->dev, "No IRQ, falling back to gpio-irq\n");
 
-               host_wake = devm_gpiod_get_optional(&pdev->dev, "host-wake",
-                                                   GPIOD_IN);
+               host_wake = devm_gpiod_get(&pdev->dev, "host-wake", GPIOD_IN);
                if (IS_ERR(host_wake)) {
                        dev_err(&pdev->dev, "Unable to retrieve IRQ\n");
                        goto no_irq;
index f67ea1c..aba3121 100644 (file)
@@ -50,6 +50,7 @@ struct vhci_data {
        wait_queue_head_t read_wait;
        struct sk_buff_head readq;
 
+       struct mutex open_mutex;
        struct delayed_work open_timeout;
 };
 
@@ -87,12 +88,15 @@ static int vhci_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
        return 0;
 }
 
-static int vhci_create_device(struct vhci_data *data, __u8 opcode)
+static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
 {
        struct hci_dev *hdev;
        struct sk_buff *skb;
        __u8 dev_type;
 
+       if (data->hdev)
+               return -EBADFD;
+
        /* bits 0-1 are dev_type (BR/EDR or AMP) */
        dev_type = opcode & 0x03;
 
@@ -151,6 +155,17 @@ static int vhci_create_device(struct vhci_data *data, __u8 opcode)
        return 0;
 }
 
+static int vhci_create_device(struct vhci_data *data, __u8 opcode)
+{
+       int err;
+
+       mutex_lock(&data->open_mutex);
+       err = __vhci_create_device(data, opcode);
+       mutex_unlock(&data->open_mutex);
+
+       return err;
+}
+
 static inline ssize_t vhci_get_user(struct vhci_data *data,
                                    struct iov_iter *from)
 {
@@ -191,11 +206,6 @@ static inline ssize_t vhci_get_user(struct vhci_data *data,
        case HCI_VENDOR_PKT:
                cancel_delayed_work_sync(&data->open_timeout);
 
-               if (data->hdev) {
-                       kfree_skb(skb);
-                       return -EBADFD;
-               }
-
                opcode = *((__u8 *) skb->data);
                skb_pull(skb, 1);
 
@@ -320,6 +330,7 @@ static int vhci_open(struct inode *inode, struct file *file)
        skb_queue_head_init(&data->readq);
        init_waitqueue_head(&data->read_wait);
 
+       mutex_init(&data->open_mutex);
        INIT_DELAYED_WORK(&data->open_timeout, vhci_open_timeout);
 
        file->private_data = data;
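Condensing the hunks above, device creation is now funneled through one
serialized path (the delayed open_timeout work is assumed to create the device
the same way):

    mutex_lock(&data->open_mutex);
    err = __vhci_create_device(data, opcode);   /* -EBADFD if data->hdev is
                                                 * already set, so a racing
                                                 * HCI_VENDOR_PKT write can no
                                                 * longer create a second hdev */
    mutex_unlock(&data->open_mutex);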
index 22c2765..e524e83 100644 (file)
@@ -3969,7 +3969,7 @@ static netdev_tx_t hdlcdev_xmit(struct sk_buff *skb,
        dev_kfree_skb(skb);
 
        /* save start time for transmit timeout detection */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        /* start hardware transmitter if necessary */
        spin_lock_irqsave(&info->lock, flags);
@@ -4032,7 +4032,7 @@ static int hdlcdev_open(struct net_device *dev)
        tty_kref_put(tty);
 
        /* enable network layer transmit */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        netif_start_queue(dev);
 
        /* inform generic HDLC layer of current DCD status */
index 02e1818..2beb396 100644 (file)
@@ -394,7 +394,7 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
                clk[IMX6QDL_CLK_LDB_DI1_DIV_3_5] = imx_clk_fixed_factor("ldb_di1_div_3_5", "ldb_di1", 2, 7);
        } else {
                clk[IMX6QDL_CLK_ECSPI_ROOT] = imx_clk_divider("ecspi_root", "pll3_60m", base + 0x38, 19, 6);
-               clk[IMX6QDL_CLK_CAN_ROOT] = imx_clk_divider("can_root", "pll3_60", base + 0x20, 2, 6);
+               clk[IMX6QDL_CLK_CAN_ROOT] = imx_clk_divider("can_root", "pll3_60m", base + 0x20, 2, 6);
                clk[IMX6QDL_CLK_IPG_PER] = imx_clk_fixup_divider("ipg_per", "ipg", base + 0x1c, 0, 6, imx_cscmr1_fixup);
                clk[IMX6QDL_CLK_UART_SERIAL_PODF] = imx_clk_divider("uart_serial_podf", "pll3_80m",          base + 0x24, 0,  6);
                clk[IMX6QDL_CLK_LDB_DI0_DIV_3_5] = imx_clk_fixed_factor("ldb_di0_div_3_5", "ldb_di0_sel", 2, 7);
index 2bcecaf..c407c47 100644 (file)
@@ -42,7 +42,7 @@ static void __init tango_clocksource_init(struct device_node *np)
 
        ret = clocksource_mmio_init(xtal_in_cnt, "tango-xtal", xtal_freq, 350,
                                    32, clocksource_mmio_readl_up);
-       if (!ret) {
+       if (ret) {
                pr_err("%s: registration failed\n", np->full_name);
                return;
        }
index 10a5cfe..5f1147f 100644 (file)
@@ -193,12 +193,8 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
                wall_time = cur_wall_time - j_cdbs->prev_cpu_wall;
                j_cdbs->prev_cpu_wall = cur_wall_time;
 
-               if (cur_idle_time <= j_cdbs->prev_cpu_idle) {
-                       idle_time = 0;
-               } else {
-                       idle_time = cur_idle_time - j_cdbs->prev_cpu_idle;
-                       j_cdbs->prev_cpu_idle = cur_idle_time;
-               }
+               idle_time = cur_idle_time - j_cdbs->prev_cpu_idle;
+               j_cdbs->prev_cpu_idle = cur_idle_time;
 
                if (ignore_nice) {
                        u64 cur_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE];
index 30fe323..f502d5b 100644 (file)
@@ -813,6 +813,11 @@ static int core_get_max_pstate(void)
                        if (err)
                                goto skip_tar;
 
+                       /* For level 1 and 2, bits[23:16] contain the ratio */
+                       if (tdp_ctrl)
+                               tdp_ratio >>= 16;
+
+                       tdp_ratio &= 0xff; /* ratios are only 8 bits long */
                        if (tdp_ratio - 1 == tar) {
                                max_pstate = tar;
                                pr_debug("max_pstate=TAC %x\n", max_pstate);
index a0d4a08..aae0554 100644 (file)
@@ -63,6 +63,14 @@ static void to_talitos_ptr(struct talitos_ptr *ptr, dma_addr_t dma_addr,
                ptr->eptr = upper_32_bits(dma_addr);
 }
 
+static void copy_talitos_ptr(struct talitos_ptr *dst_ptr,
+                            struct talitos_ptr *src_ptr, bool is_sec1)
+{
+       dst_ptr->ptr = src_ptr->ptr;
+       if (!is_sec1)
+               dst_ptr->eptr = src_ptr->eptr;
+}
+
 static void to_talitos_ptr_len(struct talitos_ptr *ptr, unsigned int len,
                               bool is_sec1)
 {
@@ -1083,21 +1091,20 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq,
        sg_count = dma_map_sg(dev, areq->src, edesc->src_nents ?: 1,
                              (areq->src == areq->dst) ? DMA_BIDIRECTIONAL
                                                           : DMA_TO_DEVICE);
-
        /* hmac data */
        desc->ptr[1].len = cpu_to_be16(areq->assoclen);
        if (sg_count > 1 &&
            (ret = sg_to_link_tbl_offset(areq->src, sg_count, 0,
                                         areq->assoclen,
                                         &edesc->link_tbl[tbl_off])) > 1) {
-               tbl_off += ret;
-
                to_talitos_ptr(&desc->ptr[1], edesc->dma_link_tbl + tbl_off *
                               sizeof(struct talitos_ptr), 0);
                desc->ptr[1].j_extent = DESC_PTR_LNKTBL_JUMP;
 
                dma_sync_single_for_device(dev, edesc->dma_link_tbl,
                                           edesc->dma_len, DMA_BIDIRECTIONAL);
+
+               tbl_off += ret;
        } else {
                to_talitos_ptr(&desc->ptr[1], sg_dma_address(areq->src), 0);
                desc->ptr[1].j_extent = 0;
@@ -1126,11 +1133,13 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq,
        if (edesc->desc.hdr & DESC_HDR_MODE1_MDEU_CICV)
                sg_link_tbl_len += authsize;
 
-       if (sg_count > 1 &&
-           (ret = sg_to_link_tbl_offset(areq->src, sg_count, areq->assoclen,
-                                        sg_link_tbl_len,
-                                        &edesc->link_tbl[tbl_off])) > 1) {
-               tbl_off += ret;
+       if (sg_count == 1) {
+               to_talitos_ptr(&desc->ptr[4], sg_dma_address(areq->src) +
+                              areq->assoclen, 0);
+       } else if ((ret = sg_to_link_tbl_offset(areq->src, sg_count,
+                                               areq->assoclen, sg_link_tbl_len,
+                                               &edesc->link_tbl[tbl_off])) >
+                  1) {
                desc->ptr[4].j_extent |= DESC_PTR_LNKTBL_JUMP;
                to_talitos_ptr(&desc->ptr[4], edesc->dma_link_tbl +
                                              tbl_off *
@@ -1138,8 +1147,10 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq,
                dma_sync_single_for_device(dev, edesc->dma_link_tbl,
                                           edesc->dma_len,
                                           DMA_BIDIRECTIONAL);
-       } else
-               to_talitos_ptr(&desc->ptr[4], sg_dma_address(areq->src), 0);
+               tbl_off += ret;
+       } else {
+               copy_talitos_ptr(&desc->ptr[4], &edesc->link_tbl[tbl_off], 0);
+       }
 
        /* cipher out */
        desc->ptr[5].len = cpu_to_be16(cryptlen);
@@ -1151,11 +1162,13 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq,
 
        edesc->icv_ool = false;
 
-       if (sg_count > 1 &&
-           (sg_count = sg_to_link_tbl_offset(areq->dst, sg_count,
+       if (sg_count == 1) {
+               to_talitos_ptr(&desc->ptr[5], sg_dma_address(areq->dst) +
+                              areq->assoclen, 0);
+       } else if ((sg_count =
+                       sg_to_link_tbl_offset(areq->dst, sg_count,
                                              areq->assoclen, cryptlen,
-                                             &edesc->link_tbl[tbl_off])) >
-           1) {
+                                             &edesc->link_tbl[tbl_off])) > 1) {
                struct talitos_ptr *tbl_ptr = &edesc->link_tbl[tbl_off];
 
                to_talitos_ptr(&desc->ptr[5], edesc->dma_link_tbl +
@@ -1178,8 +1191,9 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq,
                                           edesc->dma_len, DMA_BIDIRECTIONAL);
 
                edesc->icv_ool = true;
-       } else
-               to_talitos_ptr(&desc->ptr[5], sg_dma_address(areq->dst), 0);
+       } else {
+               copy_talitos_ptr(&desc->ptr[5], &edesc->link_tbl[tbl_off], 0);
+       }
 
        /* iv out */
        map_single_talitos_ptr(dev, &desc->ptr[6], ivsize, ctx->iv,
@@ -2629,21 +2643,11 @@ struct talitos_crypto_alg {
        struct talitos_alg_template algt;
 };
 
-static int talitos_cra_init(struct crypto_tfm *tfm)
+static int talitos_init_common(struct talitos_ctx *ctx,
+                              struct talitos_crypto_alg *talitos_alg)
 {
-       struct crypto_alg *alg = tfm->__crt_alg;
-       struct talitos_crypto_alg *talitos_alg;
-       struct talitos_ctx *ctx = crypto_tfm_ctx(tfm);
        struct talitos_private *priv;
 
-       if ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_AHASH)
-               talitos_alg = container_of(__crypto_ahash_alg(alg),
-                                          struct talitos_crypto_alg,
-                                          algt.alg.hash);
-       else
-               talitos_alg = container_of(alg, struct talitos_crypto_alg,
-                                          algt.alg.crypto);
-
        /* update context with ptr to dev */
        ctx->dev = talitos_alg->dev;
 
@@ -2661,10 +2665,33 @@ static int talitos_cra_init(struct crypto_tfm *tfm)
        return 0;
 }
 
+static int talitos_cra_init(struct crypto_tfm *tfm)
+{
+       struct crypto_alg *alg = tfm->__crt_alg;
+       struct talitos_crypto_alg *talitos_alg;
+       struct talitos_ctx *ctx = crypto_tfm_ctx(tfm);
+
+       if ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_AHASH)
+               talitos_alg = container_of(__crypto_ahash_alg(alg),
+                                          struct talitos_crypto_alg,
+                                          algt.alg.hash);
+       else
+               talitos_alg = container_of(alg, struct talitos_crypto_alg,
+                                          algt.alg.crypto);
+
+       return talitos_init_common(ctx, talitos_alg);
+}
+
 static int talitos_cra_init_aead(struct crypto_aead *tfm)
 {
-       talitos_cra_init(crypto_aead_tfm(tfm));
-       return 0;
+       struct aead_alg *alg = crypto_aead_alg(tfm);
+       struct talitos_crypto_alg *talitos_alg;
+       struct talitos_ctx *ctx = crypto_aead_ctx(tfm);
+
+       talitos_alg = container_of(alg, struct talitos_crypto_alg,
+                                  algt.alg.aead);
+
+       return talitos_init_common(ctx, talitos_alg);
 }
 
 static int talitos_cra_init_ahash(struct crypto_tfm *tfm)
index 01087a3..792bdae 100644 (file)
@@ -1866,7 +1866,7 @@ static int i7core_mce_check_error(struct notifier_block *nb, unsigned long val,
 
        i7_dev = get_i7core_dev(mce->socketid);
        if (!i7_dev)
-               return NOTIFY_BAD;
+               return NOTIFY_DONE;
 
        mci = i7_dev->mci;
        pvt = mci->pvt_info;
index 93f0d41..8bf745d 100644 (file)
@@ -362,6 +362,7 @@ struct sbridge_pvt {
 
        /* Memory type detection */
        bool                    is_mirrored, is_lockstep, is_close_pg;
+       bool                    is_chan_hash;
 
        /* Fifo double buffers */
        struct mce              mce_entry[MCE_LOG_LEN];
@@ -1060,6 +1061,20 @@ static inline u8 sad_pkg_ha(u8 pkg)
        return (pkg >> 2) & 0x1;
 }
 
+static int haswell_chan_hash(int idx, u64 addr)
+{
+       int i;
+
+       /*
+        * XOR even bits from 12:26 to bit0 of idx,
+        *     odd bits from 13:27 to bit1
+        */
+       for (i = 12; i < 28; i += 2)
+               idx ^= (addr >> i) & 3;
+
+       return idx;
+}
+
 /****************************************************************************
                        Memory check routines
  ****************************************************************************/
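The hash above is small enough to check by hand; a self-contained probe with an
address chosen for illustration:

    #include <stdio.h>
    #include <stdint.h>

    /* the hash exactly as added above */
    static int haswell_chan_hash(int idx, uint64_t addr)
    {
            int i;

            for (i = 12; i < 28; i += 2)
                    idx ^= (addr >> i) & 3;

            return idx;
    }

    int main(void)
    {
            /* 0x5000 has bits 12 and 14 set: i=12 folds in 0b01 (2 -> 3),
             * i=14 folds in 0b01 again (3 -> 2); higher bit pairs are zero */
            printf("%d\n", haswell_chan_hash(2, 0x5000));    /* prints 2 */
            return 0;
    }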
@@ -1616,6 +1631,10 @@ static int get_dimm_config(struct mem_ctl_info *mci)
                KNL_MAX_CHANNELS : NUM_CHANNELS;
        u64 knl_mc_sizes[KNL_MAX_CHANNELS];
 
+       if (pvt->info.type == HASWELL || pvt->info.type == BROADWELL) {
+               pci_read_config_dword(pvt->pci_ha0, HASWELL_HASYSDEFEATURE2, &reg);
+               pvt->is_chan_hash = GET_BITFIELD(reg, 21, 21);
+       }
        if (pvt->info.type == HASWELL || pvt->info.type == BROADWELL ||
                        pvt->info.type == KNIGHTS_LANDING)
                pci_read_config_dword(pvt->pci_sad1, SAD_TARGET, &reg);
@@ -2118,12 +2137,15 @@ static int get_memory_error_data(struct mem_ctl_info *mci,
        }
 
        ch_way = TAD_CH(reg) + 1;
-       sck_way = 1 << TAD_SOCK(reg);
+       sck_way = TAD_SOCK(reg);
 
        if (ch_way == 3)
                idx = addr >> 6;
-       else
+       else {
                idx = (addr >> (6 + sck_way + shiftup)) & 0x3;
+               if (pvt->is_chan_hash)
+                       idx = haswell_chan_hash(idx, addr);
+       }
        idx = idx % ch_way;
 
        /*
@@ -2157,7 +2179,7 @@ static int get_memory_error_data(struct mem_ctl_info *mci,
                switch(ch_way) {
                case 2:
                case 4:
-                       sck_xch = 1 << sck_way * (ch_way >> 1);
+                       sck_xch = (1 << sck_way) * (ch_way >> 1);
                        break;
                default:
                        sprintf(msg, "Invalid mirror set. Can't decode addr");
@@ -2193,7 +2215,7 @@ static int get_memory_error_data(struct mem_ctl_info *mci,
 
        ch_addr = addr - offset;
        ch_addr >>= (6 + shiftup);
-       ch_addr /= ch_way * sck_way;
+       ch_addr /= sck_xch;
        ch_addr <<= (6 + shiftup);
        ch_addr |= addr & ((1 << (6 + shiftup)) - 1);
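The added parentheses in the sck_xch line matter because '*' binds tighter than
'<<' in C; a two-line demonstration with illustrative values:

    #include <stdio.h>

    int main(void)
    {
            int sck_way = 2, ch_way = 4;    /* illustrative values only */

            printf("%d\n", 1 << sck_way * (ch_way >> 1));    /* 1 << (2*2) = 16 */
            printf("%d\n", (1 << sck_way) * (ch_way >> 1));  /* (1<<2) * 2 =  8 */
            return 0;
    }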
 
@@ -3146,7 +3168,7 @@ static int sbridge_mce_check_error(struct notifier_block *nb, unsigned long val,
 
        mci = get_mci_for_node_id(mce->socketid);
        if (!mci)
-               return NOTIFY_BAD;
+               return NOTIFY_DONE;
        pvt = mci->pvt_info;
 
        /*
index f4ea80d..309311b 100644 (file)
@@ -1023,7 +1023,7 @@ static int fwnet_send_packet(struct fwnet_packet_task *ptask)
 
        spin_unlock_irqrestore(&dev->lock, flags);
 
-       dev->netdev->trans_start = jiffies;
+       netif_trans_update(dev->netdev);
  out:
        if (free)
                fwnet_free_ptask(ptask);
index 0ac594c..34b7419 100644 (file)
@@ -202,29 +202,44 @@ static const struct variable_validate variable_validate[] = {
        { NULL_GUID, "", NULL },
 };
 
+/*
+ * Check if @var_name matches the pattern given in @match_name.
+ *
+ * @var_name: an array of @len non-NUL characters.
+ * @match_name: a NUL-terminated pattern string, optionally ending in "*". A
+ *              final "*" character matches any trailing characters of
+ *              @var_name, including the case when there are none left in
+ *              @var_name.
+ * @match: on output, the number of non-wildcard characters in @match_name
+ *         that @var_name matches, regardless of the return value.
+ * @return: whether @var_name fully matches @match_name.
+ */
 static bool
 variable_matches(const char *var_name, size_t len, const char *match_name,
                 int *match)
 {
        for (*match = 0; ; (*match)++) {
                char c = match_name[*match];
-               char u = var_name[*match];
 
-               /* Wildcard in the matching name means we've matched */
-               if (c == '*')
+               switch (c) {
+               case '*':
+                       /* Wildcard in @match_name means we've matched. */
                        return true;
 
-               /* Case sensitive match */
-               if (!c && *match == len)
-                       return true;
+               case '\0':
+                       /* @match_name has ended. Has @var_name too? */
+                       return (*match == len);
 
-               if (c != u)
+               default:
+                       /*
+                        * We've reached a non-wildcard char in @match_name.
+                        * Continue only if there's an identical character in
+                        * @var_name.
+                        */
+                       if (*match < len && c == var_name[*match])
+                               continue;
                        return false;
-
-               if (!c)
-                       return true;
+               }
        }
-       return true;
 }
 
 bool
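The rewritten matcher is self-contained enough to exercise directly; the
name/pattern pairs below are hypothetical, not entries from variable_validate[]:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* the matcher as added above */
    static bool variable_matches(const char *var_name, size_t len,
                                 const char *match_name, int *match)
    {
            for (*match = 0; ; (*match)++) {
                    char c = match_name[*match];

                    switch (c) {
                    case '*':
                            return true;
                    case '\0':
                            return (*match == len);
                    default:
                            if (*match < len && c == var_name[*match])
                                    continue;
                            return false;
                    }
            }
    }

    int main(void)
    {
            int m;
            bool ok;

            ok = variable_matches("Boot0000", 8, "Boot*", &m);
            printf("%d %d\n", ok, m);    /* 1 4: '*' hit after matching "Boot" */

            ok = variable_matches("Bo", 2, "Boot*", &m);
            printf("%d %d\n", ok, m);    /* 0 2: var_name ran out first */
            return 0;
    }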
index 11bfee8..b5d0580 100644 (file)
@@ -360,7 +360,7 @@ static struct cpuidle_ops psci_cpuidle_ops __initdata = {
        .init = psci_dt_cpu_init_idle,
 };
 
-CPUIDLE_METHOD_OF_DECLARE(psci, "arm,psci", &psci_cpuidle_ops);
+CPUIDLE_METHOD_OF_DECLARE(psci, "psci", &psci_cpuidle_ops);
 #endif
 #endif
 
index d9ab0cd..4d9a315 100644 (file)
@@ -196,44 +196,6 @@ static int gpio_rcar_irq_set_wake(struct irq_data *d, unsigned int on)
        return 0;
 }
 
-static void gpio_rcar_irq_bus_lock(struct irq_data *d)
-{
-       struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
-       struct gpio_rcar_priv *p = gpiochip_get_data(gc);
-
-       pm_runtime_get_sync(&p->pdev->dev);
-}
-
-static void gpio_rcar_irq_bus_sync_unlock(struct irq_data *d)
-{
-       struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
-       struct gpio_rcar_priv *p = gpiochip_get_data(gc);
-
-       pm_runtime_put(&p->pdev->dev);
-}
-
-
-static int gpio_rcar_irq_request_resources(struct irq_data *d)
-{
-       struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
-       struct gpio_rcar_priv *p = gpiochip_get_data(gc);
-       int error;
-
-       error = pm_runtime_get_sync(&p->pdev->dev);
-       if (error < 0)
-               return error;
-
-       return 0;
-}
-
-static void gpio_rcar_irq_release_resources(struct irq_data *d)
-{
-       struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
-       struct gpio_rcar_priv *p = gpiochip_get_data(gc);
-
-       pm_runtime_put(&p->pdev->dev);
-}
-
 static irqreturn_t gpio_rcar_irq_handler(int irq, void *dev_id)
 {
        struct gpio_rcar_priv *p = dev_id;
@@ -280,32 +242,18 @@ static void gpio_rcar_config_general_input_output_mode(struct gpio_chip *chip,
 
 static int gpio_rcar_request(struct gpio_chip *chip, unsigned offset)
 {
-       struct gpio_rcar_priv *p = gpiochip_get_data(chip);
-       int error;
-
-       error = pm_runtime_get_sync(&p->pdev->dev);
-       if (error < 0)
-               return error;
-
-       error = pinctrl_request_gpio(chip->base + offset);
-       if (error)
-               pm_runtime_put(&p->pdev->dev);
-
-       return error;
+       return pinctrl_request_gpio(chip->base + offset);
 }
 
 static void gpio_rcar_free(struct gpio_chip *chip, unsigned offset)
 {
-       struct gpio_rcar_priv *p = gpiochip_get_data(chip);
-
        pinctrl_free_gpio(chip->base + offset);
 
-       /* Set the GPIO as an input to ensure that the next GPIO request won't
+       /*
+        * Set the GPIO as an input to ensure that the next GPIO request won't
         * drive the GPIO pin as an output.
         */
        gpio_rcar_config_general_input_output_mode(chip, offset, false);
-
-       pm_runtime_put(&p->pdev->dev);
 }
 
 static int gpio_rcar_direction_input(struct gpio_chip *chip, unsigned offset)
@@ -452,6 +400,7 @@ static int gpio_rcar_probe(struct platform_device *pdev)
        }
 
        pm_runtime_enable(dev);
+       pm_runtime_get_sync(dev);
 
        io = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
@@ -488,10 +437,6 @@ static int gpio_rcar_probe(struct platform_device *pdev)
        irq_chip->irq_unmask = gpio_rcar_irq_enable;
        irq_chip->irq_set_type = gpio_rcar_irq_set_type;
        irq_chip->irq_set_wake = gpio_rcar_irq_set_wake;
-       irq_chip->irq_bus_lock = gpio_rcar_irq_bus_lock;
-       irq_chip->irq_bus_sync_unlock = gpio_rcar_irq_bus_sync_unlock;
-       irq_chip->irq_request_resources = gpio_rcar_irq_request_resources;
-       irq_chip->irq_release_resources = gpio_rcar_irq_release_resources;
        irq_chip->flags = IRQCHIP_SET_TYPE_MASKED | IRQCHIP_MASK_ON_SUSPEND;
 
        ret = gpiochip_add_data(gpio_chip, p);
@@ -522,6 +467,7 @@ static int gpio_rcar_probe(struct platform_device *pdev)
 err1:
        gpiochip_remove(gpio_chip);
 err0:
+       pm_runtime_put(dev);
        pm_runtime_disable(dev);
        return ret;
 }
@@ -532,6 +478,7 @@ static int gpio_rcar_remove(struct platform_device *pdev)
 
        gpiochip_remove(&p->gpio_chip);
 
+       pm_runtime_put(&pdev->dev);
        pm_runtime_disable(&pdev->dev);
        return 0;
 }
index 682070d..2dc5258 100644 (file)
@@ -977,7 +977,7 @@ bool acpi_can_fallback_to_crs(struct acpi_device *adev, const char *con_id)
                lookup = kmalloc(sizeof(*lookup), GFP_KERNEL);
                if (lookup) {
                        lookup->adev = adev;
-                       lookup->con_id = con_id;
+                       lookup->con_id = kstrdup(con_id, GFP_KERNEL);
                        list_add_tail(&lookup->node, &acpi_crs_lookup_list);
                }
        }
index b77489d..1bcbade 100644 (file)
@@ -1591,6 +1591,7 @@ struct amdgpu_uvd {
        struct amdgpu_bo        *vcpu_bo;
        void                    *cpu_addr;
        uint64_t                gpu_addr;
+       unsigned                fw_version;
        void                    *saved_bo;
        atomic_t                handles[AMDGPU_MAX_UVD_HANDLES];
        struct drm_file         *filp[AMDGPU_MAX_UVD_HANDLES];
index d6b0bff..b7b583c 100644 (file)
@@ -425,6 +425,10 @@ static int acp_resume(void *handle)
        struct acp_pm_domain *apd;
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
+       /* return early if no ACP */
+       if (!adev->acp.acp_genpd)
+               return 0;
+
        /* SMU block will power on ACP irrespective of ACP runtime status.
         * Power off explicitly based on genpd ACP runtime status so that ACP
         * hw and ACP-genpd status are in sync.
index 0020a0e..35a1248 100644 (file)
@@ -63,10 +63,6 @@ bool amdgpu_has_atpx(void) {
        return amdgpu_atpx_priv.atpx_detected;
 }
 
-bool amdgpu_has_atpx_dgpu_power_cntl(void) {
-       return amdgpu_atpx_priv.atpx.functions.power_cntl;
-}
-
 /**
  * amdgpu_atpx_call - call an ATPX method
  *
@@ -146,6 +142,13 @@ static void amdgpu_atpx_parse_functions(struct amdgpu_atpx_functions *f, u32 mas
  */
 static int amdgpu_atpx_validate(struct amdgpu_atpx *atpx)
 {
+       /* make sure required functions are enabled */
+       /* dGPU power control is required */
+       if (atpx->functions.power_cntl == false) {
+               printk("ATPX dGPU power cntl not present, forcing\n");
+               atpx->functions.power_cntl = true;
+       }
+
        if (atpx->functions.px_params) {
                union acpi_object *info;
                struct atpx_px_params output;
index 6121174..2139da7 100644 (file)
@@ -62,12 +62,6 @@ static const char *amdgpu_asic_name[] = {
        "LAST",
 };
 
-#if defined(CONFIG_VGA_SWITCHEROO)
-bool amdgpu_has_atpx_dgpu_power_cntl(void);
-#else
-static inline bool amdgpu_has_atpx_dgpu_power_cntl(void) { return false; }
-#endif
-
 bool amdgpu_device_is_px(struct drm_device *dev)
 {
        struct amdgpu_device *adev = dev->dev_private;
@@ -1485,7 +1479,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 
        if (amdgpu_runtime_pm == 1)
                runtime = true;
-       if (amdgpu_device_is_px(ddev) && amdgpu_has_atpx_dgpu_power_cntl())
+       if (amdgpu_device_is_px(ddev))
                runtime = true;
        vga_switcheroo_register_client(adev->pdev, &amdgpu_switcheroo_ops, runtime);
        if (runtime)
index aef70db..b04337d 100644 (file)
@@ -303,7 +303,7 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
                        fw_info.feature = adev->vce.fb_version;
                        break;
                case AMDGPU_INFO_FW_UVD:
-                       fw_info.ver = 0;
+                       fw_info.ver = adev->uvd.fw_version;
                        fw_info.feature = 0;
                        break;
                case AMDGPU_INFO_FW_GMC:
index 8d432e6..81bd964 100644 (file)
@@ -53,7 +53,7 @@ struct amdgpu_hpd;
 
 #define AMDGPU_MAX_HPD_PINS 6
 #define AMDGPU_MAX_CRTCS 6
-#define AMDGPU_MAX_AFMT_BLOCKS 7
+#define AMDGPU_MAX_AFMT_BLOCKS 9
 
 enum amdgpu_rmx_type {
        RMX_OFF,
@@ -309,8 +309,8 @@ struct amdgpu_mode_info {
        struct atom_context *atom_context;
        struct card_info *atom_card_info;
        bool mode_config_initialized;
-       struct amdgpu_crtc *crtcs[6];
-       struct amdgpu_afmt *afmt[7];
+       struct amdgpu_crtc *crtcs[AMDGPU_MAX_CRTCS];
+       struct amdgpu_afmt *afmt[AMDGPU_MAX_AFMT_BLOCKS];
        /* DVI-I properties */
        struct drm_property *coherent_mode_property;
        /* DAC enable load detect */
index 6f3369d..11af449 100644 (file)
@@ -223,6 +223,8 @@ static int amdgpu_verify_access(struct ttm_buffer_object *bo, struct file *filp)
 {
        struct amdgpu_bo *rbo = container_of(bo, struct amdgpu_bo, tbo);
 
+       if (amdgpu_ttm_tt_get_usermm(bo->ttm))
+               return -EPERM;
        return drm_vma_node_verify_access(&rbo->gem_base.vma_node, filp);
 }
 
index 338da80..871018c 100644 (file)
@@ -158,6 +158,9 @@ int amdgpu_uvd_sw_init(struct amdgpu_device *adev)
        DRM_INFO("Found UVD firmware Version: %hu.%hu Family ID: %hu\n",
                version_major, version_minor, family_id);
 
+       adev->uvd.fw_version = ((version_major << 24) | (version_minor << 16) |
+                               (family_id << 8));
+
        bo_size = AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8)
                 +  AMDGPU_UVD_STACK_SIZE + AMDGPU_UVD_HEAP_SIZE;
        r = amdgpu_bo_create(adev, bo_size, PAGE_SIZE, true,
@@ -255,6 +258,8 @@ int amdgpu_uvd_suspend(struct amdgpu_device *adev)
        if (i == AMDGPU_MAX_UVD_HANDLES)
                return 0;
 
+       cancel_delayed_work_sync(&adev->uvd.idle_work);
+
        size = amdgpu_bo_size(adev->uvd.vcpu_bo);
        ptr = adev->uvd.cpu_addr;
 
index 4bec0c1..481a64f 100644 (file)
@@ -234,6 +234,7 @@ int amdgpu_vce_suspend(struct amdgpu_device *adev)
        if (i == AMDGPU_MAX_VCE_HANDLES)
                return 0;
 
+       cancel_delayed_work_sync(&adev->vce.idle_work);
        /* TODO: suspending running encoding sessions isn't supported */
        return -EINVAL;
 }
index 05b0353..a4a2e6c 100644 (file)
@@ -910,7 +910,10 @@ static int gmc_v7_0_late_init(void *handle)
 {
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-       return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
+       if (amdgpu_vm_fault_stop != AMDGPU_VM_FAULT_STOP_ALWAYS)
+               return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
+       else
+               return 0;
 }
 
 static int gmc_v7_0_sw_init(void *handle)
index 02deb32..7a9db2c 100644 (file)
@@ -870,7 +870,10 @@ static int gmc_v8_0_late_init(void *handle)
 {
        struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-       return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
+       if (amdgpu_vm_fault_stop != AMDGPU_VM_FAULT_STOP_ALWAYS)
+               return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
+       else
+               return 0;
 }
 
 #define mmMC_SEQ_MISC0_FIJI 0xA71
index 27fbd79..71ea052 100644 (file)
@@ -1672,13 +1672,19 @@ static int drm_dp_payload_send_msg(struct drm_dp_mst_topology_mgr *mgr,
        u8 sinks[DRM_DP_MAX_SDP_STREAMS];
        int i;
 
+       port = drm_dp_get_validated_port_ref(mgr, port);
+       if (!port)
+               return -EINVAL;
+
        port_num = port->port_num;
        mstb = drm_dp_get_validated_mstb_ref(mgr, port->parent);
        if (!mstb) {
                mstb = drm_dp_get_last_connected_port_and_mstb(mgr, port->parent, &port_num);
 
-               if (!mstb)
+               if (!mstb) {
+                       drm_dp_put_port(port);
                        return -EINVAL;
+               }
        }
 
        txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);
@@ -1707,6 +1713,7 @@ static int drm_dp_payload_send_msg(struct drm_dp_mst_topology_mgr *mgr,
        kfree(txmsg);
 fail_put:
        drm_dp_put_mst_branch_device(mstb);
+       drm_dp_put_port(port);
        return ret;
 }
 
@@ -1789,6 +1796,11 @@ int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr)
                req_payload.start_slot = cur_slots;
                if (mgr->proposed_vcpis[i]) {
                        port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi);
+                       port = drm_dp_get_validated_port_ref(mgr, port);
+                       if (!port) {
+                               mutex_unlock(&mgr->payload_lock);
+                               return -EINVAL;
+                       }
                        req_payload.num_slots = mgr->proposed_vcpis[i]->num_slots;
                        req_payload.vcpi = mgr->proposed_vcpis[i]->vcpi;
                } else {
@@ -1816,6 +1828,9 @@ int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr)
                        mgr->payloads[i].payload_state = req_payload.payload_state;
                }
                cur_slots += req_payload.num_slots;
+
+               if (port)
+                       drm_dp_put_port(port);
        }
 
        for (i = 0; i < mgr->max_payloads; i++) {
@@ -2121,6 +2136,8 @@ int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr)
 
        if (mgr->mst_primary) {
                int sret;
+               u8 guid[16];
+
                sret = drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd, DP_RECEIVER_CAP_SIZE);
                if (sret != DP_RECEIVER_CAP_SIZE) {
                        DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n");
@@ -2135,6 +2152,16 @@ int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr)
                        ret = -1;
                        goto out_unlock;
                }
+
+               /* Some hubs forget their guids after they resume */
+               sret = drm_dp_dpcd_read(mgr->aux, DP_GUID, guid, 16);
+               if (sret != 16) {
+                       DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n");
+                       ret = -1;
+                       goto out_unlock;
+               }
+               drm_dp_check_mstb_guid(mgr->mst_primary, guid);
+
                ret = 0;
        } else
                ret = -1;
index 09198d0..306dde1 100644 (file)
@@ -572,6 +572,24 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
                goto fail;
        }
 
+       /*
+        * Set the GPU linear window to be at the end of the DMA window, where
+        * the CMA area is likely to reside. This ensures that we are able to
+        * map the command buffers while having the linear window overlap as
+        * much RAM as possible, so we can optimize mappings for other buffers.
+        *
+        * For 3D cores only do this if MC2.0 is present, as with MC1.0 it leads
+        * to different views of the memory on the individual engines.
+        */
+       if (!(gpu->identity.features & chipFeatures_PIPE_3D) ||
+           (gpu->identity.minor_features0 & chipMinorFeatures0_MC20)) {
+               u32 dma_mask = (u32)dma_get_required_mask(gpu->dev);
+               if (dma_mask < PHYS_OFFSET + SZ_2G)
+                       gpu->memory_base = PHYS_OFFSET;
+               else
+                       gpu->memory_base = dma_mask - SZ_2G + 1;
+       }
+
        ret = etnaviv_hw_reset(gpu);
        if (ret)
                goto fail;
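The linear-window placement above reduces to simple address arithmetic; a
standalone check with illustrative, platform-dependent values for PHYS_OFFSET
and the DMA mask:

    #include <stdio.h>
    #include <stdint.h>

    #define SZ_2G 0x80000000u

    int main(void)
    {
            uint32_t phys_offset = 0x40000000u;  /* illustrative RAM base */
            uint32_t dma_mask    = 0xffffffffu;  /* illustrative 32-bit mask */
            uint32_t memory_base;

            if (dma_mask < phys_offset + SZ_2G)
                    memory_base = phys_offset;
            else
                    memory_base = dma_mask - SZ_2G + 1;

            /* window [0x80000000, 0xffffffff]: the top 2 GiB of the mask */
            printf("base = 0x%08x\n", (unsigned)memory_base);
            return 0;
    }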
@@ -1566,7 +1584,6 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
 {
        struct device *dev = &pdev->dev;
        struct etnaviv_gpu *gpu;
-       u32 dma_mask;
        int err = 0;
 
        gpu = devm_kzalloc(dev, sizeof(*gpu), GFP_KERNEL);
@@ -1576,18 +1593,6 @@ static int etnaviv_gpu_platform_probe(struct platform_device *pdev)
        gpu->dev = &pdev->dev;
        mutex_init(&gpu->lock);
 
-       /*
-        * Set the GPU linear window to be at the end of the DMA window, where
-        * the CMA area is likely to reside. This ensures that we are able to
-        * map the command buffers while having the linear window overlap as
-        * much RAM as possible, so we can optimize mappings for other buffers.
-        */
-       dma_mask = (u32)dma_get_required_mask(dev);
-       if (dma_mask < PHYS_OFFSET + SZ_2G)
-               gpu->memory_base = PHYS_OFFSET;
-       else
-               gpu->memory_base = dma_mask - SZ_2G + 1;
-
        /* Map registers: */
        gpu->mmio = etnaviv_ioremap(pdev, NULL, dev_name(gpu->dev));
        if (IS_ERR(gpu->mmio))
index 1048093..daba7eb 100644 (file)
@@ -2634,8 +2634,9 @@ struct drm_i915_cmd_table {
 
 /* WaRsDisableCoarsePowerGating:skl,bxt */
 #define NEEDS_WaRsDisableCoarsePowerGating(dev) (IS_BXT_REVID(dev, 0, BXT_REVID_A1) || \
-                                                ((IS_SKL_GT3(dev) || IS_SKL_GT4(dev)) && \
-                                                 IS_SKL_REVID(dev, 0, SKL_REVID_F0)))
+                                                IS_SKL_GT3(dev) || \
+                                                IS_SKL_GT4(dev))
+
 /*
  * dp aux and gmbus irq on gen4 seems to be able to generate legacy interrupts
  * even when in MSI mode. This results in spurious interrupt warnings if the
index 18ba813..4d30b60 100644 (file)
@@ -501,19 +501,24 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
        if (pvec != NULL) {
                struct mm_struct *mm = obj->userptr.mm->mm;
 
-               down_read(&mm->mmap_sem);
-               while (pinned < npages) {
-                       ret = get_user_pages_remote(work->task, mm,
-                                       obj->userptr.ptr + pinned * PAGE_SIZE,
-                                       npages - pinned,
-                                       !obj->userptr.read_only, 0,
-                                       pvec + pinned, NULL);
-                       if (ret < 0)
-                               break;
-
-                       pinned += ret;
+               ret = -EFAULT;
+               if (atomic_inc_not_zero(&mm->mm_users)) {
+                       down_read(&mm->mmap_sem);
+                       while (pinned < npages) {
+                               ret = get_user_pages_remote
+                                       (work->task, mm,
+                                        obj->userptr.ptr + pinned * PAGE_SIZE,
+                                        npages - pinned,
+                                        !obj->userptr.read_only, 0,
+                                        pvec + pinned, NULL);
+                               if (ret < 0)
+                                       break;
+
+                               pinned += ret;
+                       }
+                       up_read(&mm->mmap_sem);
+                       mmput(mm);
                }
-               up_read(&mm->mmap_sem);
        }
 
        mutex_lock(&dev->struct_mutex);
index 6a978ce..5c6080f 100644 (file)
@@ -841,11 +841,11 @@ static int logical_ring_prepare(struct drm_i915_gem_request *req, int bytes)
                if (unlikely(total_bytes > remain_usable)) {
                        /*
                         * The base request will fit but the reserved space
-                        * falls off the end. So only need to to wait for the
-                        * reserved size after flushing out the remainder.
+                        * falls off the end. So no immediate wrap is needed,
+                        * only a wait for the reserved amount of space from
+                        * the start of the ringbuffer.
                         */
                        wait_bytes = remain_actual + ringbuf->reserved_size;
-                       need_wrap = true;
                } else if (total_bytes > ringbuf->space) {
                        /* No wrapping required, just waiting. */
                        wait_bytes = total_bytes;
@@ -1913,15 +1913,18 @@ static int gen8_emit_request_render(struct drm_i915_gem_request *request)
        struct intel_ringbuffer *ringbuf = request->ringbuf;
        int ret;
 
-       ret = intel_logical_ring_begin(request, 6 + WA_TAIL_DWORDS);
+       ret = intel_logical_ring_begin(request, 8 + WA_TAIL_DWORDS);
        if (ret)
                return ret;
 
+       /* We're using qword write, seqno should be aligned to 8 bytes. */
+       BUILD_BUG_ON(I915_GEM_HWS_INDEX & 1);
+
        /* w/a for post sync ops following a GPGPU operation we
         * need a prior CS_STALL, which is emitted by the flush
         * following the batch.
         */
-       intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(5));
+       intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
        intel_logical_ring_emit(ringbuf,
                                (PIPE_CONTROL_GLOBAL_GTT_IVB |
                                 PIPE_CONTROL_CS_STALL |
@@ -1929,7 +1932,10 @@ static int gen8_emit_request_render(struct drm_i915_gem_request *request)
        intel_logical_ring_emit(ringbuf, hws_seqno_address(request->ring));
        intel_logical_ring_emit(ringbuf, 0);
        intel_logical_ring_emit(ringbuf, i915_gem_request_get_seqno(request));
+       /* We're thrashing one dword of HWS. */
+       intel_logical_ring_emit(ringbuf, 0);
        intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
+       intel_logical_ring_emit(ringbuf, MI_NOOP);
        return intel_logical_ring_advance_and_submit(request);
 }
 
index 347d4df..8ed3cf3 100644 (file)
@@ -2876,25 +2876,28 @@ skl_plane_relative_data_rate(const struct intel_crtc_state *cstate,
                             const struct drm_plane_state *pstate,
                             int y)
 {
-       struct intel_crtc *intel_crtc = to_intel_crtc(cstate->base.crtc);
+       struct intel_plane_state *intel_pstate = to_intel_plane_state(pstate);
        struct drm_framebuffer *fb = pstate->fb;
+       uint32_t width = 0, height = 0;
+
+       width = drm_rect_width(&intel_pstate->src) >> 16;
+       height = drm_rect_height(&intel_pstate->src) >> 16;
+
+       if (intel_rotation_90_or_270(pstate->rotation))
+               swap(width, height);
 
        /* for planar format */
        if (fb->pixel_format == DRM_FORMAT_NV12) {
                if (y)  /* y-plane data rate */
-                       return intel_crtc->config->pipe_src_w *
-                               intel_crtc->config->pipe_src_h *
+                       return width * height *
                                drm_format_plane_cpp(fb->pixel_format, 0);
                else    /* uv-plane data rate */
-                       return (intel_crtc->config->pipe_src_w/2) *
-                               (intel_crtc->config->pipe_src_h/2) *
+                       return (width / 2) * (height / 2) *
                                drm_format_plane_cpp(fb->pixel_format, 1);
        }
 
        /* for packed formats */
-       return intel_crtc->config->pipe_src_w *
-               intel_crtc->config->pipe_src_h *
-               drm_format_plane_cpp(fb->pixel_format, 0);
+       return width * height * drm_format_plane_cpp(fb->pixel_format, 0);
 }
 
 /*
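A quick arithmetic check of the per-plane rates computed above, with cpp values
assumed to be 1 for the NV12 Y plane and 2 for the interleaved CbCr plane:

    #include <stdio.h>

    int main(void)
    {
            int width = 1920, height = 1080;     /* illustrative plane source */

            int y_rate  = width * height * 1;             /* 2073600 */
            int uv_rate = (width / 2) * (height / 2) * 2; /* 1036800 */

            /* a 90/270-degree rotation swaps width and height before the
             * multiply, which leaves both products unchanged */
            printf("y=%d uv=%d\n", y_rate, uv_rate);
            return 0;
    }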
@@ -2973,8 +2976,9 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *cstate,
                struct drm_framebuffer *fb = plane->state->fb;
                int id = skl_wm_plane_id(intel_plane);
 
-               if (fb == NULL)
+               if (!to_intel_plane_state(plane->state)->visible)
                        continue;
+
                if (plane->type == DRM_PLANE_TYPE_CURSOR)
                        continue;
 
@@ -3000,7 +3004,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *cstate,
                uint16_t plane_blocks, y_plane_blocks = 0;
                int id = skl_wm_plane_id(intel_plane);
 
-               if (pstate->fb == NULL)
+               if (!to_intel_plane_state(pstate)->visible)
                        continue;
                if (plane->type == DRM_PLANE_TYPE_CURSOR)
                        continue;
@@ -3123,26 +3127,36 @@ static bool skl_compute_plane_wm(const struct drm_i915_private *dev_priv,
 {
        struct drm_plane *plane = &intel_plane->base;
        struct drm_framebuffer *fb = plane->state->fb;
+       struct intel_plane_state *intel_pstate =
+                                       to_intel_plane_state(plane->state);
        uint32_t latency = dev_priv->wm.skl_latency[level];
        uint32_t method1, method2;
        uint32_t plane_bytes_per_line, plane_blocks_per_line;
        uint32_t res_blocks, res_lines;
        uint32_t selected_result;
        uint8_t cpp;
+       uint32_t width = 0, height = 0;
 
-       if (latency == 0 || !cstate->base.active || !fb)
+       if (latency == 0 || !cstate->base.active || !intel_pstate->visible)
                return false;
 
+       width = drm_rect_width(&intel_pstate->src) >> 16;
+       height = drm_rect_height(&intel_pstate->src) >> 16;
+
+       if (intel_rotation_90_or_270(plane->state->rotation))
+               swap(width, height);
+
        cpp = drm_format_plane_cpp(fb->pixel_format, 0);
        method1 = skl_wm_method1(skl_pipe_pixel_rate(cstate),
                                 cpp, latency);
        method2 = skl_wm_method2(skl_pipe_pixel_rate(cstate),
                                 cstate->base.adjusted_mode.crtc_htotal,
-                                cstate->pipe_src_w,
-                                cpp, fb->modifier[0],
+                                width,
+                                cpp,
+                                fb->modifier[0],
                                 latency);
 
-       plane_bytes_per_line = cstate->pipe_src_w * cpp;
+       plane_bytes_per_line = width * cpp;
        plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512);
 
        if (fb->modifier[0] == I915_FORMAT_MOD_Y_TILED ||
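
plane_bytes_per_line is now based on the rotated plane source width instead of the pipe width, and DIV_ROUND_UP turns it into the 512-byte blocks the SKL watermark hardware counts in. A quick worked example of the rounding:

#include <stdint.h>
#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
        uint32_t width = 1920, cpp = 4;               /* e.g. XRGB8888 */
        uint32_t bytes_per_line  = width * cpp;       /* 7680 */
        uint32_t blocks_per_line = DIV_ROUND_UP(bytes_per_line, 512); /* 15 */

        printf("%u bytes/line -> %u 512B blocks/line\n",
               (unsigned)bytes_per_line, (unsigned)blocks_per_line);
        return 0;
}
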
index 45ce45a..9121646 100644 (file)
@@ -968,7 +968,7 @@ static int gen9_init_workarounds(struct intel_engine_cs *ring)
 
        /* WaForceContextSaveRestoreNonCoherent:skl,bxt */
        tmp = HDC_FORCE_CONTEXT_SAVE_RESTORE_NON_COHERENT;
-       if (IS_SKL_REVID(dev, SKL_REVID_F0, SKL_REVID_F0) ||
+       if (IS_SKL_REVID(dev, SKL_REVID_F0, REVID_FOREVER) ||
            IS_BXT_REVID(dev, BXT_REVID_B0, REVID_FOREVER))
                tmp |= HDC_FORCE_CSR_NON_COHERENT_OVR_DISABLE;
        WA_SET_BIT_MASKED(HDC_CHICKEN0, tmp);
@@ -1085,7 +1085,8 @@ static int skl_init_workarounds(struct intel_engine_cs *ring)
                WA_SET_BIT_MASKED(HIZ_CHICKEN,
                                  BDW_HIZ_POWER_COMPILER_CLOCK_GATING_DISABLE);
 
-       if (IS_SKL_REVID(dev, 0, SKL_REVID_F0)) {
+       /* This is tied to WaForceContextSaveRestoreNonCoherent */
+       if (IS_SKL_REVID(dev, 0, REVID_FOREVER)) {
                /*
                 * Use Force Non-Coherent whenever executing a 3D context. This
                 * is a workaround for a possible hang in the unlikely event
@@ -2090,10 +2091,12 @@ int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
 {
        struct drm_i915_private *dev_priv = to_i915(dev);
        struct drm_i915_gem_object *obj = ringbuf->obj;
+       /* Ring wraparound at offset 0 sometimes hangs. No idea why. */
+       unsigned flags = PIN_OFFSET_BIAS | 4096;
        int ret;
 
        if (HAS_LLC(dev_priv) && !obj->stolen) {
-               ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE, 0);
+               ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE, flags);
                if (ret)
                        return ret;
 
@@ -2109,7 +2112,8 @@ int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
                        return -ENOMEM;
                }
        } else {
-               ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE, PIN_MAPPABLE);
+               ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE,
+                                           flags | PIN_MAPPABLE);
                if (ret)
                        return ret;
 
@@ -2454,11 +2458,11 @@ static int __intel_ring_prepare(struct intel_engine_cs *ring, int bytes)
                if (unlikely(total_bytes > remain_usable)) {
                        /*
                         * The base request will fit but the reserved space
-                        * falls off the end. So only need to to wait for the
-                        * reserved size after flushing out the remainder.
+                        * falls off the end. So no immediate wrap is needed;
+                        * we only need to wait for the reserved size,
+                        * measured from the start of the ringbuffer.
                         */
                        wait_bytes = remain_actual + ringbuf->reserved_size;
-                       need_wrap = true;
                } else if (total_bytes > ringbuf->space) {
                        /* No wrapping required, just waiting. */
                        wait_bytes = total_bytes;
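
With need_wrap dropped from the middle branch, __intel_ring_prepare distinguishes three cases: wrap-and-wait when the request itself does not fit, wait-only when just the reserve falls off the end, and a plain wait otherwise. A simplified model of that decision (remain_usable is collapsed into remain_actual here; the real code also subtracts a wrap margin):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct ring { uint32_t size, tail, space, reserved; };

/* how many bytes to wait for, and whether to wrap before emitting */
static void ring_prepare(const struct ring *r, uint32_t bytes,
                         uint32_t *wait, bool *wrap)
{
        uint32_t remain_actual = r->size - r->tail;  /* bytes to the end */
        uint32_t remain_usable = remain_actual;      /* simplified: no wrap margin */
        uint32_t total = bytes + r->reserved;

        *wrap = false;
        if (bytes > remain_usable) {
                /* the request itself doesn't fit: wrap, then wait for all of it */
                *wait = remain_actual + total;
                *wrap = true;
        } else if (total > remain_usable) {
                /* request fits, reserve falls off the end: no immediate wrap,
                 * just wait for the reserve measured from the ring start */
                *wait = remain_actual + r->reserved;
        } else {
                *wait = total;  /* plain wait, no wrap */
        }
}

int main(void)
{
        struct ring r = { 4096, 3800, 2000, 256 };
        uint32_t wait;
        bool wrap;

        ring_prepare(&r, 512, &wait, &wrap);
        printf("wait=%u wrap=%d\n", (unsigned)wait, wrap);  /* wait=1064 wrap=1 */
        return 0;
}
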
index 436d8f2..68b6f69 100644 (file)
@@ -1189,7 +1189,11 @@ static void intel_uncore_fw_domains_init(struct drm_device *dev)
        } else if (IS_HASWELL(dev) || IS_BROADWELL(dev)) {
                dev_priv->uncore.funcs.force_wake_get =
                        fw_domains_get_with_thread_status;
-               dev_priv->uncore.funcs.force_wake_put = fw_domains_put;
+               if (IS_HASWELL(dev))
+                       dev_priv->uncore.funcs.force_wake_put =
+                               fw_domains_put_with_fifo;
+               else
+                       dev_priv->uncore.funcs.force_wake_put = fw_domains_put;
                fw_domain_init(dev_priv, FW_DOMAIN_ID_RENDER,
                               FORCEWAKE_MT, FORCEWAKE_ACK_HSW);
        } else if (IS_IVYBRIDGE(dev)) {
index ae96ebc..e81aefe 100644 (file)
@@ -1276,18 +1276,18 @@ nouveau_connector_create(struct drm_device *dev, int index)
                break;
        default:
                if (disp->dithering_mode) {
+                       nv_connector->dithering_mode = DITHERING_MODE_AUTO;
                        drm_object_attach_property(&connector->base,
                                                   disp->dithering_mode,
                                                   nv_connector->
                                                   dithering_mode);
-                       nv_connector->dithering_mode = DITHERING_MODE_AUTO;
                }
                if (disp->dithering_depth) {
+                       nv_connector->dithering_depth = DITHERING_DEPTH_AUTO;
                        drm_object_attach_property(&connector->base,
                                                   disp->dithering_depth,
                                                   nv_connector->
                                                   dithering_depth);
-                       nv_connector->dithering_depth = DITHERING_DEPTH_AUTO;
                }
                break;
        }
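
Both hunks apply the same ordering rule: give the backing field its default before drm_object_attach_property() publishes the property, so nothing can observe the unset value. The rule in miniature, with illustrative names rather than the drm API:

#include <stdio.h>

struct connector { int dithering_mode; };

#define DITHERING_MODE_AUTO 2

static void attach_property(struct connector *c)
{
        c->dithering_mode = DITHERING_MODE_AUTO; /* set the default first */
        /* drm_object_attach_property(...) would run here, after the
         * backing field already holds a valid value */
        printf("attached with mode %d\n", c->dithering_mode);
}

int main(void)
{
        struct connector c = { -1 };

        attach_property(&c);
        return 0;
}
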
index c56a886..b2de290 100644 (file)
@@ -1832,6 +1832,8 @@ gf100_gr_init(struct gf100_gr *gr)
 
        gf100_gr_mmio(gr, gr->func->mmio);
 
+       nvkm_mask(device, TPC_UNIT(0, 0, 0x05c), 0x00000001, 0x00000001);
+
        memcpy(tpcnr, gr->tpc_nr, sizeof(gr->tpc_nr));
        for (i = 0, gpc = -1; i < gr->tpc_total; i++) {
                do {
index 76c4bdf..34f7a29 100644 (file)
@@ -2608,10 +2608,152 @@ static void evergreen_agp_enable(struct radeon_device *rdev)
        WREG32(VM_CONTEXT1_CNTL, 0);
 }
 
+static const unsigned ni_dig_offsets[] =
+{
+       NI_DIG0_REGISTER_OFFSET,
+       NI_DIG1_REGISTER_OFFSET,
+       NI_DIG2_REGISTER_OFFSET,
+       NI_DIG3_REGISTER_OFFSET,
+       NI_DIG4_REGISTER_OFFSET,
+       NI_DIG5_REGISTER_OFFSET
+};
+
+static const unsigned ni_tx_offsets[] =
+{
+       NI_DCIO_UNIPHY0_UNIPHY_TX_CONTROL1,
+       NI_DCIO_UNIPHY1_UNIPHY_TX_CONTROL1,
+       NI_DCIO_UNIPHY2_UNIPHY_TX_CONTROL1,
+       NI_DCIO_UNIPHY3_UNIPHY_TX_CONTROL1,
+       NI_DCIO_UNIPHY4_UNIPHY_TX_CONTROL1,
+       NI_DCIO_UNIPHY5_UNIPHY_TX_CONTROL1
+};
+
+static const unsigned evergreen_dp_offsets[] =
+{
+       EVERGREEN_DP0_REGISTER_OFFSET,
+       EVERGREEN_DP1_REGISTER_OFFSET,
+       EVERGREEN_DP2_REGISTER_OFFSET,
+       EVERGREEN_DP3_REGISTER_OFFSET,
+       EVERGREEN_DP4_REGISTER_OFFSET,
+       EVERGREEN_DP5_REGISTER_OFFSET
+};
+
+/*
+ * Assumption: EVERGREEN_CRTC_MASTER_EN is enabled for the requested crtc.
+ * We go from crtc to connector, which is not reliable since the mapping
+ * really runs in the opposite direction. If the crtc is enabled, find the
+ * dig_fe which selects this crtc and ensure that it is enabled. If such a
+ * dig_fe is found, find the dig_be which selects that dig_fe and ensure
+ * that it is enabled and in DP_SST mode.
+ * If UNIPHY_PLL_CONTROL1 is enabled, we should disconnect the timing
+ * from the dp symbol clocks.
+ */
+static bool evergreen_is_dp_sst_stream_enabled(struct radeon_device *rdev,
+                                              unsigned crtc_id, unsigned *ret_dig_fe)
+{
+       unsigned i;
+       unsigned dig_fe;
+       unsigned dig_be;
+       unsigned dig_en_be;
+       unsigned uniphy_pll;
+       unsigned digs_fe_selected;
+       unsigned dig_be_mode;
+       unsigned dig_fe_mask;
+       bool is_enabled = false;
+       bool found_crtc = false;
+
+       /* loop through all running dig_fe to find selected crtc */
+       for (i = 0; i < ARRAY_SIZE(ni_dig_offsets); i++) {
+               dig_fe = RREG32(NI_DIG_FE_CNTL + ni_dig_offsets[i]);
+               if (dig_fe & NI_DIG_FE_CNTL_SYMCLK_FE_ON &&
+                   crtc_id == NI_DIG_FE_CNTL_SOURCE_SELECT(dig_fe)) {
+                       /* found running pipe */
+                       found_crtc = true;
+                       dig_fe_mask = 1 << i;
+                       dig_fe = i;
+                       break;
+               }
+       }
+
+       if (found_crtc) {
+               /* loop through all running dig_be to find selected dig_fe */
+               for (i = 0; i < ARRAY_SIZE(ni_dig_offsets); i++) {
+                       dig_be = RREG32(NI_DIG_BE_CNTL + ni_dig_offsets[i]);
+                       /* is this dig_fe selected by dig_be? */
+                       digs_fe_selected = NI_DIG_BE_CNTL_FE_SOURCE_SELECT(dig_be);
+                       dig_be_mode = NI_DIG_FE_CNTL_MODE(dig_be);
+                       if (dig_fe_mask & digs_fe_selected &&
+                           /* is dig_be in SST mode? */
+                           dig_be_mode == NI_DIG_BE_DPSST) {
+                               dig_en_be = RREG32(NI_DIG_BE_EN_CNTL +
+                                                  ni_dig_offsets[i]);
+                               uniphy_pll = RREG32(NI_DCIO_UNIPHY0_PLL_CONTROL1 +
+                                                   ni_tx_offsets[i]);
+                               /* dig_be enabled and tx is running */
+                               if (dig_en_be & NI_DIG_BE_EN_CNTL_ENABLE &&
+                                   dig_en_be & NI_DIG_BE_EN_CNTL_SYMBCLK_ON &&
+                                   uniphy_pll & NI_DCIO_UNIPHY0_PLL_CONTROL1_ENABLE) {
+                                       is_enabled = true;
+                                       *ret_dig_fe = dig_fe;
+                                       break;
+                               }
+                       }
+               }
+       }
+
+       return is_enabled;
+}
+
+/*
+ * Blank the dig when in dp sst mode;
+ * the dig ignores crtc timing.
+ */
+static void evergreen_blank_dp_output(struct radeon_device *rdev,
+                                     unsigned dig_fe)
+{
+       unsigned stream_ctrl;
+       unsigned fifo_ctrl;
+       unsigned counter = 0;
+
+       if (dig_fe >= ARRAY_SIZE(evergreen_dp_offsets)) {
+               DRM_ERROR("invalid dig_fe %u\n", dig_fe);
+               return;
+       }
+
+       stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL +
+                            evergreen_dp_offsets[dig_fe]);
+       if (!(stream_ctrl & EVERGREEN_DP_VID_STREAM_CNTL_ENABLE)) {
+               DRM_ERROR("dig %u should be enabled\n", dig_fe);
+               return;
+       }
+
+       stream_ctrl &= ~EVERGREEN_DP_VID_STREAM_CNTL_ENABLE;
+       WREG32(EVERGREEN_DP_VID_STREAM_CNTL +
+              evergreen_dp_offsets[dig_fe], stream_ctrl);
+
+       stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL +
+                            evergreen_dp_offsets[dig_fe]);
+       while (counter < 32 && stream_ctrl & EVERGREEN_DP_VID_STREAM_STATUS) {
+               msleep(1);
+               counter++;
+               stream_ctrl = RREG32(EVERGREEN_DP_VID_STREAM_CNTL +
+                                    evergreen_dp_offsets[dig_fe]);
+       }
+       if (counter >= 32)
+               DRM_ERROR("timeout waiting for DP stream disable, counter %u\n", counter);
+
+       fifo_ctrl = RREG32(EVERGREEN_DP_STEER_FIFO + evergreen_dp_offsets[dig_fe]);
+       fifo_ctrl |= EVERGREEN_DP_STEER_FIFO_RESET;
+       WREG32(EVERGREEN_DP_STEER_FIFO + evergreen_dp_offsets[dig_fe], fifo_ctrl);
+}
+
 void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save *save)
 {
        u32 crtc_enabled, tmp, frame_count, blackout;
        int i, j;
+       unsigned dig_fe;
 
        if (!ASIC_IS_NODCE(rdev)) {
                save->vga_render_control = RREG32(VGA_RENDER_CONTROL);
@@ -2651,7 +2793,17 @@ void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save *sav
                                        break;
                                udelay(1);
                        }
-
+                       /*
+                        * We should disable the dig if it drives dp sst,
+                        * but we are in radeon_device_init and the topology
+                        * is unknown; it only becomes available after
+                        * radeon_modeset_init. radeon_atom_encoder_dpms_dig
+                        * would do the job if we initialized it properly,
+                        * so for now we do it manually here.
+                        */
+                       if (ASIC_IS_DCE5(rdev) &&
+                           evergreen_is_dp_sst_stream_enabled(rdev, i, &dig_fe))
+                               evergreen_blank_dp_output(rdev, dig_fe);
+                       /* we could remove the 6 lines below */
                        /* XXX this is a hack to avoid strange behavior with EFI on certain systems */
                        WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1);
                        tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]);
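
evergreen_blank_dp_output() above uses a bounded register poll: read the status bit, msleep(1), give up after 32 iterations. The same pattern in isolation, with stub functions standing in for RREG32() and msleep():

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* stand-ins for the driver's register read and 1 ms sleep */
static int reads;
static uint32_t rreg32(uint32_t reg) { (void)reg; return reads++ < 3 ? 1u << 16 : 0; }
static void msleep(unsigned int ms) { (void)ms; }

static bool poll_clear(uint32_t reg, uint32_t mask, unsigned int max_ms)
{
        for (unsigned int i = 0; i < max_ms; i++) {
                if (!(rreg32(reg) & mask))
                        return true;            /* bit cleared in time */
                msleep(1);
        }
        return false;                           /* give up after max_ms */
}

int main(void)
{
        /* mirrors the EVERGREEN_DP_VID_STREAM_STATUS wait above */
        printf("stream stopped: %d\n", poll_clear(0x730C, 1u << 16, 32));
        return 0;
}
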
index aa939df..b436bad 100644 (file)
 
 /* HDMI blocks at 0x7030, 0x7c30, 0x10830, 0x11430, 0x12030, 0x12c30 */
 #define EVERGREEN_HDMI_BASE                            0x7030
+/* DIG block */
+#define NI_DIG0_REGISTER_OFFSET                 (0x7000  - 0x7000)
+#define NI_DIG1_REGISTER_OFFSET                 (0x7C00  - 0x7000)
+#define NI_DIG2_REGISTER_OFFSET                 (0x10800 - 0x7000)
+#define NI_DIG3_REGISTER_OFFSET                 (0x11400 - 0x7000)
+#define NI_DIG4_REGISTER_OFFSET                 (0x12000 - 0x7000)
+#define NI_DIG5_REGISTER_OFFSET                 (0x12C00 - 0x7000)
+
+#define NI_DIG_FE_CNTL                               0x7000
+#       define NI_DIG_FE_CNTL_SOURCE_SELECT(x)        ((x) & 0x3)
+#       define NI_DIG_FE_CNTL_SYMCLK_FE_ON            (1 << 24)
+
+#define NI_DIG_BE_CNTL                    0x7140
+#       define NI_DIG_BE_CNTL_FE_SOURCE_SELECT(x)     (((x) >> 8) & 0x3F)
+#       define NI_DIG_FE_CNTL_MODE(x)                 (((x) >> 16) & 0x7)
+
+#define NI_DIG_BE_EN_CNTL                              0x7144
+#       define NI_DIG_BE_EN_CNTL_ENABLE               (1 << 0)
+#       define NI_DIG_BE_EN_CNTL_SYMBCLK_ON           (1 << 8)
+#       define NI_DIG_BE_DPSST                        0
 
 /* Display Port block */
+#define EVERGREEN_DP0_REGISTER_OFFSET                 (0x730C  - 0x730C)
+#define EVERGREEN_DP1_REGISTER_OFFSET                 (0x7F0C  - 0x730C)
+#define EVERGREEN_DP2_REGISTER_OFFSET                 (0x10B0C - 0x730C)
+#define EVERGREEN_DP3_REGISTER_OFFSET                 (0x1170C - 0x730C)
+#define EVERGREEN_DP4_REGISTER_OFFSET                 (0x1230C - 0x730C)
+#define EVERGREEN_DP5_REGISTER_OFFSET                 (0x12F0C - 0x730C)
+
+#define EVERGREEN_DP_VID_STREAM_CNTL                    0x730C
+#       define EVERGREEN_DP_VID_STREAM_CNTL_ENABLE     (1 << 0)
+#       define EVERGREEN_DP_VID_STREAM_STATUS          (1 << 16)
+#define EVERGREEN_DP_STEER_FIFO                         0x7310
+#       define EVERGREEN_DP_STEER_FIFO_RESET           (1 << 0)
 #define EVERGREEN_DP_SEC_CNTL                           0x7280
 #       define EVERGREEN_DP_SEC_STREAM_ENABLE           (1 << 0)
 #       define EVERGREEN_DP_SEC_ASP_ENABLE              (1 << 4)
 #       define EVERGREEN_DP_SEC_N_BASE_MULTIPLE(x)      (((x) & 0xf) << 24)
 #       define EVERGREEN_DP_SEC_SS_EN                   (1 << 28)
 
+/* DCIO_UNIPHY block */
+#define NI_DCIO_UNIPHY0_UNIPHY_TX_CONTROL1            (0x6600 - 0x6600)
+#define NI_DCIO_UNIPHY1_UNIPHY_TX_CONTROL1            (0x6640 - 0x6600)
+#define NI_DCIO_UNIPHY2_UNIPHY_TX_CONTROL1            (0x6680 - 0x6600)
+#define NI_DCIO_UNIPHY3_UNIPHY_TX_CONTROL1            (0x66C0 - 0x6600)
+#define NI_DCIO_UNIPHY4_UNIPHY_TX_CONTROL1            (0x6700 - 0x6600)
+#define NI_DCIO_UNIPHY5_UNIPHY_TX_CONTROL1            (0x6740 - 0x6600)
+
+#define NI_DCIO_UNIPHY0_PLL_CONTROL1                   0x6618
+#       define NI_DCIO_UNIPHY0_PLL_CONTROL1_ENABLE     (1 << 0)
+
 #endif
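
All the new offset macros follow one convention: each register is defined at its instance-0 address and each per-instance offset is that instance's base minus the block base, so RREG32(REG + offsets[i]) addresses instance i. A small check using the DIG bases from this header:

#include <stdint.h>
#include <stdio.h>

/* instance bases; the header's offsets are (base_i - base_0) */
static const uint32_t dig_base[] = { 0x7000, 0x7C00, 0x10800,
                                     0x11400, 0x12000, 0x12C00 };

#define NI_DIG_FE_CNTL 0x7000   /* register address within instance 0 */

int main(void)
{
        for (unsigned int i = 0; i < sizeof(dig_base) / sizeof(dig_base[0]); i++)
                printf("DIG%u FE_CNTL at 0x%05X\n", i,
                       (unsigned)(NI_DIG_FE_CNTL + (dig_base[i] - dig_base[0])));
        return 0;
}
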
index fd8c4d3..95f4fea 100644 (file)
@@ -62,10 +62,6 @@ bool radeon_has_atpx(void) {
        return radeon_atpx_priv.atpx_detected;
 }
 
-bool radeon_has_atpx_dgpu_power_cntl(void) {
-       return radeon_atpx_priv.atpx.functions.power_cntl;
-}
-
 /**
  * radeon_atpx_call - call an ATPX method
  *
@@ -145,6 +141,13 @@ static void radeon_atpx_parse_functions(struct radeon_atpx_functions *f, u32 mas
  */
 static int radeon_atpx_validate(struct radeon_atpx *atpx)
 {
+       /* make sure required functions are enabled */
+       /* dGPU power control is required */
+       if (atpx->functions.power_cntl == false) {
+               printk("ATPX dGPU power cntl not present, forcing\n");
+               atpx->functions.power_cntl = true;
+       }
+
        if (atpx->functions.px_params) {
                union acpi_object *info;
                struct atpx_px_params output;
index cfcc099..81a63d7 100644 (file)
@@ -2002,10 +2002,12 @@ radeon_add_atom_connector(struct drm_device *dev,
                                                   rdev->mode_info.dither_property,
                                                   RADEON_FMT_DITHER_DISABLE);
 
-                       if (radeon_audio != 0)
+                       if (radeon_audio != 0) {
                                drm_object_attach_property(&radeon_connector->base.base,
                                                           rdev->mode_info.audio_property,
                                                           RADEON_AUDIO_AUTO);
+                               radeon_connector->audio = RADEON_AUDIO_AUTO;
+                       }
                        if (ASIC_IS_DCE5(rdev))
                                drm_object_attach_property(&radeon_connector->base.base,
                                                           rdev->mode_info.output_csc_property,
@@ -2130,6 +2132,7 @@ radeon_add_atom_connector(struct drm_device *dev,
                                drm_object_attach_property(&radeon_connector->base.base,
                                                           rdev->mode_info.audio_property,
                                                           RADEON_AUDIO_AUTO);
+                               radeon_connector->audio = RADEON_AUDIO_AUTO;
                        }
                        if (connector_type == DRM_MODE_CONNECTOR_DVII) {
                                radeon_connector->dac_load_detect = true;
@@ -2185,6 +2188,7 @@ radeon_add_atom_connector(struct drm_device *dev,
                                drm_object_attach_property(&radeon_connector->base.base,
                                                           rdev->mode_info.audio_property,
                                                           RADEON_AUDIO_AUTO);
+                               radeon_connector->audio = RADEON_AUDIO_AUTO;
                        }
                        if (ASIC_IS_DCE5(rdev))
                                drm_object_attach_property(&radeon_connector->base.base,
@@ -2237,6 +2241,7 @@ radeon_add_atom_connector(struct drm_device *dev,
                                drm_object_attach_property(&radeon_connector->base.base,
                                                           rdev->mode_info.audio_property,
                                                           RADEON_AUDIO_AUTO);
+                               radeon_connector->audio = RADEON_AUDIO_AUTO;
                        }
                        if (ASIC_IS_DCE5(rdev))
                                drm_object_attach_property(&radeon_connector->base.base,
index 4fd1a96..d0826fb 100644 (file)
@@ -103,12 +103,6 @@ static const char radeon_family_name[][16] = {
        "LAST",
 };
 
-#if defined(CONFIG_VGA_SWITCHEROO)
-bool radeon_has_atpx_dgpu_power_cntl(void);
-#else
-static inline bool radeon_has_atpx_dgpu_power_cntl(void) { return false; }
-#endif
-
 #define RADEON_PX_QUIRK_DISABLE_PX  (1 << 0)
 #define RADEON_PX_QUIRK_LONG_WAKEUP (1 << 1)
 
@@ -1305,9 +1299,9 @@ int radeon_device_init(struct radeon_device *rdev,
        }
        rdev->fence_context = fence_context_alloc(RADEON_NUM_RINGS);
 
-       DRM_INFO("initializing kernel modesetting (%s 0x%04X:0x%04X 0x%04X:0x%04X).\n",
-               radeon_family_name[rdev->family], pdev->vendor, pdev->device,
-               pdev->subsystem_vendor, pdev->subsystem_device);
+       DRM_INFO("initializing kernel modesetting (%s 0x%04X:0x%04X 0x%04X:0x%04X 0x%02X).\n",
+                radeon_family_name[rdev->family], pdev->vendor, pdev->device,
+                pdev->subsystem_vendor, pdev->subsystem_device, pdev->revision);
 
        /* mutex initialization are all done here so we
         * can recall function without having locking issues */
@@ -1439,7 +1433,7 @@ int radeon_device_init(struct radeon_device *rdev,
         * ignore it */
        vga_client_register(rdev->pdev, rdev, NULL, radeon_vga_set_decode);
 
-       if ((rdev->flags & RADEON_IS_PX) && radeon_has_atpx_dgpu_power_cntl())
+       if (rdev->flags & RADEON_IS_PX)
                runtime = true;
        vga_switcheroo_register_client(rdev->pdev, &radeon_switcheroo_ops, runtime);
        if (runtime)
index 7dddfdc..90f7394 100644 (file)
@@ -235,6 +235,8 @@ static int radeon_verify_access(struct ttm_buffer_object *bo, struct file *filp)
 {
        struct radeon_bo *rbo = container_of(bo, struct radeon_bo, tbo);
 
+       if (radeon_ttm_tt_has_userptr(bo->ttm))
+               return -EPERM;
        return drm_vma_node_verify_access(&rbo->gem_base.vma_node, filp);
 }
 
index af4df81..e6abc09 100644 (file)
@@ -2931,6 +2931,7 @@ static struct si_dpm_quirk si_dpm_quirk_list[] = {
        { PCI_VENDOR_ID_ATI, 0x6811, 0x1462, 0x2015, 0, 120000 },
        { PCI_VENDOR_ID_ATI, 0x6811, 0x1043, 0x2015, 0, 120000 },
        { PCI_VENDOR_ID_ATI, 0x6811, 0x148c, 0x2015, 0, 120000 },
+       { PCI_VENDOR_ID_ATI, 0x6810, 0x1682, 0x9275, 0, 120000 },
        { 0, 0, 0, 0 },
 };
 
index 4cbf265..e3daafa 100644 (file)
@@ -230,22 +230,13 @@ EXPORT_SYMBOL(ttm_bo_del_sub_from_lru);
 
 void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo)
 {
-       struct ttm_bo_device *bdev = bo->bdev;
-       struct ttm_mem_type_manager *man;
+       int put_count = 0;
 
        lockdep_assert_held(&bo->resv->lock.base);
 
-       if (bo->mem.placement & TTM_PL_FLAG_NO_EVICT) {
-               list_del_init(&bo->swap);
-               list_del_init(&bo->lru);
-
-       } else {
-               if (bo->ttm && !(bo->ttm->page_flags & TTM_PAGE_FLAG_SG))
-                       list_move_tail(&bo->swap, &bo->glob->swap_lru);
-
-               man = &bdev->man[bo->mem.mem_type];
-               list_move_tail(&bo->lru, &man->lru);
-       }
+       put_count = ttm_bo_del_from_lru(bo);
+       ttm_bo_list_ref_sub(bo, put_count, true);
+       ttm_bo_add_to_lru(bo);
 }
 EXPORT_SYMBOL(ttm_bo_move_to_lru_tail);
 
index 4854dac..5fd1fd0 100644 (file)
@@ -267,11 +267,23 @@ static int virtio_gpu_crtc_atomic_check(struct drm_crtc *crtc,
        return 0;
 }
 
+static void virtio_gpu_crtc_atomic_flush(struct drm_crtc *crtc,
+                                        struct drm_crtc_state *old_state)
+{
+       unsigned long flags;
+
+       spin_lock_irqsave(&crtc->dev->event_lock, flags);
+       if (crtc->state->event)
+               drm_crtc_send_vblank_event(crtc, crtc->state->event);
+       spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
+}
+
 static const struct drm_crtc_helper_funcs virtio_gpu_crtc_helper_funcs = {
        .enable        = virtio_gpu_crtc_enable,
        .disable       = virtio_gpu_crtc_disable,
        .mode_set_nofb = virtio_gpu_crtc_mode_set_nofb,
        .atomic_check  = virtio_gpu_crtc_atomic_check,
+       .atomic_flush  = virtio_gpu_crtc_atomic_flush,
 };
 
 static void virtio_gpu_enc_mode_set(struct drm_encoder *encoder,
index 723ba16..1a1a87c 100644 (file)
@@ -3293,19 +3293,19 @@ static const struct vmw_cmd_entry vmw_cmd_entries[SVGA_3D_CMD_MAX] = {
                    &vmw_cmd_dx_cid_check, true, false, true),
        VMW_CMD_DEF(SVGA_3D_CMD_DX_DEFINE_QUERY, &vmw_cmd_dx_define_query,
                    true, false, true),
-       VMW_CMD_DEF(SVGA_3D_CMD_DX_DESTROY_QUERY, &vmw_cmd_ok,
+       VMW_CMD_DEF(SVGA_3D_CMD_DX_DESTROY_QUERY, &vmw_cmd_dx_cid_check,
                    true, false, true),
        VMW_CMD_DEF(SVGA_3D_CMD_DX_BIND_QUERY, &vmw_cmd_dx_bind_query,
                    true, false, true),
        VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_QUERY_OFFSET,
-                   &vmw_cmd_ok, true, false, true),
-       VMW_CMD_DEF(SVGA_3D_CMD_DX_BEGIN_QUERY, &vmw_cmd_ok,
+                   &vmw_cmd_dx_cid_check, true, false, true),
+       VMW_CMD_DEF(SVGA_3D_CMD_DX_BEGIN_QUERY, &vmw_cmd_dx_cid_check,
                    true, false, true),
-       VMW_CMD_DEF(SVGA_3D_CMD_DX_END_QUERY, &vmw_cmd_ok,
+       VMW_CMD_DEF(SVGA_3D_CMD_DX_END_QUERY, &vmw_cmd_dx_cid_check,
                    true, false, true),
        VMW_CMD_DEF(SVGA_3D_CMD_DX_READBACK_QUERY, &vmw_cmd_invalid,
                    true, false, true),
-       VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_PREDICATION, &vmw_cmd_invalid,
+       VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_PREDICATION, &vmw_cmd_dx_cid_check,
                    true, false, true),
        VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_VIEWPORTS, &vmw_cmd_dx_cid_check,
                    true, false, true),
index 3b1faf7..679a4cb 100644 (file)
@@ -573,9 +573,9 @@ static int vmw_fb_set_par(struct fb_info *info)
                mode = old_mode;
                old_mode = NULL;
        } else if (!vmw_kms_validate_mode_vram(vmw_priv,
-                                              mode->hdisplay *
-                                              (var->bits_per_pixel + 7) / 8,
-                                              mode->vdisplay)) {
+                                       mode->hdisplay *
+                                       DIV_ROUND_UP(var->bits_per_pixel, 8),
+                                       mode->vdisplay)) {
                drm_mode_destroy(vmw_priv->dev, mode);
                return -EINVAL;
        }
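
The vmw_fb change swaps open-coded bits-to-bytes rounding for DIV_ROUND_UP; the two expressions are arithmetically identical for every bpp, as a quick check confirms:

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
        for (int bpp = 1; bpp <= 32; bpp += 7)
                printf("bpp=%2d -> %d bytes (open-coded: %d)\n",
                       bpp, DIV_ROUND_UP(bpp, 8), (bpp + 7) / 8);
        return 0;
}
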
index c6eaff5..0238f01 100644 (file)
 #define USB_DEVICE_ID_CORSAIR_K90      0x1b02
 
 #define USB_VENDOR_ID_CREATIVELABS     0x041e
+#define USB_DEVICE_ID_CREATIVE_SB_OMNI_SURROUND_51     0x322c
 #define USB_DEVICE_ID_PRODIKEYS_PCMIDI 0x2801
 
 #define USB_VENDOR_ID_CVTOUCH          0x1ff7
index ed2f68e..53fc856 100644 (file)
@@ -71,6 +71,7 @@ static const struct hid_blacklist {
        { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_3AXIS_5BUTTON_STICK, HID_QUIRK_NOGET },
        { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_AXIS_295, HID_QUIRK_NOGET },
        { USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_PIXART_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL },
+       { USB_VENDOR_ID_CREATIVELABS, USB_DEVICE_ID_CREATIVE_SB_OMNI_SURROUND_51, HID_QUIRK_NOGET },
        { USB_VENDOR_ID_DMI, USB_DEVICE_ID_DMI_ENC, HID_QUIRK_NOGET },
        { USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_WIIU, HID_QUIRK_MULTI_INPUT },
        { USB_VENDOR_ID_ELAN, HID_ANY_ID, HID_QUIRK_ALWAYS_POLL },
index 02c4efe..cf2ba43 100644 (file)
@@ -684,6 +684,7 @@ static int wacom_intuos_inout(struct wacom_wac *wacom)
 
                wacom->tool[idx] = wacom_intuos_get_tool_type(wacom->id[idx]);
 
+               wacom->shared->stylus_in_proximity = true;
                return 1;
        }
 
@@ -3395,6 +3396,10 @@ static const struct wacom_features wacom_features_0x33E =
        { "Wacom Intuos PT M 2", 21600, 13500, 2047, 63,
          INTUOSHT2, WACOM_INTUOS_RES, WACOM_INTUOS_RES, .touch_max = 16,
          .check_for_hid_type = true, .hid_type = HID_TYPE_USBNONE };
+static const struct wacom_features wacom_features_0x343 =
+       { "Wacom DTK1651", 34616, 19559, 1023, 0,
+         DTUS, WACOM_INTUOS_RES, WACOM_INTUOS_RES, 4,
+         WACOM_DTU_OFFSET, WACOM_DTU_OFFSET };
 
 static const struct wacom_features wacom_features_HID_ANY_ID =
        { "Wacom HID", .type = HID_GENERIC };
@@ -3560,6 +3565,7 @@ const struct hid_device_id wacom_ids[] = {
        { USB_DEVICE_WACOM(0x33C) },
        { USB_DEVICE_WACOM(0x33D) },
        { USB_DEVICE_WACOM(0x33E) },
+       { USB_DEVICE_WACOM(0x343) },
        { USB_DEVICE_WACOM(0x4001) },
        { USB_DEVICE_WACOM(0x4004) },
        { USB_DEVICE_WACOM(0x5000) },
index faa8e68..0967e1a 100644 (file)
@@ -975,10 +975,10 @@ config I2C_XLR
 
 config I2C_XLP9XX
        tristate "XLP9XX I2C support"
-       depends on CPU_XLP || COMPILE_TEST
+       depends on CPU_XLP || ARCH_VULCAN || COMPILE_TEST
        help
          This driver enables support for the on-chip I2C interface of
-         the Broadcom XLP9xx/XLP5xx MIPS processors.
+         the Broadcom XLP9xx/XLP5xx MIPS and Vulcan ARM64 processors.
 
          This driver can also be built as a module.  If so, the module will
          be called i2c-xlp9xx.
index 714bdc8..b167ab2 100644 (file)
@@ -116,8 +116,8 @@ struct cpm_i2c {
        cbd_t __iomem *rbase;
        u_char *txbuf[CPM_MAXBD];
        u_char *rxbuf[CPM_MAXBD];
-       u32 txdma[CPM_MAXBD];
-       u32 rxdma[CPM_MAXBD];
+       dma_addr_t txdma[CPM_MAXBD];
+       dma_addr_t rxdma[CPM_MAXBD];
 };
 
 static irqreturn_t cpm_i2c_interrupt(int irq, void *dev_id)
index b29c750..f54ece8 100644 (file)
@@ -671,7 +671,9 @@ static int exynos5_i2c_xfer(struct i2c_adapter *adap,
                return -EIO;
        }
 
-       clk_prepare_enable(i2c->clk);
+       ret = clk_enable(i2c->clk);
+       if (ret)
+               return ret;
 
        for (i = 0; i < num; i++, msgs++) {
                stop = (i == num - 1);
@@ -695,7 +697,7 @@ static int exynos5_i2c_xfer(struct i2c_adapter *adap,
        }
 
  out:
-       clk_disable_unprepare(i2c->clk);
+       clk_disable(i2c->clk);
        return ret;
 }
 
@@ -747,7 +749,9 @@ static int exynos5_i2c_probe(struct platform_device *pdev)
                return -ENOENT;
        }
 
-       clk_prepare_enable(i2c->clk);
+       ret = clk_prepare_enable(i2c->clk);
+       if (ret)
+               return ret;
 
        mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        i2c->regs = devm_ioremap_resource(&pdev->dev, mem);
@@ -799,6 +803,10 @@ static int exynos5_i2c_probe(struct platform_device *pdev)
 
        platform_set_drvdata(pdev, i2c);
 
+       clk_disable(i2c->clk);
+
+       return 0;
+
  err_clk:
        clk_disable_unprepare(i2c->clk);
        return ret;
@@ -810,6 +818,8 @@ static int exynos5_i2c_remove(struct platform_device *pdev)
 
        i2c_del_adapter(&i2c->adap);
 
+       clk_unprepare(i2c->clk);
+
        return 0;
 }
 
@@ -821,6 +831,8 @@ static int exynos5_i2c_suspend_noirq(struct device *dev)
 
        i2c->suspended = 1;
 
+       clk_unprepare(i2c->clk);
+
        return 0;
 }
 
@@ -830,7 +842,9 @@ static int exynos5_i2c_resume_noirq(struct device *dev)
        struct exynos5_i2c *i2c = platform_get_drvdata(pdev);
        int ret = 0;
 
-       clk_prepare_enable(i2c->clk);
+       ret = clk_prepare_enable(i2c->clk);
+       if (ret)
+               return ret;
 
        ret = exynos5_hsi2c_clock_setup(i2c);
        if (ret) {
@@ -839,7 +853,7 @@ static int exynos5_i2c_resume_noirq(struct device *dev)
        }
 
        exynos5_i2c_init(i2c);
-       clk_disable_unprepare(i2c->clk);
+       clk_disable(i2c->clk);
        i2c->suspended = 0;
 
        return 0;
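
The exynos5 hunks split clk_prepare_enable() into a one-time clk_prepare() at probe (which may sleep) and cheap clk_enable()/clk_disable() pairs around each transfer, with clk_unprepare() at remove and suspend. A userspace model of that lifecycle; the stub clk ops only imitate the linux/clk.h contract:

#include <stdio.h>

struct clk { int prepared, enabled; };

static int clk_prepare(struct clk *c)    { c->prepared = 1; return 0; }
static void clk_unprepare(struct clk *c) { c->prepared = 0; }
static int clk_enable(struct clk *c)
{
        if (!c->prepared)
                return -1;      /* must prepare first */
        c->enabled = 1;
        return 0;
}
static void clk_disable(struct clk *c)   { c->enabled = 0; }

int main(void)
{
        struct clk i2c_clk = { 0, 0 };

        clk_prepare(&i2c_clk);              /* probe: once, may sleep */
        for (int xfer = 0; xfer < 3; xfer++) {
                if (clk_enable(&i2c_clk))   /* per transfer: cheap */
                        return 1;
                /* ... issue the i2c transfer ... */
                clk_disable(&i2c_clk);
        }
        clk_unprepare(&i2c_clk);            /* remove/suspend */
        printf("prepared=%d enabled=%d\n", i2c_clk.prepared, i2c_clk.enabled);
        return 0;
}
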
index 7ba795b..1c87077 100644 (file)
@@ -75,6 +75,7 @@
 /* PCI DIDs for the Intel SMBus Message Transport (SMT) Devices */
 #define PCI_DEVICE_ID_INTEL_S1200_SMT0 0x0c59
 #define PCI_DEVICE_ID_INTEL_S1200_SMT1 0x0c5a
+#define PCI_DEVICE_ID_INTEL_DNV_SMT    0x19ac
 #define PCI_DEVICE_ID_INTEL_AVOTON_SMT 0x1f15
 
 #define ISMT_DESC_ENTRIES      2       /* number of descriptor entries */
@@ -180,6 +181,7 @@ struct ismt_priv {
 static const struct pci_device_id ismt_ids[] = {
        { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_S1200_SMT0) },
        { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_S1200_SMT1) },
+       { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_DNV_SMT) },
        { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_AVOTON_SMT) },
        { 0, }
 };
index 9096d17..3dcc5f3 100644 (file)
@@ -855,6 +855,7 @@ static struct rk3x_i2c_soc_data soc_data[3] = {
 static const struct of_device_id rk3x_i2c_match[] = {
        { .compatible = "rockchip,rk3066-i2c", .data = (void *)&soc_data[0] },
        { .compatible = "rockchip,rk3188-i2c", .data = (void *)&soc_data[1] },
+       { .compatible = "rockchip,rk3228-i2c", .data = (void *)&soc_data[2] },
        { .compatible = "rockchip,rk3288-i2c", .data = (void *)&soc_data[2] },
        {},
 };
index cb00d59..c2e257d 100644 (file)
@@ -691,7 +691,8 @@ void ib_cache_gid_set_default_gid(struct ib_device *ib_dev, u8 port,
                              NULL);
 
                /* Couldn't find default GID location */
-               WARN_ON(ix < 0);
+               if (WARN_ON(ix < 0))
+                       goto release;
 
                zattr_type.gid_type = gid_type;
 
index 4a9aa04..7713ef0 100644 (file)
@@ -48,6 +48,7 @@
 
 #include <asm/uaccess.h>
 
+#include <rdma/ib.h>
 #include <rdma/ib_cm.h>
 #include <rdma/ib_user_cm.h>
 #include <rdma/ib_marshall.h>
@@ -1103,6 +1104,9 @@ static ssize_t ib_ucm_write(struct file *filp, const char __user *buf,
        struct ib_ucm_cmd_hdr hdr;
        ssize_t result;
 
+       if (WARN_ON_ONCE(!ib_safe_file_access(filp)))
+               return -EACCES;
+
        if (len < sizeof(hdr))
                return -EINVAL;
 
index dd3bcce..c0f3826 100644 (file)
@@ -1574,6 +1574,9 @@ static ssize_t ucma_write(struct file *filp, const char __user *buf,
        struct rdma_ucm_cmd_hdr hdr;
        ssize_t ret;
 
+       if (WARN_ON_ONCE(!ib_safe_file_access(filp)))
+               return -EACCES;
+
        if (len < sizeof(hdr))
                return -EINVAL;
 
index 28ba2cc..31f422a 100644 (file)
@@ -48,6 +48,8 @@
 
 #include <asm/uaccess.h>
 
+#include <rdma/ib.h>
+
 #include "uverbs.h"
 
 MODULE_AUTHOR("Roland Dreier");
@@ -709,6 +711,9 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
        int srcu_key;
        ssize_t ret;
 
+       if (WARN_ON_ONCE(!ib_safe_file_access(filp)))
+               return -EACCES;
+
        if (count < sizeof hdr)
                return -EINVAL;
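
The ucm, ucma and uverbs hunks (and the qib one further down) add the same guard: reject the command-style write() when the caller's credentials differ from the ones the file was opened with, closing the hole where a setuid program is tricked into write()ing to one of these fds. A userspace model of the check; the struct layout is illustrative, not the rdma/ib.h definition:

#include <stdbool.h>
#include <stdio.h>

struct cred { int uid; };
struct file { const struct cred *opener_cred; };

/* the write() handler refuses to run for a caller whose credentials
 * differ from the credentials the file was opened with */
static bool safe_file_access(const struct file *f, const struct cred *caller)
{
        return f->opener_cred == caller;
}

int main(void)
{
        struct cred user = { 1000 }, root = { 0 };
        struct file f = { &user };

        printf("same caller:   %s\n", safe_file_access(&f, &user) ? "allowed" : "-EACCES");
        printf("setuid caller: %s\n", safe_file_access(&f, &root) ? "allowed" : "-EACCES");
        return 0;
}
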
 
index 15b8adb..b65b354 100644 (file)
@@ -1860,6 +1860,7 @@ EXPORT_SYMBOL(ib_drain_rq);
 void ib_drain_qp(struct ib_qp *qp)
 {
        ib_drain_sq(qp);
-       ib_drain_rq(qp);
+       if (!qp->srq)
+               ib_drain_rq(qp);
 }
 EXPORT_SYMBOL(ib_drain_qp);
index 42a7b89..3234a8b 100644 (file)
@@ -1390,6 +1390,8 @@ int iwch_register_device(struct iwch_dev *dev)
        dev->ibdev.iwcm->add_ref = iwch_qp_add_ref;
        dev->ibdev.iwcm->rem_ref = iwch_qp_rem_ref;
        dev->ibdev.iwcm->get_qp = iwch_get_qp;
+       memcpy(dev->ibdev.iwcm->ifname, dev->rdev.t3cdev_p->lldev->name,
+              sizeof(dev->ibdev.iwcm->ifname));
 
        ret = ib_register_device(&dev->ibdev, NULL);
        if (ret)
index b4eeb78..b0b9557 100644 (file)
@@ -162,7 +162,7 @@ static int create_cq(struct c4iw_rdev *rdev, struct t4_cq *cq,
        cq->bar2_va = c4iw_bar2_addrs(rdev, cq->cqid, T4_BAR2_QTYPE_INGRESS,
                                      &cq->bar2_qid,
                                      user ? &cq->bar2_pa : NULL);
-       if (user && !cq->bar2_va) {
+       if (user && !cq->bar2_pa) {
                pr_warn(MOD "%s: cqid %u not in BAR2 range.\n",
                        pci_name(rdev->lldi.pdev), cq->cqid);
                ret = -EINVAL;
index 124682d..7574f39 100644 (file)
@@ -580,6 +580,8 @@ int c4iw_register_device(struct c4iw_dev *dev)
        dev->ibdev.iwcm->add_ref = c4iw_qp_add_ref;
        dev->ibdev.iwcm->rem_ref = c4iw_qp_rem_ref;
        dev->ibdev.iwcm->get_qp = c4iw_get_qp;
+       memcpy(dev->ibdev.iwcm->ifname, dev->rdev.lldi.ports[0]->name,
+              sizeof(dev->ibdev.iwcm->ifname));
 
        ret = ib_register_device(&dev->ibdev, NULL);
        if (ret)
index e17fb5d..e8993e4 100644 (file)
@@ -185,6 +185,10 @@ void __iomem *c4iw_bar2_addrs(struct c4iw_rdev *rdev, unsigned int qid,
 
        if (pbar2_pa)
                *pbar2_pa = (rdev->bar2_pa + bar2_qoffset) & PAGE_MASK;
+
+       if (is_t4(rdev->lldi.adapter_type))
+               return NULL;
+
        return rdev->bar2_kva + bar2_qoffset;
 }
 
@@ -270,7 +274,7 @@ static int create_qp(struct c4iw_rdev *rdev, struct t4_wq *wq,
        /*
         * User mode must have bar2 access.
         */
-       if (user && (!wq->sq.bar2_va || !wq->rq.bar2_va)) {
+       if (user && (!wq->sq.bar2_pa || !wq->rq.bar2_pa)) {
                pr_warn(MOD "%s: sqid %u or rqid %u not in BAR2 range.\n",
                        pci_name(rdev->lldi.pdev), wq->sq.qid, wq->rq.qid);
                goto free_dma;
@@ -1895,13 +1899,27 @@ int c4iw_ib_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 void c4iw_drain_sq(struct ib_qp *ibqp)
 {
        struct c4iw_qp *qp = to_c4iw_qp(ibqp);
+       unsigned long flag;
+       bool need_to_wait;
 
-       wait_for_completion(&qp->sq_drained);
+       spin_lock_irqsave(&qp->lock, flag);
+       need_to_wait = !t4_sq_empty(&qp->wq);
+       spin_unlock_irqrestore(&qp->lock, flag);
+
+       if (need_to_wait)
+               wait_for_completion(&qp->sq_drained);
 }
 
 void c4iw_drain_rq(struct ib_qp *ibqp)
 {
        struct c4iw_qp *qp = to_c4iw_qp(ibqp);
+       unsigned long flag;
+       bool need_to_wait;
+
+       spin_lock_irqsave(&qp->lock, flag);
+       need_to_wait = !t4_rq_empty(&qp->wq);
+       spin_unlock_irqrestore(&qp->lock, flag);
 
-       wait_for_completion(&qp->rq_drained);
+       if (need_to_wait)
+               wait_for_completion(&qp->rq_drained);
 }
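
c4iw_drain_sq/rq now sample the queue under the lock and only block on the completion when work is actually outstanding; an empty queue would otherwise wait forever for a completion that never fires. The shape of that pattern:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct queue {
        pthread_mutex_t lock;
        int outstanding;
};

static void drain(struct queue *q)
{
        bool need_to_wait;

        pthread_mutex_lock(&q->lock);
        need_to_wait = q->outstanding != 0;   /* t4_sq_empty() analogue */
        pthread_mutex_unlock(&q->lock);

        if (need_to_wait) {
                /* the driver does wait_for_completion(&qp->sq_drained)
                 * here; nothing is outstanding in this demo */
        }
}

int main(void)
{
        struct queue q = { PTHREAD_MUTEX_INITIALIZER, 0 };

        drain(&q);                 /* empty: returns immediately */
        printf("drained without blocking\n");
        return 0;
}
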
index 90e5af2..e41fae2 100644 (file)
@@ -1863,7 +1863,7 @@ static enum i40iw_status_code i40iw_virtchnl_send(struct i40iw_sc_dev *dev,
 }
 
 /* client interface functions */
-static struct i40e_client_ops i40e_ops = {
+static const struct i40e_client_ops i40e_ops = {
        .open = i40iw_open,
        .close = i40iw_close,
        .l2_param_change = i40iw_l2param_change,
index fd97534..81b0e1f 100644 (file)
@@ -419,7 +419,8 @@ static int set_rq_size(struct mlx4_ib_dev *dev, struct ib_qp_cap *cap,
 }
 
 static int set_kernel_sq_size(struct mlx4_ib_dev *dev, struct ib_qp_cap *cap,
-                             enum mlx4_ib_qp_type type, struct mlx4_ib_qp *qp)
+                             enum mlx4_ib_qp_type type, struct mlx4_ib_qp *qp,
+                             bool shrink_wqe)
 {
        int s;
 
@@ -477,7 +478,7 @@ static int set_kernel_sq_size(struct mlx4_ib_dev *dev, struct ib_qp_cap *cap,
         * We set WQE size to at least 64 bytes, this way stamping
         * invalidates each WQE.
         */
-       if (dev->dev->caps.fw_ver >= MLX4_FW_VER_WQE_CTRL_NEC &&
+       if (shrink_wqe && dev->dev->caps.fw_ver >= MLX4_FW_VER_WQE_CTRL_NEC &&
            qp->sq_signal_bits && BITS_PER_LONG == 64 &&
            type != MLX4_IB_QPT_SMI && type != MLX4_IB_QPT_GSI &&
            !(type & (MLX4_IB_QPT_PROXY_SMI_OWNER | MLX4_IB_QPT_PROXY_SMI |
@@ -642,6 +643,7 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd,
 {
        int qpn;
        int err;
+       struct ib_qp_cap backup_cap;
        struct mlx4_ib_sqp *sqp;
        struct mlx4_ib_qp *qp;
        enum mlx4_ib_qp_type qp_type = (enum mlx4_ib_qp_type) init_attr->qp_type;
@@ -775,7 +777,9 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd,
                                goto err;
                }
 
-               err = set_kernel_sq_size(dev, &init_attr->cap, qp_type, qp);
+               memcpy(&backup_cap, &init_attr->cap, sizeof(backup_cap));
+               err = set_kernel_sq_size(dev, &init_attr->cap,
+                                        qp_type, qp, true);
                if (err)
                        goto err;
 
@@ -787,9 +791,20 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd,
                        *qp->db.db = 0;
                }
 
-               if (mlx4_buf_alloc(dev->dev, qp->buf_size, PAGE_SIZE * 2, &qp->buf, gfp)) {
-                       err = -ENOMEM;
-                       goto err_db;
+               if (mlx4_buf_alloc(dev->dev, qp->buf_size, qp->buf_size,
+                                  &qp->buf, gfp)) {
+                       memcpy(&init_attr->cap, &backup_cap,
+                              sizeof(backup_cap));
+                       err = set_kernel_sq_size(dev, &init_attr->cap, qp_type,
+                                                qp, false);
+                       if (err)
+                               goto err_db;
+
+                       if (mlx4_buf_alloc(dev->dev, qp->buf_size,
+                                          PAGE_SIZE * 2, &qp->buf, gfp)) {
+                               err = -ENOMEM;
+                               goto err_db;
+                       }
                }
 
                err = mlx4_mtt_init(dev->dev, qp->buf.npages, qp->buf.page_shift,
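
The allocation above becomes a two-step affair: try mlx4_buf_alloc() with the stricter layout first, and on failure recompute the queue sizes with WQE shrinking disabled and retry with the relaxed two-page threshold. A sketch of that retry shape; try_alloc() is a stand-in, not the mlx4 API:

#include <stdio.h>
#include <stdlib.h>

/* stand-in for mlx4_buf_alloc(); force_fail simulates the first failure */
static void *try_alloc(size_t size, size_t max_direct, int force_fail)
{
        (void)max_direct;
        return force_fail ? NULL : malloc(size);
}

static void *alloc_qp_buf(size_t buf_size, int fail_first)
{
        void *buf = try_alloc(buf_size, buf_size, fail_first);

        if (!buf) {
                /* recompute sizes without WQE shrinking, then retry with
                 * the smaller two-page direct-buffer threshold */
                buf = try_alloc(buf_size, 2 * 4096, 0);
        }
        return buf;
}

int main(void)
{
        void *buf = alloc_qp_buf(65536, 1);

        printf("fallback %s\n", buf ? "succeeded" : "failed");
        free(buf);
        return 0;
}
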
index 5acf346..4cb81f6 100644 (file)
@@ -530,7 +530,7 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
                     sizeof(struct mlx5_wqe_ctrl_seg)) /
                     sizeof(struct mlx5_wqe_data_seg);
        props->max_sge = min(max_rq_sg, max_sq_sg);
-       props->max_sge_rd = props->max_sge;
+       props->max_sge_rd          = MLX5_MAX_SGE_RD;
        props->max_cq              = 1 << MLX5_CAP_GEN(mdev, log_max_cq);
        props->max_cqe = (1 << MLX5_CAP_GEN(mdev, log_max_cq_sz)) - 1;
        props->max_mr              = 1 << MLX5_CAP_GEN(mdev, log_max_mkey);
@@ -671,8 +671,8 @@ static int mlx5_query_hca_port(struct ib_device *ibdev, u8 port,
        struct mlx5_ib_dev *dev = to_mdev(ibdev);
        struct mlx5_core_dev *mdev = dev->mdev;
        struct mlx5_hca_vport_context *rep;
-       int max_mtu;
-       int oper_mtu;
+       u16 max_mtu;
+       u16 oper_mtu;
        int err;
        u8 ib_link_width_oper;
        u8 vl_hw_cap;
@@ -1438,7 +1438,8 @@ static struct mlx5_ib_flow_prio *get_flow_table(struct mlx5_ib_dev *dev,
        if (!ft) {
                ft = mlx5_create_auto_grouped_flow_table(ns, priority,
                                                         num_entries,
-                                                        num_groups);
+                                                        num_groups,
+                                                        0);
 
                if (!IS_ERR(ft)) {
                        prio->refcount = 0;
index 3ea9e05..2b27d13 100644 (file)
@@ -356,7 +356,7 @@ static int nes_netdev_stop(struct net_device *netdev)
 /**
  * nes_nic_send
  */
-static int nes_nic_send(struct sk_buff *skb, struct net_device *netdev)
+static bool nes_nic_send(struct sk_buff *skb, struct net_device *netdev)
 {
        struct nes_vnic *nesvnic = netdev_priv(netdev);
        struct nes_device *nesdev = nesvnic->nesdev;
@@ -413,7 +413,7 @@ static int nes_nic_send(struct sk_buff *skb, struct net_device *netdev)
                                        netdev->name, skb_shinfo(skb)->nr_frags + 2, skb_headlen(skb));
                        kfree_skb(skb);
                        nesvnic->tx_sw_dropped++;
-                       return NETDEV_TX_LOCKED;
+                       return false;
                }
                set_bit(nesnic->sq_head, nesnic->first_frag_overflow);
                bus_address = pci_map_single(nesdev->pcidev, skb->data + NES_FIRST_FRAG_SIZE,
@@ -454,8 +454,7 @@ static int nes_nic_send(struct sk_buff *skb, struct net_device *netdev)
        set_wqe_32bit_value(nic_sqe->wqe_words, NES_NIC_SQ_WQE_MISC_IDX, wqe_misc);
        nesnic->sq_head++;
        nesnic->sq_head &= nesnic->sq_size - 1;
-
-       return NETDEV_TX_OK;
+       return true;
 }
 
 
@@ -479,7 +478,6 @@ static int nes_netdev_start_xmit(struct sk_buff *skb, struct net_device *netdev)
        u32 tso_wqe_length;
        u32 curr_tcp_seq;
        u32 wqe_count=1;
-       u32 send_rc;
        struct iphdr *iph;
        __le16 *wqe_fragment_length;
        u32 nr_frags;
@@ -500,9 +498,6 @@ static int nes_netdev_start_xmit(struct sk_buff *skb, struct net_device *netdev)
         *              skb_shinfo(skb)->nr_frags, skb_is_gso(skb));
         */
 
-       if (!netif_carrier_ok(netdev))
-               return NETDEV_TX_OK;
-
        if (netif_queue_stopped(netdev))
                return NETDEV_TX_BUSY;
 
@@ -673,13 +668,11 @@ tso_sq_no_longer_full:
                        skb_linearize(skb);
                        skb_set_transport_header(skb, hoffset);
                        skb_set_network_header(skb, nhoffset);
-                       send_rc = nes_nic_send(skb, netdev);
-                       if (send_rc != NETDEV_TX_OK)
+                       if (!nes_nic_send(skb, netdev))
                                return NETDEV_TX_OK;
                }
        } else {
-               send_rc = nes_nic_send(skb, netdev);
-               if (send_rc != NETDEV_TX_OK)
+               if (!nes_nic_send(skb, netdev))
                        return NETDEV_TX_OK;
        }
 
@@ -689,7 +682,7 @@ tso_sq_no_longer_full:
                nes_write32(nesdev->regs+NES_WQE_ALLOC,
                                (wqe_count << 24) | (1 << 23) | nesvnic->nic.qp_id);
 
-       netdev->trans_start = jiffies;
+       netif_trans_update(netdev);
 
        return NETDEV_TX_OK;
 }
index e449e39..24f4a78 100644 (file)
@@ -45,6 +45,8 @@
 #include <linux/export.h>
 #include <linux/uio.h>
 
+#include <rdma/ib.h>
+
 #include "qib.h"
 #include "qib_common.h"
 #include "qib_user_sdma.h"
@@ -2067,6 +2069,9 @@ static ssize_t qib_write(struct file *fp, const char __user *data,
        ssize_t ret = 0;
        void *dest;
 
+       if (WARN_ON_ONCE(!ib_safe_file_access(fp)))
+               return -EACCES;
+
        if (count < sizeof(cmd.type)) {
                ret = -EINVAL;
                goto bail;
index bd82a69..a9e3bcc 100644 (file)
@@ -1637,9 +1637,9 @@ bail:
        spin_unlock_irqrestore(&qp->s_hlock, flags);
        if (nreq) {
                if (call_send)
-                       rdi->driver_f.schedule_send_no_lock(qp);
-               else
                        rdi->driver_f.do_send(qp);
+               else
+                       rdi->driver_f.schedule_send_no_lock(qp);
        }
        return err;
 }
index c8ed535..b2f4283 100644 (file)
@@ -766,7 +766,7 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_
                ipoib_dma_unmap_tx(priv, tx_req);
                dev_kfree_skb_any(skb);
        } else {
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
                ++tx->tx_head;
 
                if (++priv->tx_outstanding == ipoib_sendq_size) {
index f0e55e4..3643d55 100644 (file)
@@ -637,7 +637,7 @@ void ipoib_send(struct net_device *dev, struct sk_buff *skb,
                if (netif_queue_stopped(dev))
                        netif_wake_queue(dev);
        } else {
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
 
                address->last_send = priv->tx_head;
                ++priv->tx_head;
index 80807d6..b940ef1 100644 (file)
@@ -1036,7 +1036,7 @@ static void ipoib_timeout(struct net_device *dev)
        struct ipoib_dev_priv *priv = netdev_priv(dev);
 
        ipoib_warn(priv, "transmit timeout: latency %d msecs\n",
-                  jiffies_to_msecs(jiffies - dev->trans_start));
+                  jiffies_to_msecs(jiffies - dev_trans_start(dev)));
        ipoib_warn(priv, "queue stopped %d, tx_head %u, tx_tail %u\n",
                   netif_queue_stopped(dev),
                   priv->tx_head, priv->tx_tail);
index e8a84d1..1142a93 100644 (file)
@@ -153,6 +153,7 @@ static const struct xpad_device {
        { 0x0738, 0x4728, "Mad Catz Street Fighter IV FightPad", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
        { 0x0738, 0x4738, "Mad Catz Wired Xbox 360 Controller (SFIV)", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
        { 0x0738, 0x4740, "Mad Catz Beat Pad", 0, XTYPE_XBOX360 },
+       { 0x0738, 0x4a01, "Mad Catz FightStick TE 2", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
        { 0x0738, 0x6040, "Mad Catz Beat Pad Pro", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX },
        { 0x0738, 0xb726, "Mad Catz Xbox controller - MW2", 0, XTYPE_XBOX360 },
        { 0x0738, 0xbeef, "Mad Catz JOYTECH NEO SE Advanced GamePad", XTYPE_XBOX360 },
@@ -304,6 +305,7 @@ static struct usb_device_id xpad_table[] = {
        XPAD_XBOX360_VENDOR(0x046d),            /* Logitech X-Box 360 style controllers */
        XPAD_XBOX360_VENDOR(0x0738),            /* Mad Catz X-Box 360 controllers */
        { USB_DEVICE(0x0738, 0x4540) },         /* Mad Catz Beat Pad */
+       XPAD_XBOXONE_VENDOR(0x0738),            /* Mad Catz FightStick TE 2 */
        XPAD_XBOX360_VENDOR(0x0e6f),            /* 0x0e6f X-Box 360 controllers */
        XPAD_XBOX360_VENDOR(0x12ab),            /* X-Box 360 dance pads */
        XPAD_XBOX360_VENDOR(0x1430),            /* RedOctane X-Box 360 controllers */
index d5994a7..9829363 100644 (file)
@@ -178,7 +178,6 @@ static int arizona_haptics_probe(struct platform_device *pdev)
        input_set_drvdata(haptics->input_dev, haptics);
 
        haptics->input_dev->name = "arizona:haptics";
-       haptics->input_dev->dev.parent = pdev->dev.parent;
        haptics->input_dev->close = arizona_haptics_close;
        __set_bit(FF_RUMBLE, haptics->input_dev->ffbit);
 
index 3f02e0e..67aab86 100644 (file)
@@ -353,7 +353,8 @@ static int pmic8xxx_pwrkey_probe(struct platform_device *pdev)
        if (of_property_read_u32(pdev->dev.of_node, "debounce", &kpd_delay))
                kpd_delay = 15625;
 
-       if (kpd_delay > 62500 || kpd_delay == 0) {
+       /* Valid range of pwr key trigger delay is 1/64 sec to 2 seconds. */
+       if (kpd_delay > USEC_PER_SEC * 2 || kpd_delay < USEC_PER_SEC / 64) {
                dev_err(&pdev->dev, "invalid power key trigger delay\n");
                return -EINVAL;
        }
@@ -385,8 +386,8 @@ static int pmic8xxx_pwrkey_probe(struct platform_device *pdev)
        pwr->name = "pmic8xxx_pwrkey";
        pwr->phys = "pmic8xxx_pwrkey/input0";
 
-       delay = (kpd_delay << 10) / USEC_PER_SEC;
-       delay = 1 + ilog2(delay);
+       delay = (kpd_delay << 6) / USEC_PER_SEC;
+       delay = ilog2(delay);
 
        err = regmap_read(regmap, PON_CNTL_1, &pon_cntl);
        if (err < 0) {
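
The arithmetic is easier to see with numbers: the PON register encodes the trigger delay as 2^n / 64 seconds, the driver computes n as ilog2(delay_us * 64 / USEC_PER_SEC), and the new 1/64 s to 2 s bounds are exactly the encodable range n = 0..7. A worked check:

#include <stdint.h>
#include <stdio.h>

#define USEC_PER_SEC 1000000u

static unsigned int ilog2(unsigned int v)
{
        unsigned int r = 0;

        while (v >>= 1)
                r++;
        return r;
}

int main(void)
{
        /* the PON register encodes the delay as 2^n / 64 seconds, n = 0..7 */
        uint32_t samples[] = { 15625, 31250, 500000, 2 * USEC_PER_SEC };

        for (unsigned int i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
                uint32_t us = samples[i];
                unsigned int reg = ilog2((us << 6) / USEC_PER_SEC);

                printf("%7u us -> reg %u (%u/64 s)\n",
                       (unsigned)us, reg, 1u << reg);
        }
        return 0;
}
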
index 10c4e3d..caa5a62 100644 (file)
@@ -222,7 +222,6 @@ static int twl4030_vibra_probe(struct platform_device *pdev)
 
        info->input_dev->name = "twl4030:vibrator";
        info->input_dev->id.version = 1;
-       info->input_dev->dev.parent = pdev->dev.parent;
        info->input_dev->close = twl4030_vibra_close;
        __set_bit(FF_RUMBLE, info->input_dev->ffbit);
 
index ea63fad..53e33fa 100644 (file)
@@ -45,7 +45,6 @@
 struct vibra_info {
        struct device *dev;
        struct input_dev *input_dev;
-       struct workqueue_struct *workqueue;
        struct work_struct play_work;
        struct mutex mutex;
        int irq;
@@ -213,11 +212,7 @@ static int vibra_play(struct input_dev *input, void *data,
        info->strong_speed = effect->u.rumble.strong_magnitude;
        info->direction = effect->direction < EFFECT_DIR_180_DEG ? 1 : -1;
 
-       ret = queue_work(info->workqueue, &info->play_work);
-       if (!ret) {
-               dev_info(&input->dev, "work is already on queue\n");
-               return ret;
-       }
+       schedule_work(&info->play_work);
 
        return 0;
 }
@@ -362,7 +357,6 @@ static int twl6040_vibra_probe(struct platform_device *pdev)
 
        info->input_dev->name = "twl6040:vibrator";
        info->input_dev->id.version = 1;
-       info->input_dev->dev.parent = pdev->dev.parent;
        info->input_dev->close = twl6040_vibra_close;
        __set_bit(FF_RUMBLE, info->input_dev->ffbit);
 
index 3a7f3a4..7c18249 100644 (file)
@@ -858,6 +858,14 @@ static int gtco_probe(struct usb_interface *usbinterface,
                goto err_free_buf;
        }
 
+       /* Sanity check that a device has an endpoint */
+       if (usbinterface->altsetting[0].desc.bNumEndpoints < 1) {
+               dev_err(&usbinterface->dev,
+                       "Invalid number of endpoints\n");
+               error = -EINVAL;
+               goto err_free_urb;
+       }
+
        /*
         * The endpoint is always altsetting 0, we know this since we know
         * this device only has one interrupt endpoint
@@ -879,7 +887,7 @@ static int gtco_probe(struct usb_interface *usbinterface,
         * HID report descriptor
         */
        if (usb_get_extra_descriptor(usbinterface->cur_altsetting,
-                                    HID_DEVICE_TYPE, &hid_desc) != 0){
+                                    HID_DEVICE_TYPE, &hid_desc) != 0) {
                dev_err(&usbinterface->dev,
                        "Can't retrieve extra USB descriptor to get hid report descriptor length\n");
                error = -EIO;
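
The gtco fix is standard USB probe hardening: validate bNumEndpoints before touching endpoint[0], because the device controls its own descriptors. Reduced to a skeleton:

#include <stdio.h>

struct altsetting { int bNumEndpoints; };

static int probe(const struct altsetting *alt)
{
        if (alt->bNumEndpoints < 1)
                return -22;     /* -EINVAL: malicious or buggy device */
        /* safe to use alt->endpoint[0] from here on */
        return 0;
}

int main(void)
{
        struct altsetting bad = { 0 }, good = { 1 };

        printf("bad=%d good=%d\n", probe(&bad), probe(&good));
        return 0;
}
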
index 374c129..5efadad 100644 (file)
@@ -92,6 +92,7 @@ struct iommu_dev_data {
        struct list_head dev_data_list;   /* For global dev_data_list */
        struct protection_domain *domain; /* Domain the device is bound to */
        u16 devid;                        /* PCI Device ID */
+       u16 alias;                        /* Alias Device ID */
        bool iommu_v2;                    /* Device can make use of IOMMUv2 */
        bool passthrough;                 /* Device is identity mapped */
        struct {
@@ -166,6 +167,13 @@ static struct protection_domain *to_pdomain(struct iommu_domain *dom)
        return container_of(dom, struct protection_domain, domain);
 }
 
+static inline u16 get_device_id(struct device *dev)
+{
+       struct pci_dev *pdev = to_pci_dev(dev);
+
+       return PCI_DEVID(pdev->bus->number, pdev->devfn);
+}
+
 static struct iommu_dev_data *alloc_dev_data(u16 devid)
 {
        struct iommu_dev_data *dev_data;
@@ -203,6 +211,68 @@ out_unlock:
        return dev_data;
 }
 
+static int __last_alias(struct pci_dev *pdev, u16 alias, void *data)
+{
+       *(u16 *)data = alias;
+       return 0;
+}
+
+static u16 get_alias(struct device *dev)
+{
+       struct pci_dev *pdev = to_pci_dev(dev);
+       u16 devid, ivrs_alias, pci_alias;
+
+       devid = get_device_id(dev);
+       ivrs_alias = amd_iommu_alias_table[devid];
+       pci_for_each_dma_alias(pdev, __last_alias, &pci_alias);
+
+       if (ivrs_alias == pci_alias)
+               return ivrs_alias;
+
+       /*
+        * DMA alias showdown
+        *
+        * The IVRS is fairly reliable in telling us about aliases, but it
+        * can't know about every screwy device.  If we don't have an IVRS
+        * reported alias, use the PCI reported alias.  In that case we may
+        * still need to initialize the rlookup and dev_table entries if the
+        * alias is to a non-existent device.
+        */
+       if (ivrs_alias == devid) {
+               if (!amd_iommu_rlookup_table[pci_alias]) {
+                       amd_iommu_rlookup_table[pci_alias] =
+                               amd_iommu_rlookup_table[devid];
+                       memcpy(amd_iommu_dev_table[pci_alias].data,
+                              amd_iommu_dev_table[devid].data,
+                              sizeof(amd_iommu_dev_table[pci_alias].data));
+               }
+
+               return pci_alias;
+       }
+
+       pr_info("AMD-Vi: Using IVRS reported alias %02x:%02x.%d "
+               "for device %s[%04x:%04x], kernel reported alias "
+               "%02x:%02x.%d\n", PCI_BUS_NUM(ivrs_alias), PCI_SLOT(ivrs_alias),
+               PCI_FUNC(ivrs_alias), dev_name(dev), pdev->vendor, pdev->device,
+               PCI_BUS_NUM(pci_alias), PCI_SLOT(pci_alias),
+               PCI_FUNC(pci_alias));
+
+       /*
+        * If we don't have a PCI DMA alias and the IVRS alias is on the same
+        * bus, then the IVRS table may know about a quirk that we don't.
+        */
+       if (pci_alias == devid &&
+           PCI_BUS_NUM(ivrs_alias) == pdev->bus->number) {
+               pdev->dev_flags |= PCI_DEV_FLAGS_DMA_ALIAS_DEVFN;
+               pdev->dma_alias_devfn = ivrs_alias & 0xff;
+               pr_info("AMD-Vi: Added PCI DMA alias %02x.%d for %s\n",
+                       PCI_SLOT(ivrs_alias), PCI_FUNC(ivrs_alias),
+                       dev_name(dev));
+       }
+
+       return ivrs_alias;
+}
+
 static struct iommu_dev_data *find_dev_data(u16 devid)
 {
        struct iommu_dev_data *dev_data;
@@ -215,13 +285,6 @@ static struct iommu_dev_data *find_dev_data(u16 devid)
        return dev_data;
 }
 
-static inline u16 get_device_id(struct device *dev)
-{
-       struct pci_dev *pdev = to_pci_dev(dev);
-
-       return PCI_DEVID(pdev->bus->number, pdev->devfn);
-}
-
 static struct iommu_dev_data *get_dev_data(struct device *dev)
 {
        return dev->archdata.iommu;
@@ -349,6 +412,8 @@ static int iommu_init_device(struct device *dev)
        if (!dev_data)
                return -ENOMEM;
 
+       dev_data->alias = get_alias(dev);
+
        if (pci_iommuv2_capable(pdev)) {
                struct amd_iommu *iommu;
 
@@ -369,7 +434,7 @@ static void iommu_ignore_device(struct device *dev)
        u16 devid, alias;
 
        devid = get_device_id(dev);
-       alias = amd_iommu_alias_table[devid];
+       alias = get_alias(dev);
 
        memset(&amd_iommu_dev_table[devid], 0, sizeof(struct dev_table_entry));
        memset(&amd_iommu_dev_table[alias], 0, sizeof(struct dev_table_entry));
@@ -1061,7 +1126,7 @@ static int device_flush_dte(struct iommu_dev_data *dev_data)
        int ret;
 
        iommu = amd_iommu_rlookup_table[dev_data->devid];
-       alias = amd_iommu_alias_table[dev_data->devid];
+       alias = dev_data->alias;
 
        ret = iommu_flush_dte(iommu, dev_data->devid);
        if (!ret && alias != dev_data->devid)
@@ -2039,7 +2104,7 @@ static void do_attach(struct iommu_dev_data *dev_data,
        bool ats;
 
        iommu = amd_iommu_rlookup_table[dev_data->devid];
-       alias = amd_iommu_alias_table[dev_data->devid];
+       alias = dev_data->alias;
        ats   = dev_data->ats.enabled;
 
        /* Update data structures */
@@ -2073,7 +2138,7 @@ static void do_detach(struct iommu_dev_data *dev_data)
                return;
 
        iommu = amd_iommu_rlookup_table[dev_data->devid];
-       alias = amd_iommu_alias_table[dev_data->devid];
+       alias = dev_data->alias;
 
        /* decrease reference counters */
        dev_data->domain->dev_iommu[iommu->index] -= 1;
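
The rest of this hunk converts the hot paths (device_flush_dte(), do_attach(), do_detach()) from re-reading amd_iommu_alias_table[] to the alias cached in dev_data at init time, which also lets the cached value carry a PCI-reported alias the IVRS missed. For reference, pci_for_each_dma_alias() invokes the callback for every requester ID the device may use, starting (to my understanding) with the device's own ID, so the __last_alias() trick above always leaves pci_alias holding the final alias in the walk:

    /* Sketch of the callback contract used by get_alias() above. */
    static int __last_alias(struct pci_dev *pdev, u16 alias, void *data)
    {
            *(u16 *)data = alias;   /* remember the most recent alias */
            return 0;               /* nonzero would stop the walk early */
    }

    u16 pci_alias;                  /* set on the first callback invocation */
    pci_for_each_dma_alias(pdev, __last_alias, &pci_alias);
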
index 2409e3b..7c39ac4 100644 (file)
@@ -826,6 +826,12 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
        if (smmu_domain->smmu)
                goto out_unlock;
 
+       /* We're bypassing these SIDs, so don't allocate an actual context */
+       if (domain->type == IOMMU_DOMAIN_DMA) {
+               smmu_domain->smmu = smmu;
+               goto out_unlock;
+       }
+
        /*
         * Mapping the requested stage onto what we support is surprisingly
         * complicated, mainly because the spec allows S1+S2 SMMUs without
@@ -948,7 +954,7 @@ static void arm_smmu_destroy_domain_context(struct iommu_domain *domain)
        void __iomem *cb_base;
        int irq;
 
-       if (!smmu)
+       if (!smmu || domain->type == IOMMU_DOMAIN_DMA)
                return;
 
        /*
@@ -1089,18 +1095,20 @@ static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
        struct arm_smmu_device *smmu = smmu_domain->smmu;
        void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
 
-       /* Devices in an IOMMU group may already be configured */
-       ret = arm_smmu_master_configure_smrs(smmu, cfg);
-       if (ret)
-               return ret == -EEXIST ? 0 : ret;
-
        /*
         * FIXME: This won't be needed once we have IOMMU-backed DMA ops
-        * for all devices behind the SMMU.
+        * for all devices behind the SMMU. Note that we need to take
+        * care configuring SMRs for devices that are both a
+        * platform_device and a PCI device (i.e. a PCI host controller).
         */
        if (smmu_domain->domain.type == IOMMU_DOMAIN_DMA)
                return 0;
 
+       /* Devices in an IOMMU group may already be configured */
+       ret = arm_smmu_master_configure_smrs(smmu, cfg);
+       if (ret)
+               return ret == -EEXIST ? 0 : ret;
+
        for (i = 0; i < cfg->num_streamids; ++i) {
                u32 idx, s2cr;
 
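
The reorder in this last hunk is the point: the IOMMU_DOMAIN_DMA early return has to happen before arm_smmu_master_configure_smrs(), otherwise SMRs get claimed for a domain that (per the first hunk) never allocates a real context and skips teardown. Condensed, the resulting control flow is:

    /* Sketch of arm_smmu_domain_add_master() after the reorder. */
    if (smmu_domain->domain.type == IOMMU_DOMAIN_DMA)
            return 0;       /* bypassed SIDs: no SMRs, no context */

    ret = arm_smmu_master_configure_smrs(smmu, cfg);
    if (ret)
            return ret == -EEXIST ? 0 : ret;  /* group already configured */
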
index 94a30da..4dffccf 100644 (file)
@@ -467,7 +467,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
        gic_map_to_vpe(irq, mips_cm_vp_id(cpumask_first(&tmp)));
 
        /* Update the pcpu_masks */
-       for (i = 0; i < gic_vpes; i++)
+       for (i = 0; i < min(gic_vpes, NR_CPUS); i++)
                clear_bit(irq, pcpu_masks[i].pcpu_mask);
        set_bit(irq, pcpu_masks[cpumask_first(&tmp)].pcpu_mask);
 
@@ -707,7 +707,7 @@ static int gic_shared_irq_domain_map(struct irq_domain *d, unsigned int virq,
        spin_lock_irqsave(&gic_lock, flags);
        gic_map_to_pin(intr, gic_cpu_pin);
        gic_map_to_vpe(intr, vpe);
-       for (i = 0; i < gic_vpes; i++)
+       for (i = 0; i < min(gic_vpes, NR_CPUS); i++)
                clear_bit(intr, pcpu_masks[i].pcpu_mask);
        set_bit(intr, pcpu_masks[vpe].pcpu_mask);
        spin_unlock_irqrestore(&gic_lock, flags);
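
Both loops index pcpu_masks[], which (if I read the driver right) is a static array sized NR_CPUS, while gic_vpes is reported by the hardware and can be larger; clamping the bound prevents an out-of-bounds write. The shape of the fix:

    /* pcpu_masks[] is sized by the kernel config, not by the hardware. */
    static struct gic_pcpu_mask pcpu_masks[NR_CPUS];

    for (i = 0; i < min(gic_vpes, NR_CPUS); i++)    /* clamp to array size */
            clear_bit(intr, pcpu_masks[i].pcpu_mask);
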
index d7c2866..1a1d997 100644 (file)
@@ -1147,8 +1147,6 @@ static byte test_c_ind_mask_bit(PLCI *plci, word b)
 
 static void dump_c_ind_mask(PLCI *plci)
 {
-       static char hex_digit_table[0x10] =
-               {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'};
        word i, j, k;
        dword d;
        char *p;
@@ -1165,7 +1163,7 @@ static void dump_c_ind_mask(PLCI *plci)
                                d = plci->c_ind_mask_table[i + j];
                                for (k = 0; k < 8; k++)
                                {
-                                       *(--p) = hex_digit_table[d & 0xf];
+                                       *(--p) = hex_asc_lo(d);
                                        d >>= 4;
                                }
                        }
@@ -10507,7 +10505,6 @@ static void mixer_set_bchannel_id(PLCI *plci, byte *chi)
 
 static void mixer_calculate_coefs(DIVA_CAPI_ADAPTER *a)
 {
-       static char hex_digit_table[0x10] = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'};
        word n, i, j;
        char *p;
        char hex_line[2 * MIXER_MAX_DUMP_CHANNELS + MIXER_MAX_DUMP_CHANNELS / 8 + 4];
@@ -10690,13 +10687,13 @@ static void mixer_calculate_coefs(DIVA_CAPI_ADAPTER *a)
        n = li_total_channels;
        if (n > MIXER_MAX_DUMP_CHANNELS)
                n = MIXER_MAX_DUMP_CHANNELS;
+
        p = hex_line;
        for (j = 0; j < n; j++)
        {
                if ((j & 0x7) == 0)
                        *(p++) = ' ';
-               *(p++) = hex_digit_table[li_config_table[j].curchnl >> 4];
-               *(p++) = hex_digit_table[li_config_table[j].curchnl & 0xf];
+               p = hex_byte_pack(p, li_config_table[j].curchnl);
        }
        *p = '\0';
        dbug(1, dprintf("[%06lx] CURRENT %s",
@@ -10706,8 +10703,7 @@ static void mixer_calculate_coefs(DIVA_CAPI_ADAPTER *a)
        {
                if ((j & 0x7) == 0)
                        *(p++) = ' ';
-               *(p++) = hex_digit_table[li_config_table[j].channel >> 4];
-               *(p++) = hex_digit_table[li_config_table[j].channel & 0xf];
+               p = hex_byte_pack(p, li_config_table[j].channel);
        }
        *p = '\0';
        dbug(1, dprintf("[%06lx] CHANNEL %s",
@@ -10717,8 +10713,7 @@ static void mixer_calculate_coefs(DIVA_CAPI_ADAPTER *a)
        {
                if ((j & 0x7) == 0)
                        *(p++) = ' ';
-               *(p++) = hex_digit_table[li_config_table[j].chflags >> 4];
-               *(p++) = hex_digit_table[li_config_table[j].chflags & 0xf];
+               p = hex_byte_pack(p, li_config_table[j].chflags);
        }
        *p = '\0';
        dbug(1, dprintf("[%06lx] CHFLAG  %s",
@@ -10730,8 +10725,7 @@ static void mixer_calculate_coefs(DIVA_CAPI_ADAPTER *a)
                {
                        if ((j & 0x7) == 0)
                                *(p++) = ' ';
-                       *(p++) = hex_digit_table[li_config_table[i].flag_table[j] >> 4];
-                       *(p++) = hex_digit_table[li_config_table[i].flag_table[j] & 0xf];
+                       p = hex_byte_pack(p, li_config_table[i].flag_table[j]);
                }
                *p = '\0';
                dbug(1, dprintf("[%06lx] FLAG[%02x]%s",
@@ -10744,8 +10738,7 @@ static void mixer_calculate_coefs(DIVA_CAPI_ADAPTER *a)
                {
                        if ((j & 0x7) == 0)
                                *(p++) = ' ';
-                       *(p++) = hex_digit_table[li_config_table[i].coef_table[j] >> 4];
-                       *(p++) = hex_digit_table[li_config_table[i].coef_table[j] & 0xf];
+                       p = hex_byte_pack(p, li_config_table[i].coef_table[j]);
                }
                *p = '\0';
                dbug(1, dprintf("[%06lx] COEF[%02x]%s",
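
All of these hunks replace the file-local hex_digit_table[] copies with the kernel's stock hexdump helpers. Roughly (a sketch of their behavior, not the exact <linux/kernel.h> source), they look like:

    extern const char hex_asc[];                /* "0123456789abcdef" */
    #define hex_asc_lo(x)   hex_asc[(x) & 0x0f]         /* low nibble  */
    #define hex_asc_hi(x)   hex_asc[((x) & 0xf0) >> 4]  /* high nibble */

    static inline char *hex_byte_pack(char *buf, u8 byte)
    {
            *buf++ = hex_asc_hi(byte);      /* high digit first */
            *buf++ = hex_asc_lo(byte);
            return buf;                     /* past the two digits written */
    }

This is also why the dump_c_ind_mask() loop can keep shifting d right by four and call hex_asc_lo() on each pass: only the low nibble is consumed per iteration.
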
index a0efb4c..5609dee 100644 (file)
@@ -127,7 +127,7 @@ net_send_packet(struct sk_buff *skb, struct net_device *dev)
        if (lp->in_idx >= MAX_SKB_BUFFERS)
                lp->in_idx = 0; /* wrap around */
        lp->sk_count++;         /* adjust counter */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        /* If we just used up the very last entry in the
         * TX ring on this device, tell the queueing
index aa5dd56..c151c6d 100644 (file)
@@ -1153,7 +1153,7 @@ static void isdn_net_tx_timeout(struct net_device *ndev)
                 * ever called   --KG
                 */
        }
-       ndev->trans_start = jiffies;
+       netif_trans_update(ndev);
        netif_wake_queue(ndev);
 }
 
@@ -1291,7 +1291,7 @@ isdn_net_start_xmit(struct sk_buff *skb, struct net_device *ndev)
                        }
                } else {
                        /* Device is connected to an ISDN channel */
-                       ndev->trans_start = jiffies;
+                       netif_trans_update(ndev);
                        if (!lp->dialstate) {
                                /* ISDN connection is established, try sending */
                                int ret;
index e2d4e58..0c5d8de 100644 (file)
@@ -278,7 +278,7 @@ static int isdn_x25iface_xmit(struct concap_proto *cprot, struct sk_buff *skb)
        case X25_IFACE_DATA:
                if (*state == WAN_CONNECTED) {
                        skb_pull(skb, 1);
-                       cprot->net_dev->trans_start = jiffies;
+                       netif_trans_update(cprot->net_dev);
                        ret = (cprot->dops->data_req(cprot, skb));
                        /* prepare for future retransmissions */
                        if (ret) skb_push(skb, 1);
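
These three hunks (and several more below in the CAN, LAN, and ethernet drivers) are part of a tree-wide conversion from open-coded dev->trans_start = jiffies to the netif_trans_update() helper. As far as I can tell, the helper updates the timestamp on TX queue 0 and skips the redundant store, roughly:

    /* Approximate shape of netif_trans_update() from <linux/netdevice.h>. */
    static inline void netif_trans_update(struct net_device *dev)
    {
            struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

            if (txq->trans_start != jiffies)
                    txq->trans_start = jiffies;  /* avoid dirtying the line */
    }
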
index 194580f..14d3b37 100644 (file)
@@ -284,6 +284,8 @@ static blk_qc_t md_make_request(struct request_queue *q, struct bio *bio)
         * go away inside make_request
         */
        sectors = bio_sectors(bio);
+       /* bio could be mergeable after passing to the underlying layer */
+       bio->bi_rw &= ~REQ_NOMERGE;
        mddev->pers->make_request(mddev, bio);
 
        cpu = part_stat_lock();
index 2ea12c6..34783a3 100644 (file)
@@ -70,7 +70,6 @@ static void dump_zones(struct mddev *mddev)
                        (unsigned long long)zone_size>>1);
                zone_start = conf->strip_zone[j].zone_end;
        }
-       printk(KERN_INFO "\n");
 }
 
 static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
@@ -85,6 +84,7 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
        struct r0conf *conf = kzalloc(sizeof(*conf), GFP_KERNEL);
        unsigned short blksize = 512;
 
+       *private_conf = ERR_PTR(-ENOMEM);
        if (!conf)
                return -ENOMEM;
        rdev_for_each(rdev1, mddev) {
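
Without the early *private_conf assignment, the kzalloc() failure path returns -ENOMEM but leaves the caller's pointer uninitialized; raid0's takeover helpers return that pointer directly and expect a valid ERR_PTR() on failure. A sketch of the caller contract (function name hypothetical, modeled on the takeover helpers):

    static void *takeover_example(struct mddev *mddev)
    {
            struct r0conf *priv_conf;

            /* On failure priv_conf is now ERR_PTR(-ENOMEM), never junk. */
            create_strip_zones(mddev, &priv_conf);
            return priv_conf;       /* caller checks IS_ERR()/PTR_ERR() */
    }
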
index 8ab8b65..e48c262 100644 (file)
@@ -3502,8 +3502,6 @@ returnbi:
                                dev = &sh->dev[i];
                        } else if (test_bit(R5_Discard, &dev->flags))
                                discard_pending = 1;
-                       WARN_ON(test_bit(R5_SkipCopy, &dev->flags));
-                       WARN_ON(dev->page != dev->orig_page);
                }
 
        r5l_stripe_write_finished(sh);
index 12f5ebb..ad2f3d2 100644 (file)
@@ -1452,13 +1452,6 @@ static int usbvision_probe(struct usb_interface *intf,
        printk(KERN_INFO "%s: %s found\n", __func__,
                                usbvision_device_data[model].model_string);
 
-       /*
-        * this is a security check.
-        * an exploit using an incorrect bInterfaceNumber is known
-        */
-       if (ifnum >= USB_MAXINTERFACES || !dev->actconfig->interface[ifnum])
-               return -ENODEV;
-
        if (usbvision_device_data[model].interface >= 0)
                interface = &dev->actconfig->interface[usbvision_device_data[model].interface]->altsetting[0];
        else if (ifnum < dev->actconfig->desc.bNumInterfaces)
index 5d016f4..9fbcb67 100644 (file)
@@ -1645,7 +1645,7 @@ static int __vb2_wait_for_done_vb(struct vb2_queue *q, int nonblocking)
  * Will sleep if required for nonblocking == false.
  */
 static int __vb2_get_done_vb(struct vb2_queue *q, struct vb2_buffer **vb,
-                               int nonblocking)
+                            void *pb, int nonblocking)
 {
        unsigned long flags;
        int ret;
@@ -1666,10 +1666,10 @@ static int __vb2_get_done_vb(struct vb2_queue *q, struct vb2_buffer **vb,
        /*
         * Only remove the buffer from done_list if v4l2_buffer can handle all
         * the planes.
-        * Verifying planes is NOT necessary since it already has been checked
-        * before the buffer is queued/prepared. So it can never fail.
         */
-       list_del(&(*vb)->done_entry);
+       ret = call_bufop(q, verify_planes_array, *vb, pb);
+       if (!ret)
+               list_del(&(*vb)->done_entry);
        spin_unlock_irqrestore(&q->done_lock, flags);
 
        return ret;
@@ -1748,7 +1748,7 @@ int vb2_core_dqbuf(struct vb2_queue *q, unsigned int *pindex, void *pb,
        struct vb2_buffer *vb = NULL;
        int ret;
 
-       ret = __vb2_get_done_vb(q, &vb, nonblocking);
+       ret = __vb2_get_done_vb(q, &vb, pb, nonblocking);
        if (ret < 0)
                return ret;
 
@@ -2297,6 +2297,16 @@ unsigned int vb2_core_poll(struct vb2_queue *q, struct file *file,
        if (!vb2_is_streaming(q) || q->error)
                return POLLERR;
 
+       /*
+        * If this quirk is set and QBUF hasn't been called yet, then
+        * return POLLERR as well. This only affects capture queues;
+        * output queues will always initialize waiting_for_buffers to
+        * false.
+        * This quirk is set by V4L2 for backwards compatibility reasons.
+        */
+       if (q->quirk_poll_must_check_waiting_for_buffers &&
+           q->waiting_for_buffers && (req_events & (POLLIN | POLLRDNORM)))
+               return POLLERR;
+
        /*
         * For output streams you can call write() as long as there are fewer
         * buffers queued than there are buffers available.
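
The verify_planes_array bufop lets the core ask the higher layer (v4l2, below) whether the userspace buffer can describe all planes before the buffer is taken off done_list; on failure the buffer stays queued instead of being lost. call_bufop() is a vb2-internal macro; approximately, it is a guarded indirect call:

    /* Approximate shape of vb2's call_bufop() macro (sketch). */
    #define call_bufop(q, op, args...)                              \
    ({                                                              \
            int __ret = 0;                                          \
            if ((q) && (q)->buf_ops && (q)->buf_ops->op)            \
                    __ret = (q)->buf_ops->op(args);                 \
            __ret;                                                  \
    })
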
index dbec592..3c3b517 100644 (file)
@@ -49,7 +49,7 @@ struct frame_vector *vb2_create_framevec(unsigned long start,
        vec = frame_vector_create(nr);
        if (!vec)
                return ERR_PTR(-ENOMEM);
-       ret = get_vaddr_frames(start, nr, write, 1, vec);
+       ret = get_vaddr_frames(start & PAGE_MASK, nr, write, true, vec);
        if (ret < 0)
                goto out_destroy;
        /* We accept only complete set of PFNs */
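
Two fixes in one line: get_vaddr_frames() takes a bool for its force argument, and it pins whole pages, so an unaligned start must be rounded down to the page boundary the vector actually maps. The usual alignment arithmetic, for reference:

    /* Page-align a user range before pinning (sketch). */
    unsigned long first = start & PAGE_MASK;                  /* round down */
    unsigned long last  = (start + length - 1) & PAGE_MASK;
    unsigned int  nr    = ((last - first) >> PAGE_SHIFT) + 1; /* page count */
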
index 91f5521..7f366f1 100644 (file)
@@ -74,6 +74,11 @@ static int __verify_planes_array(struct vb2_buffer *vb, const struct v4l2_buffer
        return 0;
 }
 
+static int __verify_planes_array_core(struct vb2_buffer *vb, const void *pb)
+{
+       return __verify_planes_array(vb, pb);
+}
+
 /**
  * __verify_length() - Verify that the bytesused value for each plane fits in
  * the plane length and that the data offset doesn't exceed the bytesused value.
@@ -437,6 +442,7 @@ static int __fill_vb2_buffer(struct vb2_buffer *vb,
 }
 
 static const struct vb2_buf_ops v4l2_buf_ops = {
+       .verify_planes_array    = __verify_planes_array_core,
        .fill_user_buffer       = __fill_v4l2_buffer,
        .fill_vb2_buffer        = __fill_vb2_buffer,
        .copy_timestamp         = __copy_timestamp,
@@ -765,6 +771,12 @@ int vb2_queue_init(struct vb2_queue *q)
        q->is_output = V4L2_TYPE_IS_OUTPUT(q->type);
        q->copy_timestamp = (q->timestamp_flags & V4L2_BUF_FLAG_TIMESTAMP_MASK)
                        == V4L2_BUF_FLAG_TIMESTAMP_COPY;
+       /*
+        * For compatibility with vb1: if QBUF hasn't been called yet, then
+        * return POLLERR as well. This only affects capture queues;
+        * output queues will always initialize waiting_for_buffers to
+        * false.
+        */
+       q->quirk_poll_must_check_waiting_for_buffers = true;
 
        return vb2_core_queue_init(q);
 }
@@ -818,14 +830,6 @@ unsigned int vb2_poll(struct vb2_queue *q, struct file *file, poll_table *wait)
                        poll_wait(file, &fh->wait, wait);
        }
 
-       /*
-        * For compatibility with vb1: if QBUF hasn't been called yet, then
-        * return POLLERR as well. This only affects capture queues, output
-        * queues will always initialize waiting_for_buffers to false.
-        */
-       if (q->waiting_for_buffers && (req_events & (POLLIN | POLLRDNORM)))
-               return POLLERR;
-
        return res | vb2_core_poll(q, file, wait);
 }
 EXPORT_SYMBOL_GPL(vb2_poll);
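
Net effect of the last two hunks: the vb1-compatibility POLLERR check moves out of the v4l2 wrapper into vb2_core_poll(), gated by quirk_poll_must_check_waiting_for_buffers so that only queues initialized through vb2_queue_init() (i.e. V4L2) keep the legacy behavior, while other users of the vb2 core are untouched. The opt-in pattern, reduced to its essentials (field layout is a sketch):

    /* Legacy behavior as an opt-in quirk flag on the core object. */
    struct queue_example {
            unsigned waiting_for_buffers:1;
            unsigned quirk_poll_must_check_waiting_for_buffers:1;
    };

    /* In the core poll path: only quirky (V4L2) queues see POLLERR here. */
    if (q->quirk_poll_must_check_waiting_for_buffers &&
        q->waiting_for_buffers && (req_events & (POLLIN | POLLRDNORM)))
            return POLLERR;
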
index cbe9607..6955c9e 100644 (file)
@@ -791,7 +791,7 @@ mpt_lan_sdu_send (struct sk_buff *skb, struct net_device *dev)
                pSimple->Address.High = 0;
 
        mpt_put_msg_frame (LanCtx, mpt_dev, mf);
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        dioprintk((KERN_INFO MYNAM ": %s/%s: Sending packet. FlagsLength = %08x.\n",
                        IOC_AND_NETDEV_NAMES_s_s(dev),
index 10370f2..7edea9c 100644 (file)
@@ -223,6 +223,13 @@ int __detach_context(struct cxl_context *ctx)
                cxl_ops->link_ok(ctx->afu->adapter, ctx->afu));
        flush_work(&ctx->fault_work); /* Only needed for dedicated process */
 
+       /*
+        * Wait until no further interrupts are presented by the PSL
+        * for this context.
+        */
+       if (cxl_ops->irq_wait)
+               cxl_ops->irq_wait(ctx);
+
        /* release the reference to the group leader and mm handling pid */
        put_pid(ctx->pid);
        put_pid(ctx->glpid);
index 38e21cf..73dc2a3 100644 (file)
@@ -274,6 +274,7 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An     = {0x0A0};
 #define CXL_PSL_DSISR_An_PE (1ull << (63-4))  /* PSL Error (implementation specific) */
 #define CXL_PSL_DSISR_An_AE (1ull << (63-5))  /* AFU Error */
 #define CXL_PSL_DSISR_An_OC (1ull << (63-6))  /* OS Context Warning */
+#define CXL_PSL_DSISR_PENDING (CXL_PSL_DSISR_TRANS | CXL_PSL_DSISR_An_PE | CXL_PSL_DSISR_An_AE | CXL_PSL_DSISR_An_OC)
 /* NOTE: Bits 32:63 are undefined if DSISR[DS] = 1 */
 #define CXL_PSL_DSISR_An_M  DSISR_NOHPTE      /* PTE not found */
 #define CXL_PSL_DSISR_An_P  DSISR_PROTFAULT   /* Storage protection violation */
@@ -855,6 +856,7 @@ struct cxl_backend_ops {
                                        u64 dsisr, u64 errstat);
        irqreturn_t (*psl_interrupt)(int irq, void *data);
        int (*ack_irq)(struct cxl_context *ctx, u64 tfc, u64 psl_reset_mask);
+       void (*irq_wait)(struct cxl_context *ctx);
        int (*attach_process)(struct cxl_context *ctx, bool kernel,
                        u64 wed, u64 amr);
        int (*detach_process)(struct cxl_context *ctx);
index be646dc..8def455 100644 (file)
@@ -203,7 +203,6 @@ unsigned int cxl_map_irq(struct cxl *adapter, irq_hw_number_t hwirq,
 void cxl_unmap_irq(unsigned int virq, void *cookie)
 {
        free_irq(virq, cookie);
-       irq_dispose_mapping(virq);
 }
 
 int cxl_register_one_irq(struct cxl *adapter,
index 387fcbd..ecf7557 100644 (file)
@@ -14,6 +14,7 @@
 #include <linux/mutex.h>
 #include <linux/mm.h>
 #include <linux/uaccess.h>
+#include <linux/delay.h>
 #include <asm/synch.h>
 #include <misc/cxl-base.h>
 
@@ -797,6 +798,35 @@ static irqreturn_t native_irq_multiplexed(int irq, void *data)
        return fail_psl_irq(afu, &irq_info);
 }
 
+static void native_irq_wait(struct cxl_context *ctx)
+{
+       u64 dsisr;
+       int timeout = 1000;
+       int ph;
+
+       /*
+        * Wait until no further interrupts are presented by the PSL
+        * for this context.
+        */
+       while (timeout--) {
+               ph = cxl_p2n_read(ctx->afu, CXL_PSL_PEHandle_An) & 0xffff;
+               if (ph != ctx->pe)
+                       return;
+               dsisr = cxl_p2n_read(ctx->afu, CXL_PSL_DSISR_An);
+               if ((dsisr & CXL_PSL_DSISR_PENDING) == 0)
+                       return;
+               /*
+                * We are waiting for the workqueue to process our
+                * irq, so we need to let it run here.
+                */
+               msleep(1);
+       }
+
+       dev_warn(&ctx->afu->dev, "WARNING: waiting on DSI for PE %i"
+                " DSISR %016llx!\n", ph, dsisr);
+       return;
+}
+
 static irqreturn_t native_slice_irq_err(int irq, void *data)
 {
        struct cxl_afu *afu = data;
@@ -1076,6 +1106,7 @@ const struct cxl_backend_ops cxl_native_ops = {
        .handle_psl_slice_error = native_handle_psl_slice_error,
        .psl_interrupt = NULL,
        .ack_irq = native_ack_irq,
+       .irq_wait = native_irq_wait,
        .attach_process = native_attach_process,
        .detach_process = native_detach_process,
        .support_attributes = native_support_attributes,
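
native_irq_wait() is a bounded poll: up to a thousand 1 ms naps, each checking whether the PSL still points at this context's PE with a pending DSISR status. The msleep() matters because the interrupt is handled from a workqueue that must get CPU time for the condition to clear. The generic skeleton it instantiates:

    /* Poll-with-timeout skeleton; hw_is_quiet() is hypothetical. */
    static void wait_for_quiesce(struct device *dev)
    {
            int timeout = 1000;

            while (timeout--) {
                    if (hw_is_quiet())
                            return;
                    msleep(1);      /* let workqueues make progress */
            }
            dev_warn(dev, "timed out waiting for hardware to quiesce\n");
    }
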
index 04feea8..e657af0 100644 (file)
@@ -97,6 +97,7 @@ config MMC_RICOH_MMC
 config MMC_SDHCI_ACPI
        tristate "SDHCI support for ACPI enumerated SDHCI controllers"
        depends on MMC_SDHCI && ACPI
+       select IOSF_MBI if X86
        help
          This selects support for ACPI enumerated SDHCI controllers,
          identified by ACPI Compatibility ID PNP0D40 or specific
index 6839e41..bed6a49 100644 (file)
 #include <linux/mmc/pm.h>
 #include <linux/mmc/slot-gpio.h>
 
+#ifdef CONFIG_X86
+#include <asm/cpu_device_id.h>
+#include <asm/iosf_mbi.h>
+#endif
+
 #include "sdhci.h"
 
 enum {
@@ -116,6 +121,75 @@ static const struct sdhci_acpi_chip sdhci_acpi_chip_int = {
        .ops = &sdhci_acpi_ops_int,
 };
 
+#ifdef CONFIG_X86
+
+static bool sdhci_acpi_byt(void)
+{
+       static const struct x86_cpu_id byt[] = {
+               { X86_VENDOR_INTEL, 6, 0x37 },
+               {}
+       };
+
+       return x86_match_cpu(byt);
+}
+
+#define BYT_IOSF_SCCEP                 0x63
+#define BYT_IOSF_OCP_NETCTRL0          0x1078
+#define BYT_IOSF_OCP_TIMEOUT_BASE      GENMASK(10, 8)
+
+static void sdhci_acpi_byt_setting(struct device *dev)
+{
+       u32 val = 0;
+
+       if (!sdhci_acpi_byt())
+               return;
+
+       if (iosf_mbi_read(BYT_IOSF_SCCEP, MBI_CR_READ, BYT_IOSF_OCP_NETCTRL0,
+                         &val)) {
+               dev_err(dev, "%s read error\n", __func__);
+               return;
+       }
+
+       if (!(val & BYT_IOSF_OCP_TIMEOUT_BASE))
+               return;
+
+       val &= ~BYT_IOSF_OCP_TIMEOUT_BASE;
+
+       if (iosf_mbi_write(BYT_IOSF_SCCEP, MBI_CR_WRITE, BYT_IOSF_OCP_NETCTRL0,
+                          val)) {
+               dev_err(dev, "%s write error\n", __func__);
+               return;
+       }
+
+       dev_dbg(dev, "%s completed\n", __func__);
+}
+
+static bool sdhci_acpi_byt_defer(struct device *dev)
+{
+       if (!sdhci_acpi_byt())
+               return false;
+
+       if (!iosf_mbi_available())
+               return true;
+
+       sdhci_acpi_byt_setting(dev);
+
+       return false;
+}
+
+#else
+
+static inline void sdhci_acpi_byt_setting(struct device *dev)
+{
+}
+
+static inline bool sdhci_acpi_byt_defer(struct device *dev)
+{
+       return false;
+}
+
+#endif
+
 static int bxt_get_cd(struct mmc_host *mmc)
 {
        int gpio_cd = mmc_gpio_get_cd(mmc);
@@ -322,6 +396,9 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
        if (acpi_bus_get_status(device) || !device->status.present)
                return -ENODEV;
 
+       if (sdhci_acpi_byt_defer(dev))
+               return -EPROBE_DEFER;
+
        hid = acpi_device_hid(device);
        uid = device->pnp.unique_id;
 
@@ -447,6 +524,8 @@ static int sdhci_acpi_resume(struct device *dev)
 {
        struct sdhci_acpi_host *c = dev_get_drvdata(dev);
 
+       sdhci_acpi_byt_setting(&c->pdev->dev);
+
        return sdhci_resume_host(c->host);
 }
 
@@ -470,6 +549,8 @@ static int sdhci_acpi_runtime_resume(struct device *dev)
 {
        struct sdhci_acpi_host *c = dev_get_drvdata(dev);
 
+       sdhci_acpi_byt_setting(&c->pdev->dev);
+
        return sdhci_runtime_resume_host(c->host);
 }
 
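
The Baytrail path depends on the IOSF sideband (hence the new "select IOSF_MBI if X86" in Kconfig): if the MBI driver isn't up yet, the probe returns -EPROBE_DEFER so the driver core retries later, and the OCP timeout tweak is reapplied on both resume paths since firmware may undo it across suspend. The deferral idiom in isolation:

    /* Sketch: defer probing until a required facility is available. */
    static int example_probe(struct platform_device *pdev)
    {
            if (sdhci_acpi_byt_defer(&pdev->dev))
                    return -EPROBE_DEFER;   /* driver core will retry */

            /* ... normal probe continues ... */
            return 0;
    }
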
index 8372a41..7fc8b7a 100644 (file)
@@ -1129,6 +1129,11 @@ static int sunxi_mmc_probe(struct platform_device *pdev)
                                  MMC_CAP_1_8V_DDR |
                                  MMC_CAP_ERASE | MMC_CAP_SDIO_IRQ;
 
+       /* TODO MMC DDR is not working on A80 */
+       if (of_device_is_compatible(pdev->dev.of_node,
+                                   "allwinner,sun9i-a80-mmc"))
+               mmc->caps &= ~MMC_CAP_1_8V_DDR;
+
        ret = mmc_of_parse(mmc);
        if (ret)
                goto error_free_dma;
index a24c18e..befd67d 100644 (file)
@@ -62,9 +62,8 @@ config DUMMY
          this device is consigned into oblivion) with a configurable IP
          address. It is most commonly used in order to make your currently
          inactive SLIP address seem like a real address for local programs.
-         If you use SLIP or PPP, you might want to say Y here. Since this
-         thing often comes in handy, the default is Y. It won't enlarge your
-         kernel either. What a deal. Read about it in the Network
+         If you use SLIP or PPP, you might want to say Y here. It won't
+         enlarge your kernel. What a deal. Read about it in the Network
          Administrator's Guide, available from
          <http://www.tldp.org/docs.html#guide>.
 
index 7f2a032..1b2e921 100644 (file)
@@ -861,7 +861,7 @@ static void cops_timeout(struct net_device *dev)
        }
        printk(KERN_WARNING "%s: Transmit timed out.\n", dev->name);
        cops_jumpstart(dev);    /* Restart the card. */
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
 }
 
index e36b740..acb708f 100644 (file)
@@ -276,7 +276,7 @@ static netdev_tx_t mscan_start_xmit(struct sk_buff *skb, struct net_device *dev)
        out_8(&regs->cantflg, 1 << buf_id);
 
        if (!test_bit(F_TX_PROGRESS, &priv->flags))
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
 
        list_add_tail(&priv->tx_queue[buf_id].list, &priv->tx_head);
 
@@ -469,7 +469,7 @@ static irqreturn_t mscan_isr(int irq, void *dev_id)
                        clear_bit(F_TX_PROGRESS, &priv->flags);
                        priv->cur_pri = 0;
                } else {
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                }
 
                if (!test_bit(F_TX_WAIT_ALL, &priv->flags))
index 3400fd1..71f0e79 100644 (file)
@@ -521,7 +521,7 @@ static void ems_usb_write_bulk_callback(struct urb *urb)
        if (urb->status)
                netdev_info(netdev, "Tx URB aborted (%d)\n", urb->status);
 
-       netdev->trans_start = jiffies;
+       netif_trans_update(netdev);
 
        /* transmission complete interrupt */
        netdev->stats.tx_packets++;
@@ -835,7 +835,7 @@ static netdev_tx_t ems_usb_start_xmit(struct sk_buff *skb, struct net_device *ne
                        stats->tx_dropped++;
                }
        } else {
-               netdev->trans_start = jiffies;
+               netif_trans_update(netdev);
 
                /* Slow down tx path */
                if (atomic_read(&dev->active_tx_urbs) >= MAX_TX_URBS ||
index 113e64f..784a900 100644 (file)
@@ -480,7 +480,7 @@ static void esd_usb2_write_bulk_callback(struct urb *urb)
        if (urb->status)
                netdev_info(netdev, "Tx URB aborted (%d)\n", urb->status);
 
-       netdev->trans_start = jiffies;
+       netif_trans_update(netdev);
 }
 
 static ssize_t show_firmware(struct device *d,
@@ -820,7 +820,7 @@ static netdev_tx_t esd_usb2_start_xmit(struct sk_buff *skb,
                goto releasebuf;
        }
 
-       netdev->trans_start = jiffies;
+       netif_trans_update(netdev);
 
        /*
         * Release our reference to this URB, the USB core will eventually free
index 5a2e341..bfb91d8 100644 (file)
@@ -274,7 +274,7 @@ static void peak_usb_write_bulk_callback(struct urb *urb)
                netdev->stats.tx_bytes += context->data_len;
 
                /* prevent tx timeout */
-               netdev->trans_start = jiffies;
+               netif_trans_update(netdev);
                break;
 
        default:
@@ -373,7 +373,7 @@ static netdev_tx_t peak_usb_ndo_start_xmit(struct sk_buff *skb,
                        stats->tx_dropped++;
                }
        } else {
-               netdev->trans_start = jiffies;
+               netif_trans_update(netdev);
 
                /* slow down tx path */
                if (atomic_read(&dev->active_tx_urbs) >= PCAN_USB_MAX_TX_URBS)
index 64c016a..221f5f0 100644 (file)
@@ -1106,7 +1106,7 @@ e100_send_packet(struct sk_buff *skb, struct net_device *dev)
 
        myNextTxDesc->skb = skb;
 
-       dev->trans_start = jiffies; /* NETIF_F_LLTX driver :( */
+       netif_trans_update(dev); /* NETIF_F_LLTX driver :( */
 
        e100_hardware_send_packet(np, buf, skb->len);
 
index 90ba003..200663c 100644 (file)
@@ -1,10 +1,6 @@
 menu "Distributed Switch Architecture drivers"
        depends on HAVE_NET_DSA
 
-config NET_DSA_MV88E6XXX
-       tristate
-       default n
-
 config NET_DSA_MV88E6060
        tristate "Marvell 88E6060 ethernet switch chip support"
        depends on NET_DSA
@@ -13,46 +9,13 @@ config NET_DSA_MV88E6060
          This enables support for the Marvell 88E6060 ethernet switch
          chip.
 
-config NET_DSA_MV88E6XXX_NEED_PPU
-       bool
-       default n
-
-config NET_DSA_MV88E6131
-       tristate "Marvell 88E6085/6095/6095F/6131 ethernet switch chip support"
-       depends on NET_DSA
-       select NET_DSA_MV88E6XXX
-       select NET_DSA_MV88E6XXX_NEED_PPU
-       select NET_DSA_TAG_DSA
-       ---help---
-         This enables support for the Marvell 88E6085/6095/6095F/6131
-         ethernet switch chips.
-
-config NET_DSA_MV88E6123
-       tristate "Marvell 88E6123/6161/6165 ethernet switch chip support"
-       depends on NET_DSA
-       select NET_DSA_MV88E6XXX
-       select NET_DSA_TAG_EDSA
-       ---help---
-         This enables support for the Marvell 88E6123/6161/6165
-         ethernet switch chips.
-
-config NET_DSA_MV88E6171
-       tristate "Marvell 88E6171/6175/6350/6351 ethernet switch chip support"
-       depends on NET_DSA
-       select NET_DSA_MV88E6XXX
-       select NET_DSA_TAG_EDSA
-       ---help---
-         This enables support for the Marvell 88E6171/6175/6350/6351
-         ethernet switches chips.
-
-config NET_DSA_MV88E6352
-       tristate "Marvell 88E6172/6176/6320/6321/6352 ethernet switch chip support"
+config NET_DSA_MV88E6XXX
+       tristate "Marvell 88E6xxx Ethernet switch chip support"
        depends on NET_DSA
-       select NET_DSA_MV88E6XXX
        select NET_DSA_TAG_EDSA
        ---help---
-         This enables support for the Marvell 88E6172, 88E6176, 88E6320,
-         88E6321 and 88E6352 ethernet switch chips.
+         This enables support for most of the Marvell 88E6xxx models of
+         Ethernet switch chips, except 88E6060.
 
 config NET_DSA_BCM_SF2
        tristate "Broadcom Starfighter 2 Ethernet switch support"
index a6e0993..76b751d 100644 (file)
@@ -1,16 +1,3 @@
 obj-$(CONFIG_NET_DSA_MV88E6060) += mv88e6060.o
-obj-$(CONFIG_NET_DSA_MV88E6XXX) += mv88e6xxx_drv.o
-mv88e6xxx_drv-y += mv88e6xxx.o
-ifdef CONFIG_NET_DSA_MV88E6123
-mv88e6xxx_drv-y += mv88e6123.o
-endif
-ifdef CONFIG_NET_DSA_MV88E6131
-mv88e6xxx_drv-y += mv88e6131.o
-endif
-ifdef CONFIG_NET_DSA_MV88E6352
-mv88e6xxx_drv-y += mv88e6352.o
-endif
-ifdef CONFIG_NET_DSA_MV88E6171
-mv88e6xxx_drv-y += mv88e6171.o
-endif
+obj-$(CONFIG_NET_DSA_MV88E6XXX) += mv88e6xxx.o
 obj-$(CONFIG_NET_DSA_BCM_SF2)  += bcm_sf2.o
diff --git a/drivers/net/dsa/mv88e6123.c b/drivers/net/dsa/mv88e6123.c
deleted file mode 100644 (file)
index 534ebc8..0000000
+++ /dev/null
@@ -1,126 +0,0 @@
-/*
- * net/dsa/mv88e6123_61_65.c - Marvell 88e6123/6161/6165 switch chip support
- * Copyright (c) 2008-2009 Marvell Semiconductor
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- */
-
-#include <linux/delay.h>
-#include <linux/jiffies.h>
-#include <linux/list.h>
-#include <linux/module.h>
-#include <linux/netdevice.h>
-#include <linux/phy.h>
-#include <net/dsa.h>
-#include "mv88e6xxx.h"
-
-static const struct mv88e6xxx_info mv88e6123_table[] = {
-       {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6123,
-               .family = MV88E6XXX_FAMILY_6165,
-               .name = "Marvell 88E6123",
-               .num_databases = 4096,
-               .num_ports = 3,
-       }, {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6161,
-               .family = MV88E6XXX_FAMILY_6165,
-               .name = "Marvell 88E6161",
-               .num_databases = 4096,
-               .num_ports = 6,
-       }, {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6165,
-               .family = MV88E6XXX_FAMILY_6165,
-               .name = "Marvell 88E6165",
-               .num_databases = 4096,
-               .num_ports = 6,
-       }
-};
-
-static const char *mv88e6123_drv_probe(struct device *dsa_dev,
-                                      struct device *host_dev, int sw_addr,
-                                      void **priv)
-{
-       return mv88e6xxx_drv_probe(dsa_dev, host_dev, sw_addr, priv,
-                                  mv88e6123_table,
-                                  ARRAY_SIZE(mv88e6123_table));
-}
-
-static int mv88e6123_setup_global(struct dsa_switch *ds)
-{
-       u32 upstream_port = dsa_upstream_port(ds);
-       int ret;
-       u32 reg;
-
-       ret = mv88e6xxx_setup_global(ds);
-       if (ret)
-               return ret;
-
-       /* Disable the PHY polling unit (since there won't be any
-        * external PHYs to poll), don't discard packets with
-        * excessive collisions, and mask all interrupt sources.
-        */
-       ret = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_CONTROL, 0x0000);
-       if (ret)
-               return ret;
-
-       /* Configure the upstream port, and configure the upstream
-        * port as the port to which ingress and egress monitor frames
-        * are to be sent.
-        */
-       reg = upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
-               upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
-               upstream_port << GLOBAL_MONITOR_CONTROL_ARP_SHIFT;
-       ret = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_MONITOR_CONTROL, reg);
-       if (ret)
-               return ret;
-
-       /* Disable remote management for now, and set the switch's
-        * DSA device number.
-        */
-       return mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_CONTROL_2,
-                                  ds->index & 0x1f);
-}
-
-static int mv88e6123_setup(struct dsa_switch *ds)
-{
-       int ret;
-
-       ret = mv88e6xxx_setup_common(ds);
-       if (ret < 0)
-               return ret;
-
-       ret = mv88e6xxx_switch_reset(ds, false);
-       if (ret < 0)
-               return ret;
-
-       ret = mv88e6123_setup_global(ds);
-       if (ret < 0)
-               return ret;
-
-       return mv88e6xxx_setup_ports(ds);
-}
-
-struct dsa_switch_driver mv88e6123_switch_driver = {
-       .tag_protocol           = DSA_TAG_PROTO_EDSA,
-       .probe                  = mv88e6123_drv_probe,
-       .setup                  = mv88e6123_setup,
-       .set_addr               = mv88e6xxx_set_addr_indirect,
-       .phy_read               = mv88e6xxx_phy_read,
-       .phy_write              = mv88e6xxx_phy_write,
-       .get_strings            = mv88e6xxx_get_strings,
-       .get_ethtool_stats      = mv88e6xxx_get_ethtool_stats,
-       .get_sset_count         = mv88e6xxx_get_sset_count,
-       .adjust_link            = mv88e6xxx_adjust_link,
-#ifdef CONFIG_NET_DSA_HWMON
-       .get_temp               = mv88e6xxx_get_temp,
-#endif
-       .get_regs_len           = mv88e6xxx_get_regs_len,
-       .get_regs               = mv88e6xxx_get_regs,
-};
-
-MODULE_ALIAS("platform:mv88e6123");
-MODULE_ALIAS("platform:mv88e6161");
-MODULE_ALIAS("platform:mv88e6165");
diff --git a/drivers/net/dsa/mv88e6131.c b/drivers/net/dsa/mv88e6131.c
deleted file mode 100644 (file)
index c3eb9a8..0000000
+++ /dev/null
@@ -1,200 +0,0 @@
-/*
- * net/dsa/mv88e6131.c - Marvell 88e6095/6095f/6131 switch chip support
- * Copyright (c) 2008-2009 Marvell Semiconductor
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- */
-
-#include <linux/delay.h>
-#include <linux/jiffies.h>
-#include <linux/list.h>
-#include <linux/module.h>
-#include <linux/netdevice.h>
-#include <linux/phy.h>
-#include <net/dsa.h>
-#include "mv88e6xxx.h"
-
-static const struct mv88e6xxx_info mv88e6131_table[] = {
-       {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6095,
-               .family = MV88E6XXX_FAMILY_6095,
-               .name = "Marvell 88E6095/88E6095F",
-               .num_databases = 256,
-               .num_ports = 11,
-       }, {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6085,
-               .family = MV88E6XXX_FAMILY_6097,
-               .name = "Marvell 88E6085",
-               .num_databases = 4096,
-               .num_ports = 10,
-       }, {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6131,
-               .family = MV88E6XXX_FAMILY_6185,
-               .name = "Marvell 88E6131",
-               .num_databases = 256,
-               .num_ports = 8,
-       }, {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6185,
-               .family = MV88E6XXX_FAMILY_6185,
-               .name = "Marvell 88E6185",
-               .num_databases = 256,
-               .num_ports = 10,
-       }
-};
-
-static const char *mv88e6131_drv_probe(struct device *dsa_dev,
-                                      struct device *host_dev, int sw_addr,
-                                      void **priv)
-{
-       return mv88e6xxx_drv_probe(dsa_dev, host_dev, sw_addr, priv,
-                                  mv88e6131_table,
-                                  ARRAY_SIZE(mv88e6131_table));
-}
-
-static int mv88e6131_setup_global(struct dsa_switch *ds)
-{
-       u32 upstream_port = dsa_upstream_port(ds);
-       int ret;
-       u32 reg;
-
-       ret = mv88e6xxx_setup_global(ds);
-       if (ret)
-               return ret;
-
-       /* Enable the PHY polling unit, don't discard packets with
-        * excessive collisions, use a weighted fair queueing scheme
-        * to arbitrate between packet queues, set the maximum frame
-        * size to 1632, and mask all interrupt sources.
-        */
-       ret = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_CONTROL,
-                                 GLOBAL_CONTROL_PPU_ENABLE |
-                                 GLOBAL_CONTROL_MAX_FRAME_1632);
-       if (ret)
-               return ret;
-
-       /* Set the VLAN ethertype to 0x8100. */
-       ret = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_CORE_TAG_TYPE, 0x8100);
-       if (ret)
-               return ret;
-
-       /* Disable ARP mirroring, and configure the upstream port as
-        * the port to which ingress and egress monitor frames are to
-        * be sent.
-        */
-       reg = upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
-               upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
-               GLOBAL_MONITOR_CONTROL_ARP_DISABLED;
-       ret = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_MONITOR_CONTROL, reg);
-       if (ret)
-               return ret;
-
-       /* Disable cascade port functionality unless this device
-        * is used in a cascade configuration, and set the switch's
-        * DSA device number.
-        */
-       if (ds->dst->pd->nr_chips > 1)
-               ret = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_CONTROL_2,
-                                         GLOBAL_CONTROL_2_MULTIPLE_CASCADE |
-                                         (ds->index & 0x1f));
-       else
-               ret = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_CONTROL_2,
-                                         GLOBAL_CONTROL_2_NO_CASCADE |
-                                         (ds->index & 0x1f));
-       if (ret)
-               return ret;
-
-       /* Force the priority of IGMP/MLD snoop frames and ARP frames
-        * to the highest setting.
-        */
-       return mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_PRIO_OVERRIDE,
-                                  GLOBAL2_PRIO_OVERRIDE_FORCE_SNOOP |
-                                  7 << GLOBAL2_PRIO_OVERRIDE_SNOOP_SHIFT |
-                                  GLOBAL2_PRIO_OVERRIDE_FORCE_ARP |
-                                  7 << GLOBAL2_PRIO_OVERRIDE_ARP_SHIFT);
-}
-
-static int mv88e6131_setup(struct dsa_switch *ds)
-{
-       int ret;
-
-       ret = mv88e6xxx_setup_common(ds);
-       if (ret < 0)
-               return ret;
-
-       mv88e6xxx_ppu_state_init(ds);
-
-       ret = mv88e6xxx_switch_reset(ds, false);
-       if (ret < 0)
-               return ret;
-
-       ret = mv88e6131_setup_global(ds);
-       if (ret < 0)
-               return ret;
-
-       return mv88e6xxx_setup_ports(ds);
-}
-
-static int mv88e6131_port_to_phy_addr(struct dsa_switch *ds, int port)
-{
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
-       if (port >= 0 && port < ps->info->num_ports)
-               return port;
-
-       return -EINVAL;
-}
-
-static int
-mv88e6131_phy_read(struct dsa_switch *ds, int port, int regnum)
-{
-       int addr = mv88e6131_port_to_phy_addr(ds, port);
-
-       if (addr < 0)
-               return addr;
-
-       return mv88e6xxx_phy_read_ppu(ds, addr, regnum);
-}
-
-static int
-mv88e6131_phy_write(struct dsa_switch *ds,
-                             int port, int regnum, u16 val)
-{
-       int addr = mv88e6131_port_to_phy_addr(ds, port);
-
-       if (addr < 0)
-               return addr;
-
-       return mv88e6xxx_phy_write_ppu(ds, addr, regnum, val);
-}
-
-struct dsa_switch_driver mv88e6131_switch_driver = {
-       .tag_protocol           = DSA_TAG_PROTO_DSA,
-       .probe                  = mv88e6131_drv_probe,
-       .setup                  = mv88e6131_setup,
-       .set_addr               = mv88e6xxx_set_addr_direct,
-       .phy_read               = mv88e6131_phy_read,
-       .phy_write              = mv88e6131_phy_write,
-       .get_strings            = mv88e6xxx_get_strings,
-       .get_ethtool_stats      = mv88e6xxx_get_ethtool_stats,
-       .get_sset_count         = mv88e6xxx_get_sset_count,
-       .adjust_link            = mv88e6xxx_adjust_link,
-       .port_bridge_join       = mv88e6xxx_port_bridge_join,
-       .port_bridge_leave      = mv88e6xxx_port_bridge_leave,
-       .port_vlan_filtering    = mv88e6xxx_port_vlan_filtering,
-       .port_vlan_prepare      = mv88e6xxx_port_vlan_prepare,
-       .port_vlan_add          = mv88e6xxx_port_vlan_add,
-       .port_vlan_del          = mv88e6xxx_port_vlan_del,
-       .port_vlan_dump         = mv88e6xxx_port_vlan_dump,
-       .port_fdb_prepare       = mv88e6xxx_port_fdb_prepare,
-       .port_fdb_add           = mv88e6xxx_port_fdb_add,
-       .port_fdb_del           = mv88e6xxx_port_fdb_del,
-       .port_fdb_dump          = mv88e6xxx_port_fdb_dump,
-};
-
-MODULE_ALIAS("platform:mv88e6085");
-MODULE_ALIAS("platform:mv88e6095");
-MODULE_ALIAS("platform:mv88e6095f");
-MODULE_ALIAS("platform:mv88e6131");
diff --git a/drivers/net/dsa/mv88e6171.c b/drivers/net/dsa/mv88e6171.c
deleted file mode 100644 (file)
index 841ffe1..0000000
+++ /dev/null
@@ -1,147 +0,0 @@
-/* net/dsa/mv88e6171.c - Marvell 88e6171 switch chip support
- * Copyright (c) 2008-2009 Marvell Semiconductor
- * Copyright (c) 2014 Claudio Leite <leitec@staticky.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- */
-
-#include <linux/delay.h>
-#include <linux/jiffies.h>
-#include <linux/list.h>
-#include <linux/module.h>
-#include <linux/netdevice.h>
-#include <linux/phy.h>
-#include <net/dsa.h>
-#include "mv88e6xxx.h"
-
-static const struct mv88e6xxx_info mv88e6171_table[] = {
-       {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6171,
-               .family = MV88E6XXX_FAMILY_6351,
-               .name = "Marvell 88E6171",
-               .num_databases = 4096,
-               .num_ports = 7,
-       }, {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6175,
-               .family = MV88E6XXX_FAMILY_6351,
-               .name = "Marvell 88E6175",
-               .num_databases = 4096,
-               .num_ports = 7,
-       }, {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6350,
-               .family = MV88E6XXX_FAMILY_6351,
-               .name = "Marvell 88E6350",
-               .num_databases = 4096,
-               .num_ports = 7,
-       }, {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6351,
-               .family = MV88E6XXX_FAMILY_6351,
-               .name = "Marvell 88E6351",
-               .num_databases = 4096,
-               .num_ports = 7,
-       }
-};
-
-static const char *mv88e6171_drv_probe(struct device *dsa_dev,
-                                      struct device *host_dev, int sw_addr,
-                                      void **priv)
-{
-       return mv88e6xxx_drv_probe(dsa_dev, host_dev, sw_addr, priv,
-                                  mv88e6171_table,
-                                  ARRAY_SIZE(mv88e6171_table));
-}
-
-static int mv88e6171_setup_global(struct dsa_switch *ds)
-{
-       u32 upstream_port = dsa_upstream_port(ds);
-       int ret;
-       u32 reg;
-
-       ret = mv88e6xxx_setup_global(ds);
-       if (ret)
-               return ret;
-
-       /* Discard packets with excessive collisions, mask all
-        * interrupt sources, enable PPU.
-        */
-       ret = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_CONTROL,
-                                 GLOBAL_CONTROL_PPU_ENABLE |
-                                 GLOBAL_CONTROL_DISCARD_EXCESS);
-       if (ret)
-               return ret;
-
-       /* Configure the upstream port, and configure the upstream
-        * port as the port to which ingress and egress monitor frames
-        * are to be sent.
-        */
-       reg = upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
-               upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
-               upstream_port << GLOBAL_MONITOR_CONTROL_ARP_SHIFT |
-               upstream_port << GLOBAL_MONITOR_CONTROL_MIRROR_SHIFT;
-       ret = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_MONITOR_CONTROL, reg);
-       if (ret)
-               return ret;
-
-       /* Disable remote management for now, and set the switch's
-        * DSA device number.
-        */
-       return mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_CONTROL_2,
-                                  ds->index & 0x1f);
-}
-
-static int mv88e6171_setup(struct dsa_switch *ds)
-{
-       int ret;
-
-       ret = mv88e6xxx_setup_common(ds);
-       if (ret < 0)
-               return ret;
-
-       ret = mv88e6xxx_switch_reset(ds, true);
-       if (ret < 0)
-               return ret;
-
-       ret = mv88e6171_setup_global(ds);
-       if (ret < 0)
-               return ret;
-
-       return mv88e6xxx_setup_ports(ds);
-}
-
-struct dsa_switch_driver mv88e6171_switch_driver = {
-       .tag_protocol           = DSA_TAG_PROTO_EDSA,
-       .probe                  = mv88e6171_drv_probe,
-       .setup                  = mv88e6171_setup,
-       .set_addr               = mv88e6xxx_set_addr_indirect,
-       .phy_read               = mv88e6xxx_phy_read_indirect,
-       .phy_write              = mv88e6xxx_phy_write_indirect,
-       .get_strings            = mv88e6xxx_get_strings,
-       .get_ethtool_stats      = mv88e6xxx_get_ethtool_stats,
-       .get_sset_count         = mv88e6xxx_get_sset_count,
-       .adjust_link            = mv88e6xxx_adjust_link,
-#ifdef CONFIG_NET_DSA_HWMON
-       .get_temp               = mv88e6xxx_get_temp,
-#endif
-       .get_regs_len           = mv88e6xxx_get_regs_len,
-       .get_regs               = mv88e6xxx_get_regs,
-       .port_bridge_join       = mv88e6xxx_port_bridge_join,
-       .port_bridge_leave      = mv88e6xxx_port_bridge_leave,
-       .port_stp_state_set     = mv88e6xxx_port_stp_state_set,
-       .port_vlan_filtering    = mv88e6xxx_port_vlan_filtering,
-       .port_vlan_prepare      = mv88e6xxx_port_vlan_prepare,
-       .port_vlan_add          = mv88e6xxx_port_vlan_add,
-       .port_vlan_del          = mv88e6xxx_port_vlan_del,
-       .port_vlan_dump         = mv88e6xxx_port_vlan_dump,
-       .port_fdb_prepare       = mv88e6xxx_port_fdb_prepare,
-       .port_fdb_add           = mv88e6xxx_port_fdb_add,
-       .port_fdb_del           = mv88e6xxx_port_fdb_del,
-       .port_fdb_dump          = mv88e6xxx_port_fdb_dump,
-};
-
-MODULE_ALIAS("platform:mv88e6171");
-MODULE_ALIAS("platform:mv88e6175");
-MODULE_ALIAS("platform:mv88e6350");
-MODULE_ALIAS("platform:mv88e6351");
diff --git a/drivers/net/dsa/mv88e6352.c b/drivers/net/dsa/mv88e6352.c
deleted file mode 100644 (file)
index 4afc24d..0000000
+++ /dev/null
@@ -1,373 +0,0 @@
-/*
- * net/dsa/mv88e6352.c - Marvell 88e6352 switch chip support
- *
- * Copyright (c) 2014 Guenter Roeck
- *
- * Derived from mv88e6123_61_65.c
- * Copyright (c) 2008-2009 Marvell Semiconductor
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- */
-
-#include <linux/delay.h>
-#include <linux/jiffies.h>
-#include <linux/list.h>
-#include <linux/module.h>
-#include <linux/netdevice.h>
-#include <linux/platform_device.h>
-#include <linux/phy.h>
-#include <net/dsa.h>
-#include "mv88e6xxx.h"
-
-static const struct mv88e6xxx_info mv88e6352_table[] = {
-       {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6320,
-               .family = MV88E6XXX_FAMILY_6320,
-               .name = "Marvell 88E6320",
-               .num_databases = 4096,
-               .num_ports = 7,
-       }, {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6321,
-               .family = MV88E6XXX_FAMILY_6320,
-               .name = "Marvell 88E6321",
-               .num_databases = 4096,
-               .num_ports = 7,
-       }, {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6172,
-               .family = MV88E6XXX_FAMILY_6352,
-               .name = "Marvell 88E6172",
-               .num_databases = 4096,
-               .num_ports = 7,
-       }, {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6176,
-               .family = MV88E6XXX_FAMILY_6352,
-               .name = "Marvell 88E6176",
-               .num_databases = 4096,
-               .num_ports = 7,
-       }, {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6240,
-               .family = MV88E6XXX_FAMILY_6352,
-               .name = "Marvell 88E6240",
-               .num_databases = 4096,
-               .num_ports = 7,
-       }, {
-               .prod_num = PORT_SWITCH_ID_PROD_NUM_6352,
-               .family = MV88E6XXX_FAMILY_6352,
-               .name = "Marvell 88E6352",
-               .num_databases = 4096,
-               .num_ports = 7,
-       }
-};
-
-static const char *mv88e6352_drv_probe(struct device *dsa_dev,
-                                      struct device *host_dev, int sw_addr,
-                                      void **priv)
-{
-       return mv88e6xxx_drv_probe(dsa_dev, host_dev, sw_addr, priv,
-                                  mv88e6352_table,
-                                  ARRAY_SIZE(mv88e6352_table));
-}
-
-static int mv88e6352_setup_global(struct dsa_switch *ds)
-{
-       u32 upstream_port = dsa_upstream_port(ds);
-       int ret;
-       u32 reg;
-
-       ret = mv88e6xxx_setup_global(ds);
-       if (ret)
-               return ret;
-
-       /* Discard packets with excessive collisions,
-        * mask all interrupt sources, enable PPU (bit 14, undocumented).
-        */
-       ret = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_CONTROL,
-                                 GLOBAL_CONTROL_PPU_ENABLE |
-                                 GLOBAL_CONTROL_DISCARD_EXCESS);
-       if (ret)
-               return ret;
-
-       /* Configure the upstream port, and configure the upstream
-        * port as the port to which ingress and egress monitor frames
-        * are to be sent.
-        */
-       reg = upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
-               upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
-               upstream_port << GLOBAL_MONITOR_CONTROL_ARP_SHIFT;
-       ret = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_MONITOR_CONTROL, reg);
-       if (ret)
-               return ret;
-
-       /* Disable remote management for now, and set the switch's
-        * DSA device number.
-        */
-       return mv88e6xxx_reg_write(ds, REG_GLOBAL, 0x1c, ds->index & 0x1f);
-}
-
-static int mv88e6352_setup(struct dsa_switch *ds)
-{
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-       int ret;
-
-       ret = mv88e6xxx_setup_common(ds);
-       if (ret < 0)
-               return ret;
-
-       mutex_init(&ps->eeprom_mutex);
-
-       ret = mv88e6xxx_switch_reset(ds, true);
-       if (ret < 0)
-               return ret;
-
-       ret = mv88e6352_setup_global(ds);
-       if (ret < 0)
-               return ret;
-
-       return mv88e6xxx_setup_ports(ds);
-}
-
-static int mv88e6352_read_eeprom_word(struct dsa_switch *ds, int addr)
-{
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-       int ret;
-
-       mutex_lock(&ps->eeprom_mutex);
-
-       ret = mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
-                                 GLOBAL2_EEPROM_OP_READ |
-                                 (addr & GLOBAL2_EEPROM_OP_ADDR_MASK));
-       if (ret < 0)
-               goto error;
-
-       ret = mv88e6xxx_eeprom_busy_wait(ds);
-       if (ret < 0)
-               goto error;
-
-       ret = mv88e6xxx_reg_read(ds, REG_GLOBAL2, GLOBAL2_EEPROM_DATA);
-error:
-       mutex_unlock(&ps->eeprom_mutex);
-       return ret;
-}
-
-static int mv88e6352_get_eeprom(struct dsa_switch *ds,
-                               struct ethtool_eeprom *eeprom, u8 *data)
-{
-       int offset;
-       int len;
-       int ret;
-
-       offset = eeprom->offset;
-       len = eeprom->len;
-       eeprom->len = 0;
-
-       eeprom->magic = 0xc3ec4951;
-
-       ret = mv88e6xxx_eeprom_load_wait(ds);
-       if (ret < 0)
-               return ret;
-
-       if (offset & 1) {
-               int word;
-
-               word = mv88e6352_read_eeprom_word(ds, offset >> 1);
-               if (word < 0)
-                       return word;
-
-               *data++ = (word >> 8) & 0xff;
-
-               offset++;
-               len--;
-               eeprom->len++;
-       }
-
-       while (len >= 2) {
-               int word;
-
-               word = mv88e6352_read_eeprom_word(ds, offset >> 1);
-               if (word < 0)
-                       return word;
-
-               *data++ = word & 0xff;
-               *data++ = (word >> 8) & 0xff;
-
-               offset += 2;
-               len -= 2;
-               eeprom->len += 2;
-       }
-
-       if (len) {
-               int word;
-
-               word = mv88e6352_read_eeprom_word(ds, offset >> 1);
-               if (word < 0)
-                       return word;
-
-               *data++ = word & 0xff;
-
-               offset++;
-               len--;
-               eeprom->len++;
-       }
-
-       return 0;
-}
-
-static int mv88e6352_eeprom_is_readonly(struct dsa_switch *ds)
-{
-       int ret;
-
-       ret = mv88e6xxx_reg_read(ds, REG_GLOBAL2, GLOBAL2_EEPROM_OP);
-       if (ret < 0)
-               return ret;
-
-       if (!(ret & GLOBAL2_EEPROM_OP_WRITE_EN))
-               return -EROFS;
-
-       return 0;
-}
-
-static int mv88e6352_write_eeprom_word(struct dsa_switch *ds, int addr,
-                                      u16 data)
-{
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-       int ret;
-
-       mutex_lock(&ps->eeprom_mutex);
-
-       ret = mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_EEPROM_DATA, data);
-       if (ret < 0)
-               goto error;
-
-       ret = mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
-                                 GLOBAL2_EEPROM_OP_WRITE |
-                                 (addr & GLOBAL2_EEPROM_OP_ADDR_MASK));
-       if (ret < 0)
-               goto error;
-
-       ret = mv88e6xxx_eeprom_busy_wait(ds);
-error:
-       mutex_unlock(&ps->eeprom_mutex);
-       return ret;
-}
-
-static int mv88e6352_set_eeprom(struct dsa_switch *ds,
-                               struct ethtool_eeprom *eeprom, u8 *data)
-{
-       int offset;
-       int ret;
-       int len;
-
-       if (eeprom->magic != 0xc3ec4951)
-               return -EINVAL;
-
-       ret = mv88e6352_eeprom_is_readonly(ds);
-       if (ret)
-               return ret;
-
-       offset = eeprom->offset;
-       len = eeprom->len;
-       eeprom->len = 0;
-
-       ret = mv88e6xxx_eeprom_load_wait(ds);
-       if (ret < 0)
-               return ret;
-
-       if (offset & 1) {
-               int word;
-
-               word = mv88e6352_read_eeprom_word(ds, offset >> 1);
-               if (word < 0)
-                       return word;
-
-               word = (*data++ << 8) | (word & 0xff);
-
-               ret = mv88e6352_write_eeprom_word(ds, offset >> 1, word);
-               if (ret < 0)
-                       return ret;
-
-               offset++;
-               len--;
-               eeprom->len++;
-       }
-
-       while (len >= 2) {
-               int word;
-
-               word = *data++;
-               word |= *data++ << 8;
-
-               ret = mv88e6352_write_eeprom_word(ds, offset >> 1, word);
-               if (ret < 0)
-                       return ret;
-
-               offset += 2;
-               len -= 2;
-               eeprom->len += 2;
-       }
-
-       if (len) {
-               int word;
-
-               word = mv88e6352_read_eeprom_word(ds, offset >> 1);
-               if (word < 0)
-                       return word;
-
-               word = (word & 0xff00) | *data++;
-
-               ret = mv88e6352_write_eeprom_word(ds, offset >> 1, word);
-               if (ret < 0)
-                       return ret;
-
-               offset++;
-               len--;
-               eeprom->len++;
-       }
-
-       return 0;
-}
-
-struct dsa_switch_driver mv88e6352_switch_driver = {
-       .tag_protocol           = DSA_TAG_PROTO_EDSA,
-       .probe                  = mv88e6352_drv_probe,
-       .setup                  = mv88e6352_setup,
-       .set_addr               = mv88e6xxx_set_addr_indirect,
-       .phy_read               = mv88e6xxx_phy_read_indirect,
-       .phy_write              = mv88e6xxx_phy_write_indirect,
-       .get_strings            = mv88e6xxx_get_strings,
-       .get_ethtool_stats      = mv88e6xxx_get_ethtool_stats,
-       .get_sset_count         = mv88e6xxx_get_sset_count,
-       .adjust_link            = mv88e6xxx_adjust_link,
-       .set_eee                = mv88e6xxx_set_eee,
-       .get_eee                = mv88e6xxx_get_eee,
-#ifdef CONFIG_NET_DSA_HWMON
-       .get_temp               = mv88e6xxx_get_temp,
-       .get_temp_limit         = mv88e6xxx_get_temp_limit,
-       .set_temp_limit         = mv88e6xxx_set_temp_limit,
-       .get_temp_alarm         = mv88e6xxx_get_temp_alarm,
-#endif
-       .get_eeprom             = mv88e6352_get_eeprom,
-       .set_eeprom             = mv88e6352_set_eeprom,
-       .get_regs_len           = mv88e6xxx_get_regs_len,
-       .get_regs               = mv88e6xxx_get_regs,
-       .port_bridge_join       = mv88e6xxx_port_bridge_join,
-       .port_bridge_leave      = mv88e6xxx_port_bridge_leave,
-       .port_stp_state_set     = mv88e6xxx_port_stp_state_set,
-       .port_vlan_filtering    = mv88e6xxx_port_vlan_filtering,
-       .port_vlan_prepare      = mv88e6xxx_port_vlan_prepare,
-       .port_vlan_add          = mv88e6xxx_port_vlan_add,
-       .port_vlan_del          = mv88e6xxx_port_vlan_del,
-       .port_vlan_dump         = mv88e6xxx_port_vlan_dump,
-       .port_fdb_prepare       = mv88e6xxx_port_fdb_prepare,
-       .port_fdb_add           = mv88e6xxx_port_fdb_add,
-       .port_fdb_del           = mv88e6xxx_port_fdb_del,
-       .port_fdb_dump          = mv88e6xxx_port_fdb_dump,
-};
-
-MODULE_ALIAS("platform:mv88e6172");
-MODULE_ALIAS("platform:mv88e6176");
-MODULE_ALIAS("platform:mv88e6320");
-MODULE_ALIAS("platform:mv88e6321");
-MODULE_ALIAS("platform:mv88e6352");
index 028f92f..1e5ca8e 100644
 #include <net/switchdev.h>
 #include "mv88e6xxx.h"
 
-static void assert_smi_lock(struct dsa_switch *ds)
+static void assert_smi_lock(struct mv88e6xxx_priv_state *ps)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
        if (unlikely(!mutex_is_locked(&ps->smi_mutex))) {
-               dev_err(ds->master_dev, "SMI lock not held!\n");
+               dev_err(ps->dev, "SMI lock not held!\n");
                dump_stack();
        }
 }
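
The assert above anchors a file-wide convention that the rest of this patch leans on: underscore-prefixed helpers require ps->smi_mutex to be held, and the public wrappers take the lock around them. A minimal sketch of the pattern, with hypothetical example_read names (the real pairs follow below):

static int _example_read(struct mv88e6xxx_priv_state *ps, int addr, int reg)
{
        assert_smi_lock(ps);    /* caller must hold ps->smi_mutex */
        return __mv88e6xxx_reg_read(ps->bus, ps->sw_addr, addr, reg);
}

static int example_read(struct mv88e6xxx_priv_state *ps, int addr, int reg)
{
        int ret;

        mutex_lock(&ps->smi_mutex);
        ret = _example_read(ps, addr, reg);
        mutex_unlock(&ps->smi_mutex);

        return ret;
}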
@@ -92,30 +90,29 @@ static int __mv88e6xxx_reg_read(struct mii_bus *bus, int sw_addr, int addr,
        return ret & 0xffff;
 }
 
-static int _mv88e6xxx_reg_read(struct dsa_switch *ds, int addr, int reg)
+static int _mv88e6xxx_reg_read(struct mv88e6xxx_priv_state *ps,
+                              int addr, int reg)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        int ret;
 
-       assert_smi_lock(ds);
+       assert_smi_lock(ps);
 
        ret = __mv88e6xxx_reg_read(ps->bus, ps->sw_addr, addr, reg);
        if (ret < 0)
                return ret;
 
-       dev_dbg(ds->master_dev, "<- addr: 0x%.2x reg: 0x%.2x val: 0x%.4x\n",
+       dev_dbg(ps->dev, "<- addr: 0x%.2x reg: 0x%.2x val: 0x%.4x\n",
                addr, reg, ret);
 
        return ret;
 }
 
-int mv88e6xxx_reg_read(struct dsa_switch *ds, int addr, int reg)
+int mv88e6xxx_reg_read(struct mv88e6xxx_priv_state *ps, int addr, int reg)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        int ret;
 
        mutex_lock(&ps->smi_mutex);
-       ret = _mv88e6xxx_reg_read(ds, addr, reg);
+       ret = _mv88e6xxx_reg_read(ps, addr, reg);
        mutex_unlock(&ps->smi_mutex);
 
        return ret;
@@ -153,51 +150,51 @@ static int __mv88e6xxx_reg_write(struct mii_bus *bus, int sw_addr, int addr,
        return 0;
 }
 
-static int _mv88e6xxx_reg_write(struct dsa_switch *ds, int addr, int reg,
-                               u16 val)
+static int _mv88e6xxx_reg_write(struct mv88e6xxx_priv_state *ps, int addr,
+                               int reg, u16 val)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
-       assert_smi_lock(ds);
+       assert_smi_lock(ps);
 
-       dev_dbg(ds->master_dev, "-> addr: 0x%.2x reg: 0x%.2x val: 0x%.4x\n",
+       dev_dbg(ps->dev, "-> addr: 0x%.2x reg: 0x%.2x val: 0x%.4x\n",
                addr, reg, val);
 
        return __mv88e6xxx_reg_write(ps->bus, ps->sw_addr, addr, reg, val);
 }
 
-int mv88e6xxx_reg_write(struct dsa_switch *ds, int addr, int reg, u16 val)
+int mv88e6xxx_reg_write(struct mv88e6xxx_priv_state *ps, int addr,
+                       int reg, u16 val)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        int ret;
 
        mutex_lock(&ps->smi_mutex);
-       ret = _mv88e6xxx_reg_write(ds, addr, reg, val);
+       ret = _mv88e6xxx_reg_write(ps, addr, reg, val);
        mutex_unlock(&ps->smi_mutex);
 
        return ret;
 }
 
-int mv88e6xxx_set_addr_direct(struct dsa_switch *ds, u8 *addr)
+static int mv88e6xxx_set_addr_direct(struct dsa_switch *ds, u8 *addr)
 {
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        int err;
 
-       err = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_MAC_01,
+       err = mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_MAC_01,
                                  (addr[0] << 8) | addr[1]);
        if (err)
                return err;
 
-       err = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_MAC_23,
+       err = mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_MAC_23,
                                  (addr[2] << 8) | addr[3]);
        if (err)
                return err;
 
-       return mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_MAC_45,
+       return mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_MAC_45,
                                   (addr[4] << 8) | addr[5]);
 }
 
-int mv88e6xxx_set_addr_indirect(struct dsa_switch *ds, u8 *addr)
+static int mv88e6xxx_set_addr_indirect(struct dsa_switch *ds, u8 *addr)
 {
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        int ret;
        int i;
 
@@ -205,7 +202,7 @@ int mv88e6xxx_set_addr_indirect(struct dsa_switch *ds, u8 *addr)
                int j;
 
                /* Write the MAC address byte. */
-               ret = mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_SWITCH_MAC,
+               ret = mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_SWITCH_MAC,
                                          GLOBAL2_SWITCH_MAC_BUSY |
                                          (i << 8) | addr[i]);
                if (ret)
@@ -213,7 +210,7 @@ int mv88e6xxx_set_addr_indirect(struct dsa_switch *ds, u8 *addr)
 
                /* Wait for the write to complete. */
                for (j = 0; j < 16; j++) {
-                       ret = mv88e6xxx_reg_read(ds, REG_GLOBAL2,
+                       ret = mv88e6xxx_reg_read(ps, REG_GLOBAL2,
                                                 GLOBAL2_SWITCH_MAC);
                        if (ret < 0)
                                return ret;
@@ -228,39 +225,49 @@ int mv88e6xxx_set_addr_indirect(struct dsa_switch *ds, u8 *addr)
        return 0;
 }
 
-static int _mv88e6xxx_phy_read(struct dsa_switch *ds, int addr, int regnum)
+int mv88e6xxx_set_addr(struct dsa_switch *ds, u8 *addr)
+{
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+
+       if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_SWITCH_MAC))
+               return mv88e6xxx_set_addr_indirect(ds, addr);
+       else
+               return mv88e6xxx_set_addr_direct(ds, addr);
+}
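
mv88e6xxx_has() itself is outside this hunk; judging by the MV88E6XXX_FLAG_* names used here and below, a reasonable assumption is that per-chip capabilities sit in a flags bitmask on the info structure, so the test reduces to a bit check:

/* Assumed shape of the capability test; the real definition lives in
 * the header, not in this hunk.
 */
static bool mv88e6xxx_has(struct mv88e6xxx_priv_state *ps,
                          unsigned long flags)
{
        return (ps->info->flags & flags) == flags;
}

This is what lets one driver serve every family: entry points probe the flag at run time, as mv88e6xxx_set_addr() does above, instead of compiling per-model variants.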
+
+static int _mv88e6xxx_phy_read(struct mv88e6xxx_priv_state *ps, int addr,
+                              int regnum)
 {
        if (addr >= 0)
-               return _mv88e6xxx_reg_read(ds, addr, regnum);
+               return _mv88e6xxx_reg_read(ps, addr, regnum);
        return 0xffff;
 }
 
-static int _mv88e6xxx_phy_write(struct dsa_switch *ds, int addr, int regnum,
-                               u16 val)
+static int _mv88e6xxx_phy_write(struct mv88e6xxx_priv_state *ps, int addr,
+                               int regnum, u16 val)
 {
        if (addr >= 0)
-               return _mv88e6xxx_reg_write(ds, addr, regnum, val);
+               return _mv88e6xxx_reg_write(ps, addr, regnum, val);
        return 0;
 }
 
-#ifdef CONFIG_NET_DSA_MV88E6XXX_NEED_PPU
-static int mv88e6xxx_ppu_disable(struct dsa_switch *ds)
+static int mv88e6xxx_ppu_disable(struct mv88e6xxx_priv_state *ps)
 {
        int ret;
        unsigned long timeout;
 
-       ret = mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_CONTROL);
+       ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_CONTROL);
        if (ret < 0)
                return ret;
 
-       ret = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_CONTROL,
-                                 ret & ~GLOBAL_CONTROL_PPU_ENABLE);
+       ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_CONTROL,
+                                  ret & ~GLOBAL_CONTROL_PPU_ENABLE);
        if (ret)
                return ret;
 
        timeout = jiffies + 1 * HZ;
        while (time_before(jiffies, timeout)) {
-               ret = mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_STATUS);
+               ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_STATUS);
                if (ret < 0)
                        return ret;
 
@@ -273,23 +280,23 @@ static int mv88e6xxx_ppu_disable(struct dsa_switch *ds)
        return -ETIMEDOUT;
 }
 
-static int mv88e6xxx_ppu_enable(struct dsa_switch *ds)
+static int mv88e6xxx_ppu_enable(struct mv88e6xxx_priv_state *ps)
 {
        int ret, err;
        unsigned long timeout;
 
-       ret = mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_CONTROL);
+       ret = mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_CONTROL);
        if (ret < 0)
                return ret;
 
-       err = mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_CONTROL,
+       err = mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_CONTROL,
                                  ret | GLOBAL_CONTROL_PPU_ENABLE);
        if (err)
                return err;
 
        timeout = jiffies + 1 * HZ;
        while (time_before(jiffies, timeout)) {
-               ret = mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_STATUS);
+               ret = mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_STATUS);
                if (ret < 0)
                        return ret;
 
@@ -308,9 +315,7 @@ static void mv88e6xxx_ppu_reenable_work(struct work_struct *ugly)
 
        ps = container_of(ugly, struct mv88e6xxx_priv_state, ppu_work);
        if (mutex_trylock(&ps->ppu_mutex)) {
-               struct dsa_switch *ds = ps->ds;
-
-               if (mv88e6xxx_ppu_enable(ds) == 0)
+               if (mv88e6xxx_ppu_enable(ps) == 0)
                        ps->ppu_disabled = 0;
                mutex_unlock(&ps->ppu_mutex);
        }
@@ -323,9 +328,8 @@ static void mv88e6xxx_ppu_reenable_timer(unsigned long _ps)
        schedule_work(&ps->ppu_work);
 }
 
-static int mv88e6xxx_ppu_access_get(struct dsa_switch *ds)
+static int mv88e6xxx_ppu_access_get(struct mv88e6xxx_priv_state *ps)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        int ret;
 
        mutex_lock(&ps->ppu_mutex);
@@ -336,7 +340,7 @@ static int mv88e6xxx_ppu_access_get(struct dsa_switch *ds)
         * it.
         */
        if (!ps->ppu_disabled) {
-               ret = mv88e6xxx_ppu_disable(ds);
+               ret = mv88e6xxx_ppu_disable(ps);
                if (ret < 0) {
                        mutex_unlock(&ps->ppu_mutex);
                        return ret;
@@ -350,19 +354,15 @@ static int mv88e6xxx_ppu_access_get(struct dsa_switch *ds)
        return ret;
 }
 
-static void mv88e6xxx_ppu_access_put(struct dsa_switch *ds)
+static void mv88e6xxx_ppu_access_put(struct mv88e6xxx_priv_state *ps)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
        /* Schedule a timer to re-enable the PHY polling unit. */
        mod_timer(&ps->ppu_timer, jiffies + msecs_to_jiffies(10));
        mutex_unlock(&ps->ppu_mutex);
 }
 
-void mv88e6xxx_ppu_state_init(struct dsa_switch *ds)
+void mv88e6xxx_ppu_state_init(struct mv88e6xxx_priv_state *ps)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
        mutex_init(&ps->ppu_mutex);
        INIT_WORK(&ps->ppu_work, mv88e6xxx_ppu_reenable_work);
        init_timer(&ps->ppu_timer);
@@ -370,112 +370,94 @@ void mv88e6xxx_ppu_state_init(struct dsa_switch *ds)
        ps->ppu_timer.function = mv88e6xxx_ppu_reenable_timer;
 }
 
-int mv88e6xxx_phy_read_ppu(struct dsa_switch *ds, int addr, int regnum)
+static int mv88e6xxx_phy_read_ppu(struct mv88e6xxx_priv_state *ps, int addr,
+                                 int regnum)
 {
        int ret;
 
-       ret = mv88e6xxx_ppu_access_get(ds);
+       ret = mv88e6xxx_ppu_access_get(ps);
        if (ret >= 0) {
-               ret = mv88e6xxx_reg_read(ds, addr, regnum);
-               mv88e6xxx_ppu_access_put(ds);
+               ret = _mv88e6xxx_reg_read(ps, addr, regnum);
+               mv88e6xxx_ppu_access_put(ps);
        }
 
        return ret;
 }
 
-int mv88e6xxx_phy_write_ppu(struct dsa_switch *ds, int addr,
-                           int regnum, u16 val)
+static int mv88e6xxx_phy_write_ppu(struct mv88e6xxx_priv_state *ps, int addr,
+                                  int regnum, u16 val)
 {
        int ret;
 
-       ret = mv88e6xxx_ppu_access_get(ds);
+       ret = mv88e6xxx_ppu_access_get(ps);
        if (ret >= 0) {
-               ret = mv88e6xxx_reg_write(ds, addr, regnum, val);
-               mv88e6xxx_ppu_access_put(ds);
+               ret = _mv88e6xxx_reg_write(ps, addr, regnum, val);
+               mv88e6xxx_ppu_access_put(ps);
        }
 
        return ret;
 }
-#endif
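
Dropping the CONFIG_NET_DSA_MV88E6XXX_NEED_PPU guard makes the PPU helpers unconditional, so the locking subtlety they carry is worth spelling out: ppu_timer fires in softirq context, where ppu_mutex cannot be taken, so the timer handler only schedules ppu_work, and the actual re-enable runs from the work item in process context. A minimal sketch of that deferral, with hypothetical names:

static void example_reenable_work(struct work_struct *work)
{
        /* process context: sleeping locks such as ppu_mutex are fine */
}

static void example_reenable_timer(unsigned long data)
{
        struct work_struct *work = (struct work_struct *)data;

        /* softirq context: must not sleep, so hand off to a work item */
        schedule_work(work);
}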
 
-static bool mv88e6xxx_6065_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6065_family(struct mv88e6xxx_priv_state *ps)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
        return ps->info->family == MV88E6XXX_FAMILY_6065;
 }
 
-static bool mv88e6xxx_6095_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6095_family(struct mv88e6xxx_priv_state *ps)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
        return ps->info->family == MV88E6XXX_FAMILY_6095;
 }
 
-static bool mv88e6xxx_6097_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6097_family(struct mv88e6xxx_priv_state *ps)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
        return ps->info->family == MV88E6XXX_FAMILY_6097;
 }
 
-static bool mv88e6xxx_6165_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6165_family(struct mv88e6xxx_priv_state *ps)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
        return ps->info->family == MV88E6XXX_FAMILY_6165;
 }
 
-static bool mv88e6xxx_6185_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6185_family(struct mv88e6xxx_priv_state *ps)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
        return ps->info->family == MV88E6XXX_FAMILY_6185;
 }
 
-static bool mv88e6xxx_6320_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6320_family(struct mv88e6xxx_priv_state *ps)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
        return ps->info->family == MV88E6XXX_FAMILY_6320;
 }
 
-static bool mv88e6xxx_6351_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6351_family(struct mv88e6xxx_priv_state *ps)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
        return ps->info->family == MV88E6XXX_FAMILY_6351;
 }
 
-static bool mv88e6xxx_6352_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6352_family(struct mv88e6xxx_priv_state *ps)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
        return ps->info->family == MV88E6XXX_FAMILY_6352;
 }
 
-static unsigned int mv88e6xxx_num_databases(struct dsa_switch *ds)
+static unsigned int mv88e6xxx_num_databases(struct mv88e6xxx_priv_state *ps)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
        return ps->info->num_databases;
 }
 
-static bool mv88e6xxx_has_fid_reg(struct dsa_switch *ds)
+static bool mv88e6xxx_has_fid_reg(struct mv88e6xxx_priv_state *ps)
 {
        /* Does the device have dedicated FID registers for ATU and VTU ops? */
-       if (mv88e6xxx_6097_family(ds) || mv88e6xxx_6165_family(ds) ||
-           mv88e6xxx_6351_family(ds) || mv88e6xxx_6352_family(ds))
+       if (mv88e6xxx_6097_family(ps) || mv88e6xxx_6165_family(ps) ||
+           mv88e6xxx_6351_family(ps) || mv88e6xxx_6352_family(ps))
                return true;
 
        return false;
 }
 
-static bool mv88e6xxx_has_stu(struct dsa_switch *ds)
+static bool mv88e6xxx_has_stu(struct mv88e6xxx_priv_state *ps)
 {
        /* Does the device have STU and dedicated SID registers for VTU ops? */
-       if (mv88e6xxx_6097_family(ds) || mv88e6xxx_6165_family(ds) ||
-           mv88e6xxx_6351_family(ds) || mv88e6xxx_6352_family(ds))
+       if (mv88e6xxx_6097_family(ps) || mv88e6xxx_6165_family(ps) ||
+           mv88e6xxx_6351_family(ps) || mv88e6xxx_6352_family(ps))
                return true;
 
        return false;
@@ -485,8 +467,8 @@ static bool mv88e6xxx_has_stu(struct dsa_switch *ds)
  * phy. However, in the case of a fixed link phy, we force the port
  * settings from the fixed link settings.
  */
-void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
-                          struct phy_device *phydev)
+static void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
+                                 struct phy_device *phydev)
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        u32 reg;
@@ -497,7 +479,7 @@ void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
 
        mutex_lock(&ps->smi_mutex);
 
-       ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_PCS_CTRL);
+       ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_PCS_CTRL);
        if (ret < 0)
                goto out;
 
@@ -511,7 +493,7 @@ void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
        if (phydev->link)
                reg |= PORT_PCS_CTRL_LINK_UP;
 
-       if (mv88e6xxx_6065_family(ds) && phydev->speed > SPEED_100)
+       if (mv88e6xxx_6065_family(ps) && phydev->speed > SPEED_100)
                goto out;
 
        switch (phydev->speed) {
@@ -533,7 +515,7 @@ void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
        if (phydev->duplex == DUPLEX_FULL)
                reg |= PORT_PCS_CTRL_DUPLEX_FULL;
 
-       if ((mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds)) &&
+       if ((mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps)) &&
            (port >= ps->info->num_ports - 2)) {
                if (phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID)
                        reg |= PORT_PCS_CTRL_RGMII_DELAY_RXCLK;
@@ -543,19 +525,19 @@ void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
                        reg |= (PORT_PCS_CTRL_RGMII_DELAY_RXCLK |
                                PORT_PCS_CTRL_RGMII_DELAY_TXCLK);
        }
-       _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_PCS_CTRL, reg);
+       _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_PCS_CTRL, reg);
 
 out:
        mutex_unlock(&ps->smi_mutex);
 }
 
-static int _mv88e6xxx_stats_wait(struct dsa_switch *ds)
+static int _mv88e6xxx_stats_wait(struct mv88e6xxx_priv_state *ps)
 {
        int ret;
        int i;
 
        for (i = 0; i < 10; i++) {
-               ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_STATS_OP);
+               ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_STATS_OP);
                if ((ret & GLOBAL_STATS_OP_BUSY) == 0)
                        return 0;
        }
@@ -563,52 +545,54 @@ static int _mv88e6xxx_stats_wait(struct dsa_switch *ds)
        return -ETIMEDOUT;
 }
 
-static int _mv88e6xxx_stats_snapshot(struct dsa_switch *ds, int port)
+static int _mv88e6xxx_stats_snapshot(struct mv88e6xxx_priv_state *ps,
+                                    int port)
 {
        int ret;
 
-       if (mv88e6xxx_6320_family(ds) || mv88e6xxx_6352_family(ds))
+       if (mv88e6xxx_6320_family(ps) || mv88e6xxx_6352_family(ps))
                port = (port + 1) << 5;
 
        /* Snapshot the hardware statistics counters for this port. */
-       ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_STATS_OP,
+       ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_STATS_OP,
                                   GLOBAL_STATS_OP_CAPTURE_PORT |
                                   GLOBAL_STATS_OP_HIST_RX_TX | port);
        if (ret < 0)
                return ret;
 
        /* Wait for the snapshotting to complete. */
-       ret = _mv88e6xxx_stats_wait(ds);
+       ret = _mv88e6xxx_stats_wait(ps);
        if (ret < 0)
                return ret;
 
        return 0;
 }
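
The statistics unit is capture-then-read: the snapshot latches every counter for one port, after which individual 32-bit counters are pulled out of the capture registers. A locked-context usage sketch (ps->smi_mutex held, as in mv88e6xxx_get_ethtool_stats() below; the function name is hypothetical):

static int example_read_port_counter(struct mv88e6xxx_priv_state *ps,
                                     int port, int stat, u32 *val)
{
        int err;

        err = _mv88e6xxx_stats_snapshot(ps, port);
        if (err < 0)
                return err;

        _mv88e6xxx_stats_read(ps, stat, val);

        return 0;
}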
 
-static void _mv88e6xxx_stats_read(struct dsa_switch *ds, int stat, u32 *val)
+static void _mv88e6xxx_stats_read(struct mv88e6xxx_priv_state *ps,
+                                 int stat, u32 *val)
 {
        u32 _val;
        int ret;
 
        *val = 0;
 
-       ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_STATS_OP,
+       ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_STATS_OP,
                                   GLOBAL_STATS_OP_READ_CAPTURED |
                                   GLOBAL_STATS_OP_HIST_RX_TX | stat);
        if (ret < 0)
                return;
 
-       ret = _mv88e6xxx_stats_wait(ds);
+       ret = _mv88e6xxx_stats_wait(ps);
        if (ret < 0)
                return;
 
-       ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_STATS_COUNTER_32);
+       ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_STATS_COUNTER_32);
        if (ret < 0)
                return;
 
        _val = ret << 16;
 
-       ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_STATS_COUNTER_01);
+       ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_STATS_COUNTER_01);
        if (ret < 0)
                return;
 
@@ -677,26 +661,26 @@ static struct mv88e6xxx_hw_stat mv88e6xxx_hw_stats[] = {
        { "out_management",     4, 0x1f | GLOBAL_STATS_OP_BANK_1, BANK1, },
 };
 
-static bool mv88e6xxx_has_stat(struct dsa_switch *ds,
+static bool mv88e6xxx_has_stat(struct mv88e6xxx_priv_state *ps,
                               struct mv88e6xxx_hw_stat *stat)
 {
        switch (stat->type) {
        case BANK0:
                return true;
        case BANK1:
-               return mv88e6xxx_6320_family(ds);
+               return mv88e6xxx_6320_family(ps);
        case PORT:
-               return mv88e6xxx_6095_family(ds) ||
-                       mv88e6xxx_6185_family(ds) ||
-                       mv88e6xxx_6097_family(ds) ||
-                       mv88e6xxx_6165_family(ds) ||
-                       mv88e6xxx_6351_family(ds) ||
-                       mv88e6xxx_6352_family(ds);
+               return mv88e6xxx_6095_family(ps) ||
+                       mv88e6xxx_6185_family(ps) ||
+                       mv88e6xxx_6097_family(ps) ||
+                       mv88e6xxx_6165_family(ps) ||
+                       mv88e6xxx_6351_family(ps) ||
+                       mv88e6xxx_6352_family(ps);
        }
        return false;
 }
 
-static uint64_t _mv88e6xxx_get_ethtool_stat(struct dsa_switch *ds,
+static uint64_t _mv88e6xxx_get_ethtool_stat(struct mv88e6xxx_priv_state *ps,
                                            struct mv88e6xxx_hw_stat *s,
                                            int port)
 {
@@ -707,13 +691,13 @@ static uint64_t _mv88e6xxx_get_ethtool_stat(struct dsa_switch *ds,
 
        switch (s->type) {
        case PORT:
-               ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), s->reg);
+               ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), s->reg);
                if (ret < 0)
                        return UINT64_MAX;
 
                low = ret;
                if (s->sizeof_stat == 4) {
-                       ret = _mv88e6xxx_reg_read(ds, REG_PORT(port),
+                       ret = _mv88e6xxx_reg_read(ps, REG_PORT(port),
                                                  s->reg + 1);
                        if (ret < 0)
                                return UINT64_MAX;
@@ -722,22 +706,24 @@ static uint64_t _mv88e6xxx_get_ethtool_stat(struct dsa_switch *ds,
                break;
        case BANK0:
        case BANK1:
-               _mv88e6xxx_stats_read(ds, s->reg, &low);
+               _mv88e6xxx_stats_read(ps, s->reg, &low);
                if (s->sizeof_stat == 8)
-                       _mv88e6xxx_stats_read(ds, s->reg + 1, &high);
+                       _mv88e6xxx_stats_read(ps, s->reg + 1, &high);
        }
        value = (((u64)high) << 16) | low;
        return value;
 }
 
-void mv88e6xxx_get_strings(struct dsa_switch *ds, int port, uint8_t *data)
+static void mv88e6xxx_get_strings(struct dsa_switch *ds, int port,
+                                 uint8_t *data)
 {
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        struct mv88e6xxx_hw_stat *stat;
        int i, j;
 
        for (i = 0, j = 0; i < ARRAY_SIZE(mv88e6xxx_hw_stats); i++) {
                stat = &mv88e6xxx_hw_stats[i];
-               if (mv88e6xxx_has_stat(ds, stat)) {
+               if (mv88e6xxx_has_stat(ps, stat)) {
                        memcpy(data + j * ETH_GSTRING_LEN, stat->string,
                               ETH_GSTRING_LEN);
                        j++;
@@ -745,22 +731,22 @@ void mv88e6xxx_get_strings(struct dsa_switch *ds, int port, uint8_t *data)
        }
 }
 
-int mv88e6xxx_get_sset_count(struct dsa_switch *ds)
+static int mv88e6xxx_get_sset_count(struct dsa_switch *ds)
 {
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        struct mv88e6xxx_hw_stat *stat;
        int i, j;
 
        for (i = 0, j = 0; i < ARRAY_SIZE(mv88e6xxx_hw_stats); i++) {
                stat = &mv88e6xxx_hw_stats[i];
-               if (mv88e6xxx_has_stat(ds, stat))
+               if (mv88e6xxx_has_stat(ps, stat))
                        j++;
        }
        return j;
 }
 
-void
-mv88e6xxx_get_ethtool_stats(struct dsa_switch *ds,
-                           int port, uint64_t *data)
+static void mv88e6xxx_get_ethtool_stats(struct dsa_switch *ds, int port,
+                                       uint64_t *data)
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        struct mv88e6xxx_hw_stat *stat;
@@ -769,15 +755,15 @@ mv88e6xxx_get_ethtool_stats(struct dsa_switch *ds,
 
        mutex_lock(&ps->smi_mutex);
 
-       ret = _mv88e6xxx_stats_snapshot(ds, port);
+       ret = _mv88e6xxx_stats_snapshot(ps, port);
        if (ret < 0) {
                mutex_unlock(&ps->smi_mutex);
                return;
        }
        for (i = 0, j = 0; i < ARRAY_SIZE(mv88e6xxx_hw_stats); i++) {
                stat = &mv88e6xxx_hw_stats[i];
-               if (mv88e6xxx_has_stat(ds, stat)) {
-                       data[j] = _mv88e6xxx_get_ethtool_stat(ds, stat, port);
+               if (mv88e6xxx_has_stat(ps, stat)) {
+                       data[j] = _mv88e6xxx_get_ethtool_stat(ps, stat, port);
                        j++;
                }
        }
@@ -785,14 +771,15 @@ mv88e6xxx_get_ethtool_stats(struct dsa_switch *ds,
        mutex_unlock(&ps->smi_mutex);
 }
 
-int mv88e6xxx_get_regs_len(struct dsa_switch *ds, int port)
+static int mv88e6xxx_get_regs_len(struct dsa_switch *ds, int port)
 {
        return 32 * sizeof(u16);
 }
 
-void mv88e6xxx_get_regs(struct dsa_switch *ds, int port,
-                       struct ethtool_regs *regs, void *_p)
+static void mv88e6xxx_get_regs(struct dsa_switch *ds, int port,
+                              struct ethtool_regs *regs, void *_p)
 {
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        u16 *p = _p;
        int i;
 
@@ -800,16 +787,20 @@ void mv88e6xxx_get_regs(struct dsa_switch *ds, int port,
 
        memset(p, 0xff, 32 * sizeof(u16));
 
+       mutex_lock(&ps->smi_mutex);
+
        for (i = 0; i < 32; i++) {
                int ret;
 
-               ret = mv88e6xxx_reg_read(ds, REG_PORT(port), i);
+               ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), i);
                if (ret >= 0)
                        p[i] = ret;
        }
+
+       mutex_unlock(&ps->smi_mutex);
 }
 
-static int _mv88e6xxx_wait(struct dsa_switch *ds, int reg, int offset,
+static int _mv88e6xxx_wait(struct mv88e6xxx_priv_state *ps, int reg, int offset,
                           u16 mask)
 {
        unsigned long timeout = jiffies + HZ / 10;
@@ -817,7 +808,7 @@ static int _mv88e6xxx_wait(struct dsa_switch *ds, int reg, int offset,
        while (time_before(jiffies, timeout)) {
                int ret;
 
-               ret = _mv88e6xxx_reg_read(ds, reg, offset);
+               ret = _mv88e6xxx_reg_read(ps, reg, offset);
                if (ret < 0)
                        return ret;
                if (!(ret & mask))
@@ -828,91 +819,310 @@ static int _mv88e6xxx_wait(struct dsa_switch *ds, int reg, int offset,
        return -ETIMEDOUT;
 }
 
-static int mv88e6xxx_wait(struct dsa_switch *ds, int reg, int offset, u16 mask)
+static int mv88e6xxx_wait(struct mv88e6xxx_priv_state *ps, int reg,
+                         int offset, u16 mask)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        int ret;
 
        mutex_lock(&ps->smi_mutex);
-       ret = _mv88e6xxx_wait(ds, reg, offset, mask);
+       ret = _mv88e6xxx_wait(ps, reg, offset, mask);
        mutex_unlock(&ps->smi_mutex);
 
        return ret;
 }
 
-static int _mv88e6xxx_phy_wait(struct dsa_switch *ds)
+static int _mv88e6xxx_phy_wait(struct mv88e6xxx_priv_state *ps)
 {
-       return _mv88e6xxx_wait(ds, REG_GLOBAL2, GLOBAL2_SMI_OP,
+       return _mv88e6xxx_wait(ps, REG_GLOBAL2, GLOBAL2_SMI_OP,
                               GLOBAL2_SMI_OP_BUSY);
 }
 
-int mv88e6xxx_eeprom_load_wait(struct dsa_switch *ds)
+static int mv88e6xxx_eeprom_load_wait(struct dsa_switch *ds)
 {
-       return mv88e6xxx_wait(ds, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+
+       return mv88e6xxx_wait(ps, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
                              GLOBAL2_EEPROM_OP_LOAD);
 }
 
-int mv88e6xxx_eeprom_busy_wait(struct dsa_switch *ds)
+static int mv88e6xxx_eeprom_busy_wait(struct dsa_switch *ds)
 {
-       return mv88e6xxx_wait(ds, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+
+       return mv88e6xxx_wait(ps, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
                              GLOBAL2_EEPROM_OP_BUSY);
 }
 
-static int _mv88e6xxx_atu_wait(struct dsa_switch *ds)
+static int mv88e6xxx_read_eeprom_word(struct dsa_switch *ds, int addr)
 {
-       return _mv88e6xxx_wait(ds, REG_GLOBAL, GLOBAL_ATU_OP,
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+       int ret;
+
+       mutex_lock(&ps->eeprom_mutex);
+
+       ret = mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
+                                 GLOBAL2_EEPROM_OP_READ |
+                                 (addr & GLOBAL2_EEPROM_OP_ADDR_MASK));
+       if (ret < 0)
+               goto error;
+
+       ret = mv88e6xxx_eeprom_busy_wait(ds);
+       if (ret < 0)
+               goto error;
+
+       ret = mv88e6xxx_reg_read(ps, REG_GLOBAL2, GLOBAL2_EEPROM_DATA);
+error:
+       mutex_unlock(&ps->eeprom_mutex);
+       return ret;
+}
+
+static int mv88e6xxx_get_eeprom(struct dsa_switch *ds,
+                               struct ethtool_eeprom *eeprom, u8 *data)
+{
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+       int offset;
+       int len;
+       int ret;
+
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_EEPROM))
+               return -EOPNOTSUPP;
+
+       offset = eeprom->offset;
+       len = eeprom->len;
+       eeprom->len = 0;
+
+       eeprom->magic = 0xc3ec4951;
+
+       ret = mv88e6xxx_eeprom_load_wait(ds);
+       if (ret < 0)
+               return ret;
+
+       if (offset & 1) {
+               int word;
+
+               word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
+               if (word < 0)
+                       return word;
+
+               *data++ = (word >> 8) & 0xff;
+
+               offset++;
+               len--;
+               eeprom->len++;
+       }
+
+       while (len >= 2) {
+               int word;
+
+               word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
+               if (word < 0)
+                       return word;
+
+               *data++ = word & 0xff;
+               *data++ = (word >> 8) & 0xff;
+
+               offset += 2;
+               len -= 2;
+               eeprom->len += 2;
+       }
+
+       if (len) {
+               int word;
+
+               word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
+               if (word < 0)
+                       return word;
+
+               *data++ = word & 0xff;
+
+               offset++;
+               len--;
+               eeprom->len++;
+       }
+
+       return 0;
+}
+
+static int mv88e6xxx_eeprom_is_readonly(struct dsa_switch *ds)
+{
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+       int ret;
+
+       ret = mv88e6xxx_reg_read(ps, REG_GLOBAL2, GLOBAL2_EEPROM_OP);
+       if (ret < 0)
+               return ret;
+
+       if (!(ret & GLOBAL2_EEPROM_OP_WRITE_EN))
+               return -EROFS;
+
+       return 0;
+}
+
+static int mv88e6xxx_write_eeprom_word(struct dsa_switch *ds, int addr,
+                                      u16 data)
+{
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+       int ret;
+
+       mutex_lock(&ps->eeprom_mutex);
+
+       ret = mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_EEPROM_DATA, data);
+       if (ret < 0)
+               goto error;
+
+       ret = mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
+                                 GLOBAL2_EEPROM_OP_WRITE |
+                                 (addr & GLOBAL2_EEPROM_OP_ADDR_MASK));
+       if (ret < 0)
+               goto error;
+
+       ret = mv88e6xxx_eeprom_busy_wait(ds);
+error:
+       mutex_unlock(&ps->eeprom_mutex);
+       return ret;
+}
+
+static int mv88e6xxx_set_eeprom(struct dsa_switch *ds,
+                               struct ethtool_eeprom *eeprom, u8 *data)
+{
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+       int offset;
+       int ret;
+       int len;
+
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_EEPROM))
+               return -EOPNOTSUPP;
+
+       if (eeprom->magic != 0xc3ec4951)
+               return -EINVAL;
+
+       ret = mv88e6xxx_eeprom_is_readonly(ds);
+       if (ret)
+               return ret;
+
+       offset = eeprom->offset;
+       len = eeprom->len;
+       eeprom->len = 0;
+
+       ret = mv88e6xxx_eeprom_load_wait(ds);
+       if (ret < 0)
+               return ret;
+
+       if (offset & 1) {
+               int word;
+
+               word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
+               if (word < 0)
+                       return word;
+
+               word = (*data++ << 8) | (word & 0xff);
+
+               ret = mv88e6xxx_write_eeprom_word(ds, offset >> 1, word);
+               if (ret < 0)
+                       return ret;
+
+               offset++;
+               len--;
+               eeprom->len++;
+       }
+
+       while (len >= 2) {
+               int word;
+
+               word = *data++;
+               word |= *data++ << 8;
+
+               ret = mv88e6xxx_write_eeprom_word(ds, offset >> 1, word);
+               if (ret < 0)
+                       return ret;
+
+               offset += 2;
+               len -= 2;
+               eeprom->len += 2;
+       }
+
+       if (len) {
+               int word;
+
+               word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
+               if (word < 0)
+                       return word;
+
+               word = (word & 0xff00) | *data++;
+
+               ret = mv88e6xxx_write_eeprom_word(ds, offset >> 1, word);
+               if (ret < 0)
+                       return ret;
+
+               offset++;
+               len--;
+               eeprom->len++;
+       }
+
+       return 0;
+}
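
The head/body/tail split in both EEPROM paths above follows from the device being word-addressed while ethtool hands over a byte stream, low byte first: byte b lives in word b >> 1, odd bytes in the high half. A worked example under that mapping:

/* Writing {0xAA, 0xBB, 0xCC} at byte offset 3:
 *
 *   word 1: read-modify-write, low byte kept, high byte <- 0xAA
 *   word 2: 0xBB | (0xCC << 8), written outright
 *
 * i.e. one head byte, one whole word, and an empty tail.
 */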
+
+static int _mv88e6xxx_atu_wait(struct mv88e6xxx_priv_state *ps)
+{
+       return _mv88e6xxx_wait(ps, REG_GLOBAL, GLOBAL_ATU_OP,
                               GLOBAL_ATU_OP_BUSY);
 }
 
-static int _mv88e6xxx_phy_read_indirect(struct dsa_switch *ds, int addr,
-                                       int regnum)
+static int _mv88e6xxx_phy_read_indirect(struct mv88e6xxx_priv_state *ps,
+                                       int addr, int regnum)
 {
        int ret;
 
-       ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_SMI_OP,
+       ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_SMI_OP,
                                   GLOBAL2_SMI_OP_22_READ | (addr << 5) |
                                   regnum);
        if (ret < 0)
                return ret;
 
-       ret = _mv88e6xxx_phy_wait(ds);
+       ret = _mv88e6xxx_phy_wait(ps);
        if (ret < 0)
                return ret;
 
-       return _mv88e6xxx_reg_read(ds, REG_GLOBAL2, GLOBAL2_SMI_DATA);
+       ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL2, GLOBAL2_SMI_DATA);
+
+       return ret;
 }
 
-static int _mv88e6xxx_phy_write_indirect(struct dsa_switch *ds, int addr,
-                                        int regnum, u16 val)
+static int _mv88e6xxx_phy_write_indirect(struct mv88e6xxx_priv_state *ps,
+                                        int addr, int regnum, u16 val)
 {
        int ret;
 
-       ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_SMI_DATA, val);
+       ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_SMI_DATA, val);
        if (ret < 0)
                return ret;
 
-       ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_SMI_OP,
+       ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_SMI_OP,
                                   GLOBAL2_SMI_OP_22_WRITE | (addr << 5) |
                                   regnum);
 
-       return _mv88e6xxx_phy_wait(ds);
+       return _mv88e6xxx_phy_wait(ps);
 }
 
-int mv88e6xxx_get_eee(struct dsa_switch *ds, int port, struct ethtool_eee *e)
+static int mv88e6xxx_get_eee(struct dsa_switch *ds, int port,
+                            struct ethtool_eee *e)
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        int reg;
 
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_EEE))
+               return -EOPNOTSUPP;
+
        mutex_lock(&ps->smi_mutex);
 
-       reg = _mv88e6xxx_phy_read_indirect(ds, port, 16);
+       reg = _mv88e6xxx_phy_read_indirect(ps, port, 16);
        if (reg < 0)
                goto out;
 
        e->eee_enabled = !!(reg & 0x0200);
        e->tx_lpi_enabled = !!(reg & 0x0100);
 
-       reg = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_STATUS);
+       reg = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_STATUS);
        if (reg < 0)
                goto out;
 
@@ -924,16 +1134,19 @@ out:
        return reg;
 }
 
-int mv88e6xxx_set_eee(struct dsa_switch *ds, int port,
-                     struct phy_device *phydev, struct ethtool_eee *e)
+static int mv88e6xxx_set_eee(struct dsa_switch *ds, int port,
+                            struct phy_device *phydev, struct ethtool_eee *e)
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        int reg;
        int ret;
 
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_EEE))
+               return -EOPNOTSUPP;
+
        mutex_lock(&ps->smi_mutex);
 
-       ret = _mv88e6xxx_phy_read_indirect(ds, port, 16);
+       ret = _mv88e6xxx_phy_read_indirect(ps, port, 16);
        if (ret < 0)
                goto out;
 
@@ -943,28 +1156,28 @@ int mv88e6xxx_set_eee(struct dsa_switch *ds, int port,
        if (e->tx_lpi_enabled)
                reg |= 0x0100;
 
-       ret = _mv88e6xxx_phy_write_indirect(ds, port, 16, reg);
+       ret = _mv88e6xxx_phy_write_indirect(ps, port, 16, reg);
 out:
        mutex_unlock(&ps->smi_mutex);
 
        return ret;
 }
 
-static int _mv88e6xxx_atu_cmd(struct dsa_switch *ds, u16 fid, u16 cmd)
+static int _mv88e6xxx_atu_cmd(struct mv88e6xxx_priv_state *ps, u16 fid, u16 cmd)
 {
        int ret;
 
-       if (mv88e6xxx_has_fid_reg(ds)) {
-               ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_ATU_FID, fid);
+       if (mv88e6xxx_has_fid_reg(ps)) {
+               ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_ATU_FID, fid);
                if (ret < 0)
                        return ret;
-       } else if (mv88e6xxx_num_databases(ds) == 256) {
+       } else if (mv88e6xxx_num_databases(ps) == 256) {
                /* ATU DBNum[7:4] are located in ATU Control 15:12 */
-               ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_ATU_CONTROL);
+               ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_ATU_CONTROL);
                if (ret < 0)
                        return ret;
 
-               ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_ATU_CONTROL,
+               ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_ATU_CONTROL,
                                           (ret & 0xfff) |
                                           ((fid << 8) & 0xf000));
                if (ret < 0)
@@ -974,14 +1187,14 @@ static int _mv88e6xxx_atu_cmd(struct dsa_switch *ds, u16 fid, u16 cmd)
                cmd |= fid & 0xf;
        }
 
-       ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_ATU_OP, cmd);
+       ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_ATU_OP, cmd);
        if (ret < 0)
                return ret;
 
-       return _mv88e6xxx_atu_wait(ds);
+       return _mv88e6xxx_atu_wait(ps);
 }
 
-static int _mv88e6xxx_atu_data_write(struct dsa_switch *ds,
+static int _mv88e6xxx_atu_data_write(struct mv88e6xxx_priv_state *ps,
                                     struct mv88e6xxx_atu_entry *entry)
 {
        u16 data = entry->state & GLOBAL_ATU_DATA_STATE_MASK;
@@ -1001,21 +1214,21 @@ static int _mv88e6xxx_atu_data_write(struct dsa_switch *ds,
                data |= (entry->portv_trunkid << shift) & mask;
        }
 
-       return _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_ATU_DATA, data);
+       return _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_ATU_DATA, data);
 }
 
-static int _mv88e6xxx_atu_flush_move(struct dsa_switch *ds,
+static int _mv88e6xxx_atu_flush_move(struct mv88e6xxx_priv_state *ps,
                                     struct mv88e6xxx_atu_entry *entry,
                                     bool static_too)
 {
        int op;
        int err;
 
-       err = _mv88e6xxx_atu_wait(ds);
+       err = _mv88e6xxx_atu_wait(ps);
        if (err)
                return err;
 
-       err = _mv88e6xxx_atu_data_write(ds, entry);
+       err = _mv88e6xxx_atu_data_write(ps, entry);
        if (err)
                return err;
 
@@ -1027,21 +1240,22 @@ static int _mv88e6xxx_atu_flush_move(struct dsa_switch *ds,
                        GLOBAL_ATU_OP_FLUSH_MOVE_NON_STATIC;
        }
 
-       return _mv88e6xxx_atu_cmd(ds, entry->fid, op);
+       return _mv88e6xxx_atu_cmd(ps, entry->fid, op);
 }
 
-static int _mv88e6xxx_atu_flush(struct dsa_switch *ds, u16 fid, bool static_too)
+static int _mv88e6xxx_atu_flush(struct mv88e6xxx_priv_state *ps,
+                               u16 fid, bool static_too)
 {
        struct mv88e6xxx_atu_entry entry = {
                .fid = fid,
                .state = 0, /* EntryState bits must be 0 */
        };
 
-       return _mv88e6xxx_atu_flush_move(ds, &entry, static_too);
+       return _mv88e6xxx_atu_flush_move(ps, &entry, static_too);
 }
 
-static int _mv88e6xxx_atu_move(struct dsa_switch *ds, u16 fid, int from_port,
-                              int to_port, bool static_too)
+static int _mv88e6xxx_atu_move(struct mv88e6xxx_priv_state *ps, u16 fid,
+                              int from_port, int to_port, bool static_too)
 {
        struct mv88e6xxx_atu_entry entry = {
                .trunk = false,
@@ -1055,14 +1269,14 @@ static int _mv88e6xxx_atu_move(struct dsa_switch *ds, u16 fid, int from_port,
        entry.portv_trunkid = (to_port & 0x0f) << 4;
        entry.portv_trunkid |= from_port & 0x0f;
 
-       return _mv88e6xxx_atu_flush_move(ds, &entry, static_too);
+       return _mv88e6xxx_atu_flush_move(ps, &entry, static_too);
 }
 
-static int _mv88e6xxx_atu_remove(struct dsa_switch *ds, u16 fid, int port,
-                                bool static_too)
+static int _mv88e6xxx_atu_remove(struct mv88e6xxx_priv_state *ps, u16 fid,
+                                int port, bool static_too)
 {
        /* Destination port 0xF means remove the entries */
-       return _mv88e6xxx_atu_move(ds, fid, port, 0x0f, static_too);
+       return _mv88e6xxx_atu_move(ps, fid, port, 0x0f, static_too);
 }
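
The move helper packs the destination port into bits 7:4 of portv_trunkid and the source port into bits 3:0; a destination of 0xf deletes instead of moving, which is all _mv88e6xxx_atu_remove() adds. A worked example:

/* _mv88e6xxx_atu_move(ps, 1, 2, 5, false):
 *   portv_trunkid = (5 << 4) | 2 = 0x52
 *   -> move FID 1 entries from port 2 to port 5
 *
 * _mv88e6xxx_atu_remove(ps, 1, 2, false):
 *   portv_trunkid = (0xf << 4) | 2 = 0xf2
 *   -> remove FID 1 entries learned on port 2
 */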
 
 static const char * const mv88e6xxx_port_state_names[] = {
@@ -1072,12 +1286,14 @@ static const char * const mv88e6xxx_port_state_names[] = {
        [PORT_CONTROL_STATE_FORWARDING] = "Forwarding",
 };
 
-static int _mv88e6xxx_port_state(struct dsa_switch *ds, int port, u8 state)
+static int _mv88e6xxx_port_state(struct mv88e6xxx_priv_state *ps, int port,
+                                u8 state)
 {
+       struct dsa_switch *ds = ps->ds;
        int reg, ret = 0;
        u8 oldstate;
 
-       reg = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_CONTROL);
+       reg = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_CONTROL);
        if (reg < 0)
                return reg;
 
@@ -1092,13 +1308,13 @@ static int _mv88e6xxx_port_state(struct dsa_switch *ds, int port, u8 state)
                     oldstate == PORT_CONTROL_STATE_FORWARDING)
                    && (state == PORT_CONTROL_STATE_DISABLED ||
                        state == PORT_CONTROL_STATE_BLOCKING)) {
-                       ret = _mv88e6xxx_atu_remove(ds, 0, port, false);
+                       ret = _mv88e6xxx_atu_remove(ps, 0, port, false);
                        if (ret)
                                return ret;
                }
 
                reg = (reg & ~PORT_CONTROL_STATE_MASK) | state;
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_CONTROL,
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_CONTROL,
                                           reg);
                if (ret)
                        return ret;
@@ -1111,11 +1327,12 @@ static int _mv88e6xxx_port_state(struct dsa_switch *ds, int port, u8 state)
        return ret;
 }
 
-static int _mv88e6xxx_port_based_vlan_map(struct dsa_switch *ds, int port)
+static int _mv88e6xxx_port_based_vlan_map(struct mv88e6xxx_priv_state *ps,
+                                         int port)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        struct net_device *bridge = ps->ports[port].bridge_dev;
        const u16 mask = (1 << ps->info->num_ports) - 1;
+       struct dsa_switch *ds = ps->ds;
        u16 output_ports = 0;
        int reg;
        int i;
@@ -1138,21 +1355,25 @@ static int _mv88e6xxx_port_based_vlan_map(struct dsa_switch *ds, int port)
        /* prevent frames from going back out of the port they came in on */
        output_ports &= ~BIT(port);
 
-       reg = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_BASE_VLAN);
+       reg = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_BASE_VLAN);
        if (reg < 0)
                return reg;
 
        reg &= ~mask;
        reg |= output_ports & mask;
 
-       return _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_BASE_VLAN, reg);
+       return _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_BASE_VLAN, reg);
 }
 
-void mv88e6xxx_port_stp_state_set(struct dsa_switch *ds, int port, u8 state)
+static void mv88e6xxx_port_stp_state_set(struct dsa_switch *ds, int port,
+                                        u8 state)
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        int stp_state;
 
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_PORTSTATE))
+               return;
+
        switch (state) {
        case BR_STATE_DISABLED:
                stp_state = PORT_CONTROL_STATE_DISABLED;
@@ -1178,13 +1399,14 @@ void mv88e6xxx_port_stp_state_set(struct dsa_switch *ds, int port, u8 state)
        schedule_work(&ps->bridge_work);
 }
 
-static int _mv88e6xxx_port_pvid(struct dsa_switch *ds, int port, u16 *new,
-                               u16 *old)
+static int _mv88e6xxx_port_pvid(struct mv88e6xxx_priv_state *ps, int port,
+                               u16 *new, u16 *old)
 {
+       struct dsa_switch *ds = ps->ds;
        u16 pvid;
        int ret;
 
-       ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_DEFAULT_VLAN);
+       ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_DEFAULT_VLAN);
        if (ret < 0)
                return ret;
 
@@ -1194,7 +1416,7 @@ static int _mv88e6xxx_port_pvid(struct dsa_switch *ds, int port, u16 *new,
                ret &= ~PORT_DEFAULT_VLAN_MASK;
                ret |= *new & PORT_DEFAULT_VLAN_MASK;
 
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
                                           PORT_DEFAULT_VLAN, ret);
                if (ret < 0)
                        return ret;
@@ -1209,55 +1431,56 @@ static int _mv88e6xxx_port_pvid(struct dsa_switch *ds, int port, u16 *new,
        return 0;
 }
 
-static int _mv88e6xxx_port_pvid_get(struct dsa_switch *ds, int port, u16 *pvid)
+static int _mv88e6xxx_port_pvid_get(struct mv88e6xxx_priv_state *ps,
+                                   int port, u16 *pvid)
 {
-       return _mv88e6xxx_port_pvid(ds, port, NULL, pvid);
+       return _mv88e6xxx_port_pvid(ps, port, NULL, pvid);
 }
 
-static int _mv88e6xxx_port_pvid_set(struct dsa_switch *ds, int port, u16 pvid)
+static int _mv88e6xxx_port_pvid_set(struct mv88e6xxx_priv_state *ps,
+                                   int port, u16 pvid)
 {
-       return _mv88e6xxx_port_pvid(ds, port, &pvid, NULL);
+       return _mv88e6xxx_port_pvid(ps, port, &pvid, NULL);
 }
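
_mv88e6xxx_port_pvid() above is a combined accessor: a non-NULL new writes the default VID, a non-NULL old returns the previous value, and the two wrappers each pin one argument to NULL. A locked-context sketch using both at once (hypothetical name; ps->smi_mutex held, as in the callers):

static int example_swap_pvid(struct mv88e6xxx_priv_state *ps, int port,
                             u16 new_pvid, u16 *old_pvid)
{
        return _mv88e6xxx_port_pvid(ps, port, &new_pvid, old_pvid);
}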
 
-static int _mv88e6xxx_vtu_wait(struct dsa_switch *ds)
+static int _mv88e6xxx_vtu_wait(struct mv88e6xxx_priv_state *ps)
 {
-       return _mv88e6xxx_wait(ds, REG_GLOBAL, GLOBAL_VTU_OP,
+       return _mv88e6xxx_wait(ps, REG_GLOBAL, GLOBAL_VTU_OP,
                               GLOBAL_VTU_OP_BUSY);
 }
 
-static int _mv88e6xxx_vtu_cmd(struct dsa_switch *ds, u16 op)
+static int _mv88e6xxx_vtu_cmd(struct mv88e6xxx_priv_state *ps, u16 op)
 {
        int ret;
 
-       ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_OP, op);
+       ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_OP, op);
        if (ret < 0)
                return ret;
 
-       return _mv88e6xxx_vtu_wait(ds);
+       return _mv88e6xxx_vtu_wait(ps);
 }
 
-static int _mv88e6xxx_vtu_stu_flush(struct dsa_switch *ds)
+static int _mv88e6xxx_vtu_stu_flush(struct mv88e6xxx_priv_state *ps)
 {
        int ret;
 
-       ret = _mv88e6xxx_vtu_wait(ds);
+       ret = _mv88e6xxx_vtu_wait(ps);
        if (ret < 0)
                return ret;
 
-       return _mv88e6xxx_vtu_cmd(ds, GLOBAL_VTU_OP_FLUSH_ALL);
+       return _mv88e6xxx_vtu_cmd(ps, GLOBAL_VTU_OP_FLUSH_ALL);
 }
 
-static int _mv88e6xxx_vtu_stu_data_read(struct dsa_switch *ds,
+static int _mv88e6xxx_vtu_stu_data_read(struct mv88e6xxx_priv_state *ps,
                                        struct mv88e6xxx_vtu_stu_entry *entry,
                                        unsigned int nibble_offset)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        u16 regs[3];
        int i;
        int ret;
 
        for (i = 0; i < 3; ++i) {
-               ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL,
+               ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL,
                                          GLOBAL_VTU_DATA_0_3 + i);
                if (ret < 0)
                        return ret;
@@ -1275,11 +1498,10 @@ static int _mv88e6xxx_vtu_stu_data_read(struct dsa_switch *ds,
        return 0;
 }
 
-static int _mv88e6xxx_vtu_stu_data_write(struct dsa_switch *ds,
+static int _mv88e6xxx_vtu_stu_data_write(struct mv88e6xxx_priv_state *ps,
                                         struct mv88e6xxx_vtu_stu_entry *entry,
                                         unsigned int nibble_offset)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        u16 regs[3] = { 0 };
        int i;
        int ret;
@@ -1292,7 +1514,7 @@ static int _mv88e6xxx_vtu_stu_data_write(struct dsa_switch *ds,
        }
 
        for (i = 0; i < 3; ++i) {
-               ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL,
+               ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL,
                                           GLOBAL_VTU_DATA_0_3 + i, regs[i]);
                if (ret < 0)
                        return ret;
@@ -1301,27 +1523,27 @@ static int _mv88e6xxx_vtu_stu_data_write(struct dsa_switch *ds,
        return 0;
 }
 
-static int _mv88e6xxx_vtu_vid_write(struct dsa_switch *ds, u16 vid)
+static int _mv88e6xxx_vtu_vid_write(struct mv88e6xxx_priv_state *ps, u16 vid)
 {
-       return _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_VID,
+       return _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_VID,
                                    vid & GLOBAL_VTU_VID_MASK);
 }
 
-static int _mv88e6xxx_vtu_getnext(struct dsa_switch *ds,
+static int _mv88e6xxx_vtu_getnext(struct mv88e6xxx_priv_state *ps,
                                  struct mv88e6xxx_vtu_stu_entry *entry)
 {
        struct mv88e6xxx_vtu_stu_entry next = { 0 };
        int ret;
 
-       ret = _mv88e6xxx_vtu_wait(ds);
+       ret = _mv88e6xxx_vtu_wait(ps);
        if (ret < 0)
                return ret;
 
-       ret = _mv88e6xxx_vtu_cmd(ds, GLOBAL_VTU_OP_VTU_GET_NEXT);
+       ret = _mv88e6xxx_vtu_cmd(ps, GLOBAL_VTU_OP_VTU_GET_NEXT);
        if (ret < 0)
                return ret;
 
-       ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_VTU_VID);
+       ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_VTU_VID);
        if (ret < 0)
                return ret;
 
@@ -1329,22 +1551,22 @@ static int _mv88e6xxx_vtu_getnext(struct dsa_switch *ds,
        next.valid = !!(ret & GLOBAL_VTU_VID_VALID);
 
        if (next.valid) {
-               ret = _mv88e6xxx_vtu_stu_data_read(ds, &next, 0);
+               ret = _mv88e6xxx_vtu_stu_data_read(ps, &next, 0);
                if (ret < 0)
                        return ret;
 
-               if (mv88e6xxx_has_fid_reg(ds)) {
-                       ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL,
+               if (mv88e6xxx_has_fid_reg(ps)) {
+                       ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL,
                                                  GLOBAL_VTU_FID);
                        if (ret < 0)
                                return ret;
 
                        next.fid = ret & GLOBAL_VTU_FID_MASK;
-               } else if (mv88e6xxx_num_databases(ds) == 256) {
+               } else if (mv88e6xxx_num_databases(ps) == 256) {
                        /* VTU DBNum[7:4] are located in VTU Operation 11:8, and
                         * VTU DBNum[3:0] are located in VTU Operation 3:0
                         */
-                       ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL,
+                       ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL,
                                                  GLOBAL_VTU_OP);
                        if (ret < 0)
                                return ret;
@@ -1353,8 +1575,8 @@ static int _mv88e6xxx_vtu_getnext(struct dsa_switch *ds,
                        next.fid |= ret & 0xf;
                }
 
-               if (mv88e6xxx_has_stu(ds)) {
-                       ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL,
+               if (mv88e6xxx_has_stu(ps)) {
+                       ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL,
                                                  GLOBAL_VTU_SID);
                        if (ret < 0)
                                return ret;
@@ -1367,27 +1589,30 @@ static int _mv88e6xxx_vtu_getnext(struct dsa_switch *ds,
        return 0;
 }
 
-int mv88e6xxx_port_vlan_dump(struct dsa_switch *ds, int port,
-                            struct switchdev_obj_port_vlan *vlan,
-                            int (*cb)(struct switchdev_obj *obj))
+static int mv88e6xxx_port_vlan_dump(struct dsa_switch *ds, int port,
+                                   struct switchdev_obj_port_vlan *vlan,
+                                   int (*cb)(struct switchdev_obj *obj))
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        struct mv88e6xxx_vtu_stu_entry next;
        u16 pvid;
        int err;
 
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_VTU))
+               return -EOPNOTSUPP;
+
        mutex_lock(&ps->smi_mutex);
 
-       err = _mv88e6xxx_port_pvid_get(ds, port, &pvid);
+       err = _mv88e6xxx_port_pvid_get(ps, port, &pvid);
        if (err)
                goto unlock;
 
-       err = _mv88e6xxx_vtu_vid_write(ds, GLOBAL_VTU_VID_MASK);
+       err = _mv88e6xxx_vtu_vid_write(ps, GLOBAL_VTU_VID_MASK);
        if (err)
                goto unlock;
 
        do {
-               err = _mv88e6xxx_vtu_getnext(ds, &next);
+               err = _mv88e6xxx_vtu_getnext(ps, &next);
                if (err)
                        break;
 
@@ -1418,14 +1643,14 @@ unlock:
        return err;
 }
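
The dump loop above seeds GLOBAL_VTU_VID with all ones so that the first GET_NEXT wraps around to the lowest programmed VID, then walks entries until the hardware returns an invalid one. A toy model of the idiom; vtu_vid_write()/vtu_getnext() are stand-ins and the "hardware" is a sorted array:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define VID_MASK 0xfff

struct vtu_entry { uint16_t vid; bool valid; };

/* Fake sorted VTU plus the cursor that GET_NEXT advances. */
static const uint16_t vids[] = { 1, 10, 100 };
static size_t cursor;

static void vtu_vid_write(uint16_t vid)
{
	/* Only the all-ones seed is modelled: it wraps GET_NEXT around
	 * to the lowest valid entry, which is how the dump loops start. */
	if ((vid & VID_MASK) == VID_MASK)
		cursor = 0;
}

static struct vtu_entry vtu_getnext(void)
{
	struct vtu_entry e = { 0 };

	if (cursor < sizeof(vids) / sizeof(vids[0])) {
		e.vid = vids[cursor++];
		e.valid = true;
	}
	return e;
}

int main(void)
{
	vtu_vid_write(VID_MASK);            /* wrap to the first entry */

	for (;;) {
		struct vtu_entry e = vtu_getnext();

		if (!e.valid)               /* table exhausted */
			break;
		printf("VLAN %u\n", e.vid);
	}
	return 0;
}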
 
-static int _mv88e6xxx_vtu_loadpurge(struct dsa_switch *ds,
+static int _mv88e6xxx_vtu_loadpurge(struct mv88e6xxx_priv_state *ps,
                                    struct mv88e6xxx_vtu_stu_entry *entry)
 {
        u16 op = GLOBAL_VTU_OP_VTU_LOAD_PURGE;
        u16 reg = 0;
        int ret;
 
-       ret = _mv88e6xxx_vtu_wait(ds);
+       ret = _mv88e6xxx_vtu_wait(ps);
        if (ret < 0)
                return ret;
 
@@ -1433,23 +1658,23 @@ static int _mv88e6xxx_vtu_loadpurge(struct dsa_switch *ds,
                goto loadpurge;
 
        /* Write port member tags */
-       ret = _mv88e6xxx_vtu_stu_data_write(ds, entry, 0);
+       ret = _mv88e6xxx_vtu_stu_data_write(ps, entry, 0);
        if (ret < 0)
                return ret;
 
-       if (mv88e6xxx_has_stu(ds)) {
+       if (mv88e6xxx_has_stu(ps)) {
                reg = entry->sid & GLOBAL_VTU_SID_MASK;
-               ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_SID, reg);
+               ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_SID, reg);
                if (ret < 0)
                        return ret;
        }
 
-       if (mv88e6xxx_has_fid_reg(ds)) {
+       if (mv88e6xxx_has_fid_reg(ps)) {
                reg = entry->fid & GLOBAL_VTU_FID_MASK;
-               ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_FID, reg);
+               ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_FID, reg);
                if (ret < 0)
                        return ret;
-       } else if (mv88e6xxx_num_databases(ds) == 256) {
+       } else if (mv88e6xxx_num_databases(ps) == 256) {
                /* VTU DBNum[7:4] are located in VTU Operation 11:8, and
                 * VTU DBNum[3:0] are located in VTU Operation 3:0
                 */
@@ -1460,46 +1685,46 @@ static int _mv88e6xxx_vtu_loadpurge(struct dsa_switch *ds,
        reg = GLOBAL_VTU_VID_VALID;
 loadpurge:
        reg |= entry->vid & GLOBAL_VTU_VID_MASK;
-       ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_VID, reg);
+       ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_VID, reg);
        if (ret < 0)
                return ret;
 
-       return _mv88e6xxx_vtu_cmd(ds, op);
+       return _mv88e6xxx_vtu_cmd(ps, op);
 }
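
For chips with 256 address databases, the comment above notes that the 8-bit FID is split across the VTU Operation register, bits 11:8 and bits 3:0. A quick arithmetic check of that split and of the inverse used by the getnext path:

#include <assert.h>
#include <stdint.h>

/* Split an 8-bit FID across VTU Op bits 11:8 (FID[7:4]) and 3:0 (FID[3:0]). */
static uint16_t fid_to_op(uint16_t op, uint16_t fid)
{
	op |= (fid & 0xf0) << 4;    /* FID[7:4] -> op[11:8] */
	op |= fid & 0x0f;           /* FID[3:0] -> op[3:0]  */
	return op;
}

static uint16_t op_to_fid(uint16_t op)
{
	return ((op & 0xf00) >> 4) | (op & 0x0f);   /* mirrors the getnext code */
}

int main(void)
{
	for (uint16_t fid = 0; fid < 256; fid++)
		assert(op_to_fid(fid_to_op(0, fid)) == fid);
	return 0;
}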
 
-static int _mv88e6xxx_stu_getnext(struct dsa_switch *ds, u8 sid,
+static int _mv88e6xxx_stu_getnext(struct mv88e6xxx_priv_state *ps, u8 sid,
                                  struct mv88e6xxx_vtu_stu_entry *entry)
 {
        struct mv88e6xxx_vtu_stu_entry next = { 0 };
        int ret;
 
-       ret = _mv88e6xxx_vtu_wait(ds);
+       ret = _mv88e6xxx_vtu_wait(ps);
        if (ret < 0)
                return ret;
 
-       ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_SID,
+       ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_SID,
                                   sid & GLOBAL_VTU_SID_MASK);
        if (ret < 0)
                return ret;
 
-       ret = _mv88e6xxx_vtu_cmd(ds, GLOBAL_VTU_OP_STU_GET_NEXT);
+       ret = _mv88e6xxx_vtu_cmd(ps, GLOBAL_VTU_OP_STU_GET_NEXT);
        if (ret < 0)
                return ret;
 
-       ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_VTU_SID);
+       ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_VTU_SID);
        if (ret < 0)
                return ret;
 
        next.sid = ret & GLOBAL_VTU_SID_MASK;
 
-       ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_VTU_VID);
+       ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_VTU_VID);
        if (ret < 0)
                return ret;
 
        next.valid = !!(ret & GLOBAL_VTU_VID_VALID);
 
        if (next.valid) {
-               ret = _mv88e6xxx_vtu_stu_data_read(ds, &next, 2);
+               ret = _mv88e6xxx_vtu_stu_data_read(ps, &next, 2);
                if (ret < 0)
                        return ret;
        }
@@ -1508,13 +1733,13 @@ static int _mv88e6xxx_stu_getnext(struct dsa_switch *ds, u8 sid,
        return 0;
 }
 
-static int _mv88e6xxx_stu_loadpurge(struct dsa_switch *ds,
+static int _mv88e6xxx_stu_loadpurge(struct mv88e6xxx_priv_state *ps,
                                    struct mv88e6xxx_vtu_stu_entry *entry)
 {
        u16 reg = 0;
        int ret;
 
-       ret = _mv88e6xxx_vtu_wait(ds);
+       ret = _mv88e6xxx_vtu_wait(ps);
        if (ret < 0)
                return ret;
 
@@ -1522,40 +1747,41 @@ static int _mv88e6xxx_stu_loadpurge(struct dsa_switch *ds,
                goto loadpurge;
 
        /* Write port states */
-       ret = _mv88e6xxx_vtu_stu_data_write(ds, entry, 2);
+       ret = _mv88e6xxx_vtu_stu_data_write(ps, entry, 2);
        if (ret < 0)
                return ret;
 
        reg = GLOBAL_VTU_VID_VALID;
 loadpurge:
-       ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_VID, reg);
+       ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_VID, reg);
        if (ret < 0)
                return ret;
 
        reg = entry->sid & GLOBAL_VTU_SID_MASK;
-       ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_SID, reg);
+       ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_SID, reg);
        if (ret < 0)
                return ret;
 
-       return _mv88e6xxx_vtu_cmd(ds, GLOBAL_VTU_OP_STU_LOAD_PURGE);
+       return _mv88e6xxx_vtu_cmd(ps, GLOBAL_VTU_OP_STU_LOAD_PURGE);
 }
 
-static int _mv88e6xxx_port_fid(struct dsa_switch *ds, int port, u16 *new,
-                              u16 *old)
+static int _mv88e6xxx_port_fid(struct mv88e6xxx_priv_state *ps, int port,
+                              u16 *new, u16 *old)
 {
+       struct dsa_switch *ds = ps->ds;
        u16 upper_mask;
        u16 fid;
        int ret;
 
-       if (mv88e6xxx_num_databases(ds) == 4096)
+       if (mv88e6xxx_num_databases(ps) == 4096)
                upper_mask = 0xff;
-       else if (mv88e6xxx_num_databases(ds) == 256)
+       else if (mv88e6xxx_num_databases(ps) == 256)
                upper_mask = 0xf;
        else
                return -EOPNOTSUPP;
 
        /* Port's default FID bits 3:0 are located in reg 0x06, offset 12 */
-       ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_BASE_VLAN);
+       ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_BASE_VLAN);
        if (ret < 0)
                return ret;
 
@@ -1565,14 +1791,14 @@ static int _mv88e6xxx_port_fid(struct dsa_switch *ds, int port, u16 *new,
                ret &= ~PORT_BASE_VLAN_FID_3_0_MASK;
                ret |= (*new << 12) & PORT_BASE_VLAN_FID_3_0_MASK;
 
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_BASE_VLAN,
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_BASE_VLAN,
                                           ret);
                if (ret < 0)
                        return ret;
        }
 
        /* Port's default FID bits 11:4 are located in reg 0x05, offset 0 */
-       ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_CONTROL_1);
+       ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_CONTROL_1);
        if (ret < 0)
                return ret;
 
@@ -1582,7 +1808,7 @@ static int _mv88e6xxx_port_fid(struct dsa_switch *ds, int port, u16 *new,
                ret &= ~upper_mask;
                ret |= (*new >> 4) & upper_mask;
 
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_CONTROL_1,
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_CONTROL_1,
                                           ret);
                if (ret < 0)
                        return ret;
@@ -1596,19 +1822,20 @@ static int _mv88e6xxx_port_fid(struct dsa_switch *ds, int port, u16 *new,
        return 0;
 }
 
-static int _mv88e6xxx_port_fid_get(struct dsa_switch *ds, int port, u16 *fid)
+static int _mv88e6xxx_port_fid_get(struct mv88e6xxx_priv_state *ps,
+                                  int port, u16 *fid)
 {
-       return _mv88e6xxx_port_fid(ds, port, NULL, fid);
+       return _mv88e6xxx_port_fid(ps, port, NULL, fid);
 }
 
-static int _mv88e6xxx_port_fid_set(struct dsa_switch *ds, int port, u16 fid)
+static int _mv88e6xxx_port_fid_set(struct mv88e6xxx_priv_state *ps,
+                                  int port, u16 fid)
 {
-       return _mv88e6xxx_port_fid(ds, port, &fid, NULL);
+       return _mv88e6xxx_port_fid(ps, port, &fid, NULL);
 }
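
_mv88e6xxx_port_fid(), like _mv88e6xxx_port_pvid() earlier, folds get and set into one read-modify-write helper: callers pass `new` to write, `old` to read back the previous value, and either may be NULL. A minimal sketch of the convention, with a plain variable standing in for the register pair:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint16_t fid_reg;    /* stand-in for the two hardware registers */

/* One helper serves both directions; either pointer may be NULL. */
static int port_fid(const uint16_t *new, uint16_t *old)
{
	if (old)
		*old = fid_reg;
	if (new)
		fid_reg = *new;
	return 0;
}

static int port_fid_get(uint16_t *fid) { return port_fid(NULL, fid); }
static int port_fid_set(uint16_t fid)  { return port_fid(&fid, NULL); }

int main(void)
{
	uint16_t fid = 0;

	port_fid_set(42);
	port_fid_get(&fid);
	printf("fid = %u\n", fid);  /* 42 */
	return 0;
}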
 
-static int _mv88e6xxx_fid_new(struct dsa_switch *ds, u16 *fid)
+static int _mv88e6xxx_fid_new(struct mv88e6xxx_priv_state *ps, u16 *fid)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        DECLARE_BITMAP(fid_bitmap, MV88E6XXX_N_FID);
        struct mv88e6xxx_vtu_stu_entry vlan;
        int i, err;
@@ -1617,7 +1844,7 @@ static int _mv88e6xxx_fid_new(struct dsa_switch *ds, u16 *fid)
 
        /* Set every FID bit used by the (un)bridged ports */
        for (i = 0; i < ps->info->num_ports; ++i) {
-               err = _mv88e6xxx_port_fid_get(ds, i, fid);
+               err = _mv88e6xxx_port_fid_get(ps, i, fid);
                if (err)
                        return err;
 
@@ -1625,12 +1852,12 @@ static int _mv88e6xxx_fid_new(struct dsa_switch *ds, u16 *fid)
        }
 
        /* Set every FID bit used by the VLAN entries */
-       err = _mv88e6xxx_vtu_vid_write(ds, GLOBAL_VTU_VID_MASK);
+       err = _mv88e6xxx_vtu_vid_write(ps, GLOBAL_VTU_VID_MASK);
        if (err)
                return err;
 
        do {
-               err = _mv88e6xxx_vtu_getnext(ds, &vlan);
+               err = _mv88e6xxx_vtu_getnext(ps, &vlan);
                if (err)
                        return err;
 
@@ -1644,24 +1871,24 @@ static int _mv88e6xxx_fid_new(struct dsa_switch *ds, u16 *fid)
         * databases are not needed. Return the next available positive FID.
         */
        *fid = find_next_zero_bit(fid_bitmap, MV88E6XXX_N_FID, 1);
-       if (unlikely(*fid >= mv88e6xxx_num_databases(ds)))
+       if (unlikely(*fid >= mv88e6xxx_num_databases(ps)))
                return -ENOSPC;
 
        /* Clear the database */
-       return _mv88e6xxx_atu_flush(ds, *fid, true);
+       return _mv88e6xxx_atu_flush(ps, *fid, true);
 }
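
The FID allocator marks every database referenced by a port or by a VTU entry in a bitmap, then takes the first clear bit at or above 1, FID 0 being the shared default. A userspace rendering with a toy find_next_zero_bit():

#include <stdint.h>
#include <stdio.h>

#define N_FID 4096
#define BITS_PER_LONG (8 * sizeof(unsigned long))

static unsigned long fid_bitmap[N_FID / BITS_PER_LONG];

static void set_used(unsigned b)
{
	fid_bitmap[b / BITS_PER_LONG] |= 1UL << (b % BITS_PER_LONG);
}

/* Minimal stand-in for the kernel's find_next_zero_bit(). */
static unsigned find_next_zero(unsigned from, unsigned max)
{
	for (unsigned b = from; b < max; b++)
		if (!(fid_bitmap[b / BITS_PER_LONG] & (1UL << (b % BITS_PER_LONG))))
			return b;
	return max;
}

int main(void)
{
	set_used(0);    /* FID 0 is reserved: shared default database */
	set_used(1);    /* pretend a port already uses FID 1 */
	set_used(2);    /* ...and a VLAN uses FID 2 */

	unsigned fid = find_next_zero(1, N_FID);

	if (fid >= N_FID)
		return 1;                   /* -ENOSPC in the driver */
	printf("new fid = %u\n", fid);      /* 3 */
	return 0;
}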
 
-static int _mv88e6xxx_vtu_new(struct dsa_switch *ds, u16 vid,
+static int _mv88e6xxx_vtu_new(struct mv88e6xxx_priv_state *ps, u16 vid,
                              struct mv88e6xxx_vtu_stu_entry *entry)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+       struct dsa_switch *ds = ps->ds;
        struct mv88e6xxx_vtu_stu_entry vlan = {
                .valid = true,
                .vid = vid,
        };
        int i, err;
 
-       err = _mv88e6xxx_fid_new(ds, &vlan.fid);
+       err = _mv88e6xxx_fid_new(ps, &vlan.fid);
        if (err)
                return err;
 
@@ -1671,8 +1898,8 @@ static int _mv88e6xxx_vtu_new(struct dsa_switch *ds, u16 vid,
                        ? GLOBAL_VTU_DATA_MEMBER_TAG_UNMODIFIED
                        : GLOBAL_VTU_DATA_MEMBER_TAG_NON_MEMBER;
 
-       if (mv88e6xxx_6097_family(ds) || mv88e6xxx_6165_family(ds) ||
-           mv88e6xxx_6351_family(ds) || mv88e6xxx_6352_family(ds)) {
+       if (mv88e6xxx_6097_family(ps) || mv88e6xxx_6165_family(ps) ||
+           mv88e6xxx_6351_family(ps) || mv88e6xxx_6352_family(ps)) {
                struct mv88e6xxx_vtu_stu_entry vstp;
 
                /* Adding a VTU entry requires a valid STU entry. As VSTP is not
@@ -1680,7 +1907,7 @@ static int _mv88e6xxx_vtu_new(struct dsa_switch *ds, u16 vid,
                 * entries. Thus, validate SID 0.

                 */
                vlan.sid = 0;
-               err = _mv88e6xxx_stu_getnext(ds, GLOBAL_VTU_SID_MASK, &vstp);
+               err = _mv88e6xxx_stu_getnext(ps, GLOBAL_VTU_SID_MASK, &vstp);
                if (err)
                        return err;
 
@@ -1689,7 +1916,7 @@ static int _mv88e6xxx_vtu_new(struct dsa_switch *ds, u16 vid,
                        vstp.valid = true;
                        vstp.sid = vlan.sid;
 
-                       err = _mv88e6xxx_stu_loadpurge(ds, &vstp);
+                       err = _mv88e6xxx_stu_loadpurge(ps, &vstp);
                        if (err)
                                return err;
                }
@@ -1699,7 +1926,7 @@ static int _mv88e6xxx_vtu_new(struct dsa_switch *ds, u16 vid,
        return 0;
 }
 
-static int _mv88e6xxx_vtu_get(struct dsa_switch *ds, u16 vid,
+static int _mv88e6xxx_vtu_get(struct mv88e6xxx_priv_state *ps, u16 vid,
                              struct mv88e6xxx_vtu_stu_entry *entry, bool creat)
 {
        int err;
@@ -1707,11 +1934,11 @@ static int _mv88e6xxx_vtu_get(struct dsa_switch *ds, u16 vid,
        if (!vid)
                return -EINVAL;
 
-       err = _mv88e6xxx_vtu_vid_write(ds, vid - 1);
+       err = _mv88e6xxx_vtu_vid_write(ps, vid - 1);
        if (err)
                return err;
 
-       err = _mv88e6xxx_vtu_getnext(ds, entry);
+       err = _mv88e6xxx_vtu_getnext(ps, entry);
        if (err)
                return err;
 
@@ -1722,7 +1949,7 @@ static int _mv88e6xxx_vtu_get(struct dsa_switch *ds, u16 vid,
                 * -EOPNOTSUPP to inform the bridge about a possible software VLAN.
                 */
 
-               err = _mv88e6xxx_vtu_new(ds, vid, entry);
+               err = _mv88e6xxx_vtu_new(ps, vid, entry);
        }
 
        return err;
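
Since the VTU only exposes a get-next primitive, the exact-match lookup above seeds the VID register with vid - 1 and then checks whether the entry that comes back carries the requested VID. A sketch against a fake single-entry table:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct vtu_entry { uint16_t vid; bool valid; };

/* Fake single-entry VTU: only VLAN 100 exists.  GET_NEXT returns the
 * first valid entry with a VID strictly greater than the seeded one. */
static uint16_t seeded;

static void vtu_vid_write(uint16_t vid) { seeded = vid; }

static struct vtu_entry vtu_getnext(void)
{
	struct vtu_entry e = { 0 };

	if (seeded < 100) {
		e.vid = 100;
		e.valid = true;
	}
	return e;
}

/* Exact lookup on top of GET_NEXT: seed vid - 1 and compare the result. */
static bool vtu_get(uint16_t vid, struct vtu_entry *entry)
{
	if (!vid)
		return false;       /* -EINVAL in the driver */

	vtu_vid_write(vid - 1);
	*entry = vtu_getnext();
	return entry->valid && entry->vid == vid;
}

int main(void)
{
	struct vtu_entry e;

	printf("vid 100: %s\n", vtu_get(100, &e) ? "hit" : "miss");
	printf("vid 42:  %s\n", vtu_get(42, &e) ? "hit" : "miss");
	return 0;
}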
@@ -1740,12 +1967,12 @@ static int mv88e6xxx_port_check_hw_vlan(struct dsa_switch *ds, int port,
 
        mutex_lock(&ps->smi_mutex);
 
-       err = _mv88e6xxx_vtu_vid_write(ds, vid_begin - 1);
+       err = _mv88e6xxx_vtu_vid_write(ps, vid_begin - 1);
        if (err)
                goto unlock;
 
        do {
-               err = _mv88e6xxx_vtu_getnext(ds, &vlan);
+               err = _mv88e6xxx_vtu_getnext(ps, &vlan);
                if (err)
                        goto unlock;
 
@@ -1789,17 +2016,20 @@ static const char * const mv88e6xxx_port_8021q_mode_names[] = {
        [PORT_CONTROL_2_8021Q_SECURE] = "Secure",
 };
 
-int mv88e6xxx_port_vlan_filtering(struct dsa_switch *ds, int port,
-                                 bool vlan_filtering)
+static int mv88e6xxx_port_vlan_filtering(struct dsa_switch *ds, int port,
+                                        bool vlan_filtering)
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        u16 old, new = vlan_filtering ? PORT_CONTROL_2_8021Q_SECURE :
                PORT_CONTROL_2_8021Q_DISABLED;
        int ret;
 
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_VTU))
+               return -EOPNOTSUPP;
+
        mutex_lock(&ps->smi_mutex);
 
-       ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_CONTROL_2);
+       ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_CONTROL_2);
        if (ret < 0)
                goto unlock;
 
@@ -1809,7 +2039,7 @@ int mv88e6xxx_port_vlan_filtering(struct dsa_switch *ds, int port,
                ret &= ~PORT_CONTROL_2_8021Q_MASK;
                ret |= new & PORT_CONTROL_2_8021Q_MASK;
 
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_CONTROL_2,
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_CONTROL_2,
                                           ret);
                if (ret < 0)
                        goto unlock;
@@ -1826,12 +2056,16 @@ unlock:
        return ret;
 }
 
-int mv88e6xxx_port_vlan_prepare(struct dsa_switch *ds, int port,
-                               const struct switchdev_obj_port_vlan *vlan,
-                               struct switchdev_trans *trans)
+static int mv88e6xxx_port_vlan_prepare(struct dsa_switch *ds, int port,
+                                      const struct switchdev_obj_port_vlan *vlan,
+                                      struct switchdev_trans *trans)
 {
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        int err;
 
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_VTU))
+               return -EOPNOTSUPP;
+
        /* If the requested port doesn't belong to the same bridge as the VLAN
         * members, do not support it (yet) and fall back to a software VLAN.
         */
@@ -1846,13 +2080,13 @@ int mv88e6xxx_port_vlan_prepare(struct dsa_switch *ds, int port,
        return 0;
 }
 
-static int _mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port, u16 vid,
-                                   bool untagged)
+static int _mv88e6xxx_port_vlan_add(struct mv88e6xxx_priv_state *ps, int port,
+                                   u16 vid, bool untagged)
 {
        struct mv88e6xxx_vtu_stu_entry vlan;
        int err;
 
-       err = _mv88e6xxx_vtu_get(ds, vid, &vlan, true);
+       err = _mv88e6xxx_vtu_get(ps, vid, &vlan, true);
        if (err)
                return err;
 
@@ -1860,39 +2094,43 @@ static int _mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port, u16 vid,
                GLOBAL_VTU_DATA_MEMBER_TAG_UNTAGGED :
                GLOBAL_VTU_DATA_MEMBER_TAG_TAGGED;
 
-       return _mv88e6xxx_vtu_loadpurge(ds, &vlan);
+       return _mv88e6xxx_vtu_loadpurge(ps, &vlan);
 }
 
-void mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port,
-                            const struct switchdev_obj_port_vlan *vlan,
-                            struct switchdev_trans *trans)
+static void mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port,
+                                   const struct switchdev_obj_port_vlan *vlan,
+                                   struct switchdev_trans *trans)
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
        bool pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID;
        u16 vid;
 
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_VTU))
+               return;
+
        mutex_lock(&ps->smi_mutex);
 
        for (vid = vlan->vid_begin; vid <= vlan->vid_end; ++vid)
-               if (_mv88e6xxx_port_vlan_add(ds, port, vid, untagged))
+               if (_mv88e6xxx_port_vlan_add(ps, port, vid, untagged))
                        netdev_err(ds->ports[port], "failed to add VLAN %d%c\n",
                                   vid, untagged ? 'u' : 't');
 
-       if (pvid && _mv88e6xxx_port_pvid_set(ds, port, vlan->vid_end))
+       if (pvid && _mv88e6xxx_port_pvid_set(ps, port, vlan->vid_end))
                netdev_err(ds->ports[port], "failed to set PVID %d\n",
                           vlan->vid_end);
 
        mutex_unlock(&ps->smi_mutex);
 }
 
-static int _mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port, u16 vid)
+static int _mv88e6xxx_port_vlan_del(struct mv88e6xxx_priv_state *ps,
+                                   int port, u16 vid)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+       struct dsa_switch *ds = ps->ds;
        struct mv88e6xxx_vtu_stu_entry vlan;
        int i, err;
 
-       err = _mv88e6xxx_vtu_get(ds, vid, &vlan, false);
+       err = _mv88e6xxx_vtu_get(ps, vid, &vlan, false);
        if (err)
                return err;
 
@@ -1914,33 +2152,36 @@ static int _mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port, u16 vid)
                }
        }
 
-       err = _mv88e6xxx_vtu_loadpurge(ds, &vlan);
+       err = _mv88e6xxx_vtu_loadpurge(ps, &vlan);
        if (err)
                return err;
 
-       return _mv88e6xxx_atu_remove(ds, vlan.fid, port, false);
+       return _mv88e6xxx_atu_remove(ps, vlan.fid, port, false);
 }
 
-int mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port,
-                           const struct switchdev_obj_port_vlan *vlan)
+static int mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port,
+                                  const struct switchdev_obj_port_vlan *vlan)
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        u16 pvid, vid;
        int err = 0;
 
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_VTU))
+               return -EOPNOTSUPP;
+
        mutex_lock(&ps->smi_mutex);
 
-       err = _mv88e6xxx_port_pvid_get(ds, port, &pvid);
+       err = _mv88e6xxx_port_pvid_get(ps, port, &pvid);
        if (err)
                goto unlock;
 
        for (vid = vlan->vid_begin; vid <= vlan->vid_end; ++vid) {
-               err = _mv88e6xxx_port_vlan_del(ds, port, vid);
+               err = _mv88e6xxx_port_vlan_del(ps, port, vid);
                if (err)
                        goto unlock;
 
                if (vid == pvid) {
-                       err = _mv88e6xxx_port_pvid_set(ds, port, 0);
+                       err = _mv88e6xxx_port_pvid_set(ps, port, 0);
                        if (err)
                                goto unlock;
                }
@@ -1952,14 +2193,14 @@ unlock:
        return err;
 }
 
-static int _mv88e6xxx_atu_mac_write(struct dsa_switch *ds,
+static int _mv88e6xxx_atu_mac_write(struct mv88e6xxx_priv_state *ps,
                                    const unsigned char *addr)
 {
        int i, ret;
 
        for (i = 0; i < 3; i++) {
                ret = _mv88e6xxx_reg_write(
-                       ds, REG_GLOBAL, GLOBAL_ATU_MAC_01 + i,
+                       ps, REG_GLOBAL, GLOBAL_ATU_MAC_01 + i,
                        (addr[i * 2] << 8) | addr[i * 2 + 1]);
                if (ret < 0)
                        return ret;
@@ -1968,12 +2209,13 @@ static int _mv88e6xxx_atu_mac_write(struct dsa_switch *ds,
        return 0;
 }
 
-static int _mv88e6xxx_atu_mac_read(struct dsa_switch *ds, unsigned char *addr)
+static int _mv88e6xxx_atu_mac_read(struct mv88e6xxx_priv_state *ps,
+                                  unsigned char *addr)
 {
        int i, ret;
 
        for (i = 0; i < 3; i++) {
-               ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL,
+               ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL,
                                          GLOBAL_ATU_MAC_01 + i);
                if (ret < 0)
                        return ret;
@@ -1984,27 +2226,27 @@ static int _mv88e6xxx_atu_mac_read(struct dsa_switch *ds, unsigned char *addr)
        return 0;
 }
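
The ATU MAC registers hold the address big-endian, two bytes per 16-bit word, which is what the shift/mask pairs in the two helpers implement. A standalone round-trip check:

#include <assert.h>
#include <stdint.h>
#include <string.h>

/* reg 0 = bytes 0..1, reg 1 = bytes 2..3, reg 2 = bytes 4..5. */
static void mac_to_regs(const uint8_t mac[6], uint16_t regs[3])
{
	for (int i = 0; i < 3; i++)
		regs[i] = (mac[i * 2] << 8) | mac[i * 2 + 1];
}

static void regs_to_mac(const uint16_t regs[3], uint8_t mac[6])
{
	for (int i = 0; i < 3; i++) {
		mac[i * 2] = regs[i] >> 8;
		mac[i * 2 + 1] = regs[i] & 0xff;
	}
}

int main(void)
{
	const uint8_t in[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
	uint8_t out[6];
	uint16_t regs[3];

	mac_to_regs(in, regs);
	regs_to_mac(regs, out);
	assert(memcmp(in, out, 6) == 0);
	return 0;
}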
 
-static int _mv88e6xxx_atu_load(struct dsa_switch *ds,
+static int _mv88e6xxx_atu_load(struct mv88e6xxx_priv_state *ps,
                               struct mv88e6xxx_atu_entry *entry)
 {
        int ret;
 
-       ret = _mv88e6xxx_atu_wait(ds);
+       ret = _mv88e6xxx_atu_wait(ps);
        if (ret < 0)
                return ret;
 
-       ret = _mv88e6xxx_atu_mac_write(ds, entry->mac);
+       ret = _mv88e6xxx_atu_mac_write(ps, entry->mac);
        if (ret < 0)
                return ret;
 
-       ret = _mv88e6xxx_atu_data_write(ds, entry);
+       ret = _mv88e6xxx_atu_data_write(ps, entry);
        if (ret < 0)
                return ret;
 
-       return _mv88e6xxx_atu_cmd(ds, entry->fid, GLOBAL_ATU_OP_LOAD_DB);
+       return _mv88e6xxx_atu_cmd(ps, entry->fid, GLOBAL_ATU_OP_LOAD_DB);
 }
 
-static int _mv88e6xxx_port_fdb_load(struct dsa_switch *ds, int port,
+static int _mv88e6xxx_port_fdb_load(struct mv88e6xxx_priv_state *ps, int port,
                                    const unsigned char *addr, u16 vid,
                                    u8 state)
 {
@@ -2014,9 +2256,9 @@ static int _mv88e6xxx_port_fdb_load(struct dsa_switch *ds, int port,
 
        /* Null VLAN ID corresponds to the port private database */
        if (vid == 0)
-               err = _mv88e6xxx_port_fid_get(ds, port, &vlan.fid);
+               err = _mv88e6xxx_port_fid_get(ps, port, &vlan.fid);
        else
-               err = _mv88e6xxx_vtu_get(ds, vid, &vlan, false);
+               err = _mv88e6xxx_vtu_get(ps, vid, &vlan, false);
        if (err)
                return err;
 
@@ -2028,49 +2270,60 @@ static int _mv88e6xxx_port_fdb_load(struct dsa_switch *ds, int port,
                entry.portv_trunkid = BIT(port);
        }
 
-       return _mv88e6xxx_atu_load(ds, &entry);
+       return _mv88e6xxx_atu_load(ps, &entry);
 }
 
-int mv88e6xxx_port_fdb_prepare(struct dsa_switch *ds, int port,
-                              const struct switchdev_obj_port_fdb *fdb,
-                              struct switchdev_trans *trans)
+static int mv88e6xxx_port_fdb_prepare(struct dsa_switch *ds, int port,
+                                     const struct switchdev_obj_port_fdb *fdb,
+                                     struct switchdev_trans *trans)
 {
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_ATU))
+               return -EOPNOTSUPP;
+
        /* We don't need any dynamic resource from the kernel (yet),
         * so skip the prepare phase.
         */
        return 0;
 }
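
The prepare/add split follows the switchdev two-phase model: prepare may validate or allocate and is allowed to fail, while the later commit (the add callback) has no return value and can only log. A schematic of the calling convention, with hypothetical names:

#include <stdio.h>

struct obj { int id; };

/* Phase 1: may veto the operation or reserve resources. */
static int prepare(const struct obj *o)
{
	return o->id ? 0 : -1;
}

/* Phase 2: must not fail; problems can only be reported via logging. */
static void commit(const struct obj *o)
{
	printf("committed obj %d\n", o->id);
}

int main(void)
{
	struct obj o = { .id = 7 };

	if (prepare(&o) == 0)
		commit(&o);
	return 0;
}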
 
-void mv88e6xxx_port_fdb_add(struct dsa_switch *ds, int port,
-                           const struct switchdev_obj_port_fdb *fdb,
-                           struct switchdev_trans *trans)
+static void mv88e6xxx_port_fdb_add(struct dsa_switch *ds, int port,
+                                  const struct switchdev_obj_port_fdb *fdb,
+                                  struct switchdev_trans *trans)
 {
        int state = is_multicast_ether_addr(fdb->addr) ?
                GLOBAL_ATU_DATA_STATE_MC_STATIC :
                GLOBAL_ATU_DATA_STATE_UC_STATIC;
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
 
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_ATU))
+               return;
+
        mutex_lock(&ps->smi_mutex);
-       if (_mv88e6xxx_port_fdb_load(ds, port, fdb->addr, fdb->vid, state))
+       if (_mv88e6xxx_port_fdb_load(ps, port, fdb->addr, fdb->vid, state))
                netdev_err(ds->ports[port], "failed to load MAC address\n");
        mutex_unlock(&ps->smi_mutex);
 }
 
-int mv88e6xxx_port_fdb_del(struct dsa_switch *ds, int port,
-                          const struct switchdev_obj_port_fdb *fdb)
+static int mv88e6xxx_port_fdb_del(struct dsa_switch *ds, int port,
+                                 const struct switchdev_obj_port_fdb *fdb)
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        int ret;
 
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_ATU))
+               return -EOPNOTSUPP;
+
        mutex_lock(&ps->smi_mutex);
-       ret = _mv88e6xxx_port_fdb_load(ds, port, fdb->addr, fdb->vid,
+       ret = _mv88e6xxx_port_fdb_load(ps, port, fdb->addr, fdb->vid,
                                       GLOBAL_ATU_DATA_STATE_UNUSED);
        mutex_unlock(&ps->smi_mutex);
 
        return ret;
 }
 
-static int _mv88e6xxx_atu_getnext(struct dsa_switch *ds, u16 fid,
+static int _mv88e6xxx_atu_getnext(struct mv88e6xxx_priv_state *ps, u16 fid,
                                  struct mv88e6xxx_atu_entry *entry)
 {
        struct mv88e6xxx_atu_entry next = { 0 };
@@ -2078,19 +2331,19 @@ static int _mv88e6xxx_atu_getnext(struct dsa_switch *ds, u16 fid,
 
        next.fid = fid;
 
-       ret = _mv88e6xxx_atu_wait(ds);
+       ret = _mv88e6xxx_atu_wait(ps);
        if (ret < 0)
                return ret;
 
-       ret = _mv88e6xxx_atu_cmd(ds, fid, GLOBAL_ATU_OP_GET_NEXT_DB);
+       ret = _mv88e6xxx_atu_cmd(ps, fid, GLOBAL_ATU_OP_GET_NEXT_DB);
        if (ret < 0)
                return ret;
 
-       ret = _mv88e6xxx_atu_mac_read(ds, next.mac);
+       ret = _mv88e6xxx_atu_mac_read(ps, next.mac);
        if (ret < 0)
                return ret;
 
-       ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_ATU_DATA);
+       ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_ATU_DATA);
        if (ret < 0)
                return ret;
 
@@ -2115,8 +2368,8 @@ static int _mv88e6xxx_atu_getnext(struct dsa_switch *ds, u16 fid,
        return 0;
 }
 
-static int _mv88e6xxx_port_fdb_dump_one(struct dsa_switch *ds, u16 fid, u16 vid,
-                                       int port,
+static int _mv88e6xxx_port_fdb_dump_one(struct mv88e6xxx_priv_state *ps,
+                                       u16 fid, u16 vid, int port,
                                        struct switchdev_obj_port_fdb *fdb,
                                        int (*cb)(struct switchdev_obj *obj))
 {
@@ -2125,12 +2378,12 @@ static int _mv88e6xxx_port_fdb_dump_one(struct dsa_switch *ds, u16 fid, u16 vid,
        };
        int err;
 
-       err = _mv88e6xxx_atu_mac_write(ds, addr.mac);
+       err = _mv88e6xxx_atu_mac_write(ps, addr.mac);
        if (err)
                return err;
 
        do {
-               err = _mv88e6xxx_atu_getnext(ds, fid, &addr);
+               err = _mv88e6xxx_atu_getnext(ps, fid, &addr);
                if (err)
                        break;
 
@@ -2156,9 +2409,9 @@ static int _mv88e6xxx_port_fdb_dump_one(struct dsa_switch *ds, u16 fid, u16 vid,
        return err;
 }
 
-int mv88e6xxx_port_fdb_dump(struct dsa_switch *ds, int port,
-                           struct switchdev_obj_port_fdb *fdb,
-                           int (*cb)(struct switchdev_obj *obj))
+static int mv88e6xxx_port_fdb_dump(struct dsa_switch *ds, int port,
+                                  struct switchdev_obj_port_fdb *fdb,
+                                  int (*cb)(struct switchdev_obj *obj))
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        struct mv88e6xxx_vtu_stu_entry vlan = {
@@ -2167,31 +2420,34 @@ int mv88e6xxx_port_fdb_dump(struct dsa_switch *ds, int port,
        u16 fid;
        int err;
 
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_ATU))
+               return -EOPNOTSUPP;
+
        mutex_lock(&ps->smi_mutex);
 
        /* Dump port's default Filtering Information Database (VLAN ID 0) */
-       err = _mv88e6xxx_port_fid_get(ds, port, &fid);
+       err = _mv88e6xxx_port_fid_get(ps, port, &fid);
        if (err)
                goto unlock;
 
-       err = _mv88e6xxx_port_fdb_dump_one(ds, fid, 0, port, fdb, cb);
+       err = _mv88e6xxx_port_fdb_dump_one(ps, fid, 0, port, fdb, cb);
        if (err)
                goto unlock;
 
        /* Dump VLANs' Filtering Information Databases */
-       err = _mv88e6xxx_vtu_vid_write(ds, vlan.vid);
+       err = _mv88e6xxx_vtu_vid_write(ps, vlan.vid);
        if (err)
                goto unlock;
 
        do {
-               err = _mv88e6xxx_vtu_getnext(ds, &vlan);
+               err = _mv88e6xxx_vtu_getnext(ps, &vlan);
                if (err)
                        break;
 
                if (!vlan.valid)
                        break;
 
-               err = _mv88e6xxx_port_fdb_dump_one(ds, vlan.fid, vlan.vid, port,
+               err = _mv88e6xxx_port_fdb_dump_one(ps, vlan.fid, vlan.vid, port,
                                                   fdb, cb);
                if (err)
                        break;
@@ -2203,11 +2459,14 @@ unlock:
        return err;
 }
 
-int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port,
-                              struct net_device *bridge)
+static int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port,
+                                     struct net_device *bridge)
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-       int i, err;
+       int i, err = 0;
+
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_VLANTABLE))
+               return -EOPNOTSUPP;
 
        mutex_lock(&ps->smi_mutex);
 
@@ -2216,7 +2475,7 @@ int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port,
 
        for (i = 0; i < ps->info->num_ports; ++i) {
                if (ps->ports[i].bridge_dev == bridge) {
-                       err = _mv88e6xxx_port_based_vlan_map(ds, i);
+                       err = _mv88e6xxx_port_based_vlan_map(ps, i);
                        if (err)
                                break;
                }
@@ -2227,12 +2486,15 @@ int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port,
        return err;
 }
 
-void mv88e6xxx_port_bridge_leave(struct dsa_switch *ds, int port)
+static void mv88e6xxx_port_bridge_leave(struct dsa_switch *ds, int port)
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
        struct net_device *bridge = ps->ports[port].bridge_dev;
        int i;
 
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_VLANTABLE))
+               return;
+
        mutex_lock(&ps->smi_mutex);
 
        /* Unassign the bridge and remap each port's VLANTable */
@@ -2240,7 +2502,7 @@ void mv88e6xxx_port_bridge_leave(struct dsa_switch *ds, int port)
 
        for (i = 0; i < ps->info->num_ports; ++i)
                if (i == port || ps->ports[i].bridge_dev == bridge)
-                       if (_mv88e6xxx_port_based_vlan_map(ds, i))
+                       if (_mv88e6xxx_port_based_vlan_map(ps, i))
                                netdev_warn(ds->ports[i], "failed to remap\n");
 
        mutex_unlock(&ps->smi_mutex);
@@ -2259,57 +2521,120 @@ static void mv88e6xxx_bridge_work(struct work_struct *work)
 
        for (port = 0; port < ps->info->num_ports; ++port)
                if (test_and_clear_bit(port, ps->port_state_update_mask) &&
-                   _mv88e6xxx_port_state(ds, port, ps->ports[port].state))
-                       netdev_warn(ds->ports[port], "failed to update state to %s\n",
+                   _mv88e6xxx_port_state(ps, port, ps->ports[port].state))
+                       netdev_warn(ds->ports[port],
+                                   "failed to update state to %s\n",
                                    mv88e6xxx_port_state_names[ps->ports[port].state]);
 
        mutex_unlock(&ps->smi_mutex);
 }
 
-static int _mv88e6xxx_phy_page_write(struct dsa_switch *ds, int port, int page,
-                                    int reg, int val)
+static int _mv88e6xxx_phy_page_write(struct mv88e6xxx_priv_state *ps,
+                                    int port, int page, int reg, int val)
 {
        int ret;
 
-       ret = _mv88e6xxx_phy_write_indirect(ds, port, 0x16, page);
+       ret = _mv88e6xxx_phy_write_indirect(ps, port, 0x16, page);
        if (ret < 0)
                goto restore_page_0;
 
-       ret = _mv88e6xxx_phy_write_indirect(ds, port, reg, val);
+       ret = _mv88e6xxx_phy_write_indirect(ps, port, reg, val);
 restore_page_0:
-       _mv88e6xxx_phy_write_indirect(ds, port, 0x16, 0x0);
+       _mv88e6xxx_phy_write_indirect(ps, port, 0x16, 0x0);
 
        return ret;
 }
 
-static int _mv88e6xxx_phy_page_read(struct dsa_switch *ds, int port, int page,
-                                   int reg)
+static int _mv88e6xxx_phy_page_read(struct mv88e6xxx_priv_state *ps,
+                                   int port, int page, int reg)
 {
        int ret;
 
-       ret = _mv88e6xxx_phy_write_indirect(ds, port, 0x16, page);
+       ret = _mv88e6xxx_phy_write_indirect(ps, port, 0x16, page);
        if (ret < 0)
                goto restore_page_0;
 
-       ret = _mv88e6xxx_phy_read_indirect(ds, port, reg);
+       ret = _mv88e6xxx_phy_read_indirect(ps, port, reg);
 restore_page_0:
-       _mv88e6xxx_phy_write_indirect(ds, port, 0x16, 0x0);
+       _mv88e6xxx_phy_write_indirect(ps, port, 0x16, 0x0);
+
+       return ret;
+}
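
Paged PHY access selects the page through register 0x16, performs the access, and always restores page 0 on the way out, including on error, so unpaged readers aren't left on a stale page. A self-contained sketch of the same goto-cleanup shape; phy_read()/phy_write() are stand-ins:

#include <stdio.h>

#define PHY_PAGE_REG 0x16

static int page_sel;    /* fake PHY page-select state */

static int phy_write(int reg, int val)
{
	if (reg == PHY_PAGE_REG)
		page_sel = val;
	return 0;
}

static int phy_read(int reg)
{
	return (page_sel << 8) | reg;   /* fake: encode the page in the value */
}

static int phy_page_read(int page, int reg)
{
	int ret = phy_write(PHY_PAGE_REG, page);

	if (ret < 0)
		goto restore_page_0;

	ret = phy_read(reg);
restore_page_0:
	phy_write(PHY_PAGE_REG, 0);     /* restore even on error; the
					 * original error code wins */
	return ret;
}

int main(void)
{
	printf("read: 0x%04x\n", phy_page_read(1, 2));  /* 0x0102 */
	printf("page now: %d\n", page_sel);             /* 0 */
	return 0;
}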
+
+static int mv88e6xxx_switch_reset(struct mv88e6xxx_priv_state *ps)
+{
+       bool ppu_active = mv88e6xxx_has(ps, MV88E6XXX_FLAG_PPU_ACTIVE);
+       u16 is_reset = (ppu_active ? 0x8800 : 0xc800);
+       struct gpio_desc *gpiod = ps->ds->pd->reset;
+       unsigned long timeout;
+       int ret;
+       int i;
+
+       /* Set all ports to the disabled state. */
+       for (i = 0; i < ps->info->num_ports; i++) {
+               ret = _mv88e6xxx_reg_read(ps, REG_PORT(i), PORT_CONTROL);
+               if (ret < 0)
+                       return ret;
+
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(i), PORT_CONTROL,
+                                          ret & 0xfffc);
+               if (ret)
+                       return ret;
+       }
+
+       /* Wait for transmit queues to drain. */
+       usleep_range(2000, 4000);
+
+       /* If there is a gpio connected to the reset pin, toggle it */
+       if (gpiod) {
+               gpiod_set_value_cansleep(gpiod, 1);
+               usleep_range(10000, 20000);
+               gpiod_set_value_cansleep(gpiod, 0);
+               usleep_range(10000, 20000);
+       }
+
+       /* Reset the switch. Keep the PPU active if requested. The PPU
+        * needs to be active to support indirect phy register access
+        * through global registers 0x18 and 0x19.
+        */
+       if (ppu_active)
+               ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, 0x04, 0xc000);
+       else
+               ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, 0x04, 0xc400);
+       if (ret)
+               return ret;
+
+       /* Wait up to one second for reset to complete. */
+       timeout = jiffies + 1 * HZ;
+       while (time_before(jiffies, timeout)) {
+               ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, 0x00);
+               if (ret < 0)
+                       return ret;
+
+               if ((ret & is_reset) == is_reset)
+                       break;
+               usleep_range(1000, 2000);
+       }
+       if (time_after(jiffies, timeout))
+               ret = -ETIMEDOUT;
+       else
+               ret = 0;
 
        return ret;
 }
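
The new reset helper polls global register 0x00 for the init-complete bits (0x8800 with the PPU kept active, 0xc800 otherwise) under a one-second deadline; the time_before()/time_after() pair keeps the comparison safe across jiffies wraparound. The same poll-with-deadline shape, sketched portably with a monotonic clock; check() is a stand-in for the register read:

#define _POSIX_C_SOURCE 199309L
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Stand-in for reading global register 0x00 and testing the
 * init-complete bits; here it succeeds on the third poll. */
static bool check(void)
{
	static int calls;

	return ++calls >= 3;
}

static int wait_for_reset(void)
{
	struct timespec now, deadline;

	clock_gettime(CLOCK_MONOTONIC, &deadline);
	deadline.tv_sec += 1;                   /* one-second budget */

	for (;;) {
		if (check())
			return 0;

		clock_gettime(CLOCK_MONOTONIC, &now);
		if (now.tv_sec > deadline.tv_sec ||
		    (now.tv_sec == deadline.tv_sec &&
		     now.tv_nsec >= deadline.tv_nsec))
			return -1;              /* -ETIMEDOUT */

		nanosleep(&(struct timespec){ .tv_nsec = 1000000 }, NULL);
	}
}

int main(void)
{
	printf("reset: %d\n", wait_for_reset());
	return 0;
}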
 
-static int mv88e6xxx_power_on_serdes(struct dsa_switch *ds)
+static int mv88e6xxx_power_on_serdes(struct mv88e6xxx_priv_state *ps)
 {
        int ret;
 
-       ret = _mv88e6xxx_phy_page_read(ds, REG_FIBER_SERDES, PAGE_FIBER_SERDES,
+       ret = _mv88e6xxx_phy_page_read(ps, REG_FIBER_SERDES, PAGE_FIBER_SERDES,
                                       MII_BMCR);
        if (ret < 0)
                return ret;
 
        if (ret & BMCR_PDOWN) {
                ret &= ~BMCR_PDOWN;
-               ret = _mv88e6xxx_phy_page_write(ds, REG_FIBER_SERDES,
+               ret = _mv88e6xxx_phy_page_write(ps, REG_FIBER_SERDES,
                                                PAGE_FIBER_SERDES, MII_BMCR,
                                                ret);
        }
@@ -2317,32 +2642,30 @@ static int mv88e6xxx_power_on_serdes(struct dsa_switch *ds)
        return ret;
 }
 
-static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
+static int mv88e6xxx_setup_port(struct mv88e6xxx_priv_state *ps, int port)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+       struct dsa_switch *ds = ps->ds;
        int ret;
        u16 reg;
 
-       mutex_lock(&ps->smi_mutex);
-
-       if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
-           mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
-           mv88e6xxx_6185_family(ds) || mv88e6xxx_6095_family(ds) ||
-           mv88e6xxx_6065_family(ds) || mv88e6xxx_6320_family(ds)) {
+       if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+           mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+           mv88e6xxx_6185_family(ps) || mv88e6xxx_6095_family(ps) ||
+           mv88e6xxx_6065_family(ps) || mv88e6xxx_6320_family(ps)) {
                /* MAC Forcing register: don't force link, speed,
                 * duplex or flow control state to any particular
                 * values on physical ports, but force the CPU port
                 * and all DSA ports to their maximum bandwidth and
                 * full duplex.
                 */
-               reg = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_PCS_CTRL);
+               reg = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_PCS_CTRL);
                if (dsa_is_cpu_port(ds, port) || dsa_is_dsa_port(ds, port)) {
                        reg &= ~PORT_PCS_CTRL_UNFORCED;
                        reg |= PORT_PCS_CTRL_FORCE_LINK |
                                PORT_PCS_CTRL_LINK_UP |
                                PORT_PCS_CTRL_DUPLEX_FULL |
                                PORT_PCS_CTRL_FORCE_DUPLEX;
-                       if (mv88e6xxx_6065_family(ds))
+                       if (mv88e6xxx_6065_family(ps))
                                reg |= PORT_PCS_CTRL_100;
                        else
                                reg |= PORT_PCS_CTRL_1000;
@@ -2350,10 +2673,10 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
                        reg |= PORT_PCS_CTRL_UNFORCED;
                }
 
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
                                           PORT_PCS_CTRL, reg);
                if (ret)
-                       goto abort;
+                       return ret;
        }
 
        /* Port Control: disable Drop-on-Unlock, disable Drop-on-Lock,
@@ -2371,19 +2694,19 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
         * forwarding of unknown unicasts and multicasts.
         */
        reg = 0;
-       if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
-           mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
-           mv88e6xxx_6095_family(ds) || mv88e6xxx_6065_family(ds) ||
-           mv88e6xxx_6185_family(ds) || mv88e6xxx_6320_family(ds))
+       if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+           mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+           mv88e6xxx_6095_family(ps) || mv88e6xxx_6065_family(ps) ||
+           mv88e6xxx_6185_family(ps) || mv88e6xxx_6320_family(ps))
                reg = PORT_CONTROL_IGMP_MLD_SNOOP |
                PORT_CONTROL_USE_TAG | PORT_CONTROL_USE_IP |
                PORT_CONTROL_STATE_FORWARDING;
        if (dsa_is_cpu_port(ds, port)) {
-               if (mv88e6xxx_6095_family(ds) || mv88e6xxx_6185_family(ds))
+               if (mv88e6xxx_6095_family(ps) || mv88e6xxx_6185_family(ps))
                        reg |= PORT_CONTROL_DSA_TAG;
-               if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
-                   mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
-                   mv88e6xxx_6320_family(ds)) {
+               if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+                   mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+                   mv88e6xxx_6320_family(ps)) {
                        if (ds->dst->tag_protocol == DSA_TAG_PROTO_EDSA)
                                reg |= PORT_CONTROL_FRAME_ETHER_TYPE_DSA;
                        else
@@ -2392,20 +2715,20 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
                                PORT_CONTROL_FORWARD_UNKNOWN_MC;
                }
 
-               if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
-                   mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
-                   mv88e6xxx_6095_family(ds) || mv88e6xxx_6065_family(ds) ||
-                   mv88e6xxx_6185_family(ds) || mv88e6xxx_6320_family(ds)) {
+               if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+                   mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+                   mv88e6xxx_6095_family(ps) || mv88e6xxx_6065_family(ps) ||
+                   mv88e6xxx_6185_family(ps) || mv88e6xxx_6320_family(ps)) {
                        if (ds->dst->tag_protocol == DSA_TAG_PROTO_EDSA)
                                reg |= PORT_CONTROL_EGRESS_ADD_TAG;
                }
        }
        if (dsa_is_dsa_port(ds, port)) {
-               if (mv88e6xxx_6095_family(ds) || mv88e6xxx_6185_family(ds))
+               if (mv88e6xxx_6095_family(ps) || mv88e6xxx_6185_family(ps))
                        reg |= PORT_CONTROL_DSA_TAG;
-               if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
-                   mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
-                   mv88e6xxx_6320_family(ds)) {
+               if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+                   mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+                   mv88e6xxx_6320_family(ps)) {
                        reg |= PORT_CONTROL_FRAME_MODE_DSA;
                }
 
@@ -2414,26 +2737,26 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
                                PORT_CONTROL_FORWARD_UNKNOWN_MC;
        }
        if (reg) {
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
                                           PORT_CONTROL, reg);
                if (ret)
-                       goto abort;
+                       return ret;
        }
 
        /* If this port is connected to a SerDes, make sure the SerDes is not
         * powered down.
         */
-       if (mv88e6xxx_6352_family(ds)) {
-               ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_STATUS);
+       if (mv88e6xxx_6352_family(ps)) {
+               ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_STATUS);
                if (ret < 0)
-                       goto abort;
+                       return ret;
                ret &= PORT_STATUS_CMODE_MASK;
                if ((ret == PORT_STATUS_CMODE_100BASE_X) ||
                    (ret == PORT_STATUS_CMODE_1000BASE_X) ||
                    (ret == PORT_STATUS_CMODE_SGMII)) {
-                       ret = mv88e6xxx_power_on_serdes(ds);
+                       ret = mv88e6xxx_power_on_serdes(ps);
                        if (ret < 0)
-                               goto abort;
+                               return ret;
                }
        }
 
@@ -2444,17 +2767,17 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
         * copy of all transmitted/received frames on this port to the CPU.
         */
        reg = 0;
-       if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
-           mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
-           mv88e6xxx_6095_family(ds) || mv88e6xxx_6320_family(ds) ||
-           mv88e6xxx_6185_family(ds))
+       if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+           mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+           mv88e6xxx_6095_family(ps) || mv88e6xxx_6320_family(ps) ||
+           mv88e6xxx_6185_family(ps))
                reg = PORT_CONTROL_2_MAP_DA;
 
-       if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
-           mv88e6xxx_6165_family(ds) || mv88e6xxx_6320_family(ds))
+       if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+           mv88e6xxx_6165_family(ps) || mv88e6xxx_6320_family(ps))
                reg |= PORT_CONTROL_2_JUMBO_10240;
 
-       if (mv88e6xxx_6095_family(ds) || mv88e6xxx_6185_family(ds)) {
+       if (mv88e6xxx_6095_family(ps) || mv88e6xxx_6185_family(ps)) {
                /* Set the upstream port this port should use */
                reg |= dsa_upstream_port(ds);
                /* enable forwarding of unknown multicast addresses to
@@ -2467,10 +2790,10 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
        reg |= PORT_CONTROL_2_8021Q_DISABLED;
 
        if (reg) {
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
                                           PORT_CONTROL_2, reg);
                if (ret)
-                       goto abort;
+                       return ret;
        }
 
        /* Port Association Vector: when learning source addresses
@@ -2483,369 +2806,348 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
        if (dsa_is_cpu_port(ds, port))
                reg = 0;
 
-       ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_ASSOC_VECTOR, reg);
+       ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_ASSOC_VECTOR, reg);
        if (ret)
-               goto abort;
+               return ret;
 
        /* Egress rate control 2: disable egress rate control. */
-       ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_RATE_CONTROL_2,
+       ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_RATE_CONTROL_2,
                                   0x0000);
        if (ret)
-               goto abort;
+               return ret;
 
-       if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
-           mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
-           mv88e6xxx_6320_family(ds)) {
+       if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+           mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+           mv88e6xxx_6320_family(ps)) {
                /* Do not limit the period of time that this port can
                 * be paused for by the remote end or the period of
                 * time that this port can pause the remote end.
                 */
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
                                           PORT_PAUSE_CTRL, 0x0000);
                if (ret)
-                       goto abort;
+                       return ret;
 
                /* Port ATU control: disable limiting the number of
                 * address database entries that this port is allowed
                 * to use.
                 */
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
                                           PORT_ATU_CONTROL, 0x0000);
+               if (ret)
+                       return ret;
+
                /* Priority Override: disable DA, SA and VTU priority
                 * override.
                 */
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
                                           PORT_PRI_OVERRIDE, 0x0000);
                if (ret)
-                       goto abort;
+                       return ret;
 
                /* Port Ethertype: use the Ethertype DSA Ethertype
                 * value.
                 */
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
                                           PORT_ETH_TYPE, ETH_P_EDSA);
                if (ret)
-                       goto abort;
+                       return ret;
                /* Tag Remap: use an identity 802.1p prio -> switch
                 * prio mapping.
                 */
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
                                           PORT_TAG_REGMAP_0123, 0x3210);
                if (ret)
-                       goto abort;
+                       return ret;
 
                /* Tag Remap 2: use an identity 802.1p prio -> switch
                 * prio mapping.
                 */
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
                                           PORT_TAG_REGMAP_4567, 0x7654);
                if (ret)
-                       goto abort;
+                       return ret;
        }
 
-       if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
-           mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
-           mv88e6xxx_6185_family(ds) || mv88e6xxx_6095_family(ds) ||
-           mv88e6xxx_6320_family(ds)) {
+       if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+           mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+           mv88e6xxx_6185_family(ps) || mv88e6xxx_6095_family(ps) ||
+           mv88e6xxx_6320_family(ps)) {
                /* Rate Control: disable ingress rate limiting. */
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+               ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
                                           PORT_RATE_CONTROL, 0x0001);
                if (ret)
-                       goto abort;
+                       return ret;
        }
 
        /* Port Control 1: disable trunking, disable sending
         * learning messages to this port.
         */
-       ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_CONTROL_1, 0x0000);
+       ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_CONTROL_1, 0x0000);
        if (ret)
-               goto abort;
+               return ret;
 
        /* Port based VLAN map: give each port the same default address
         * database, and allow bidirectional communication between the
         * CPU and DSA port(s), and the other ports.
         */
-       ret = _mv88e6xxx_port_fid_set(ds, port, 0);
+       ret = _mv88e6xxx_port_fid_set(ps, port, 0);
        if (ret)
-               goto abort;
+               return ret;
 
-       ret = _mv88e6xxx_port_based_vlan_map(ds, port);
+       ret = _mv88e6xxx_port_based_vlan_map(ps, port);
        if (ret)
-               goto abort;
+               return ret;
 
        /* Default VLAN ID and priority: don't set a default VLAN
         * ID, and set the default packet priority to zero.
         */
-       ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_DEFAULT_VLAN,
+       ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_DEFAULT_VLAN,
                                   0x0000);
-abort:
-       mutex_unlock(&ps->smi_mutex);
-       return ret;
-}
-
-int mv88e6xxx_setup_ports(struct dsa_switch *ds)
-{
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-       int ret;
-       int i;
+       if (ret)
+               return ret;
 
-       for (i = 0; i < ps->info->num_ports; i++) {
-               ret = mv88e6xxx_setup_port(ds, i);
-               if (ret < 0)
-                       return ret;
-       }
        return 0;
 }
 
-int mv88e6xxx_setup_common(struct dsa_switch *ds)
+static int mv88e6xxx_setup_global(struct mv88e6xxx_priv_state *ps)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+       struct dsa_switch *ds = ps->ds;
+       u32 upstream_port = dsa_upstream_port(ds);
+       u16 reg;
+       int err;
+       int i;
 
-       ps->ds = ds;
-       mutex_init(&ps->smi_mutex);
+       /* Enable the PHY Polling Unit if present, don't discard any packets,
+        * and mask all interrupt sources.
+        */
+       reg = 0;
+       if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_PPU) ||
+           mv88e6xxx_has(ps, MV88E6XXX_FLAG_PPU_ACTIVE))
+               reg |= GLOBAL_CONTROL_PPU_ENABLE;
 
-       INIT_WORK(&ps->bridge_work, mv88e6xxx_bridge_work);
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_CONTROL, reg);
+       if (err)
+               return err;
 
-       return 0;
-}
+       /* Configure the upstream port, and configure it as the port to which
+        * ingress and egress and ARP monitor frames are to be sent.
+        */
+       reg = upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
+               upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
+               upstream_port << GLOBAL_MONITOR_CONTROL_ARP_SHIFT;
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_MONITOR_CONTROL, reg);
+       if (err)
+               return err;
 
-int mv88e6xxx_setup_global(struct dsa_switch *ds)
-{
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-       int err;
-       int i;
+       /* Disable remote management, and set the switch's DSA device number. */
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_CONTROL_2,
+                                  GLOBAL_CONTROL_2_MULTIPLE_CASCADE |
+                                  (ds->index & 0x1f));
+       if (err)
+               return err;
 
-       mutex_lock(&ps->smi_mutex);
        /* Set the default address aging time to 5 minutes, and
         * enable address learn messages to be sent to all message
         * ports.
         */
-       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_ATU_CONTROL,
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_ATU_CONTROL,
                                   0x0140 | GLOBAL_ATU_CONTROL_LEARN2ALL);
        if (err)
-               goto unlock;
+               return err;
 
        /* Configure the IP ToS mapping registers. */
-       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_IP_PRI_0, 0x0000);
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_0, 0x0000);
        if (err)
-               goto unlock;
-       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_IP_PRI_1, 0x0000);
+               return err;
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_1, 0x0000);
        if (err)
-               goto unlock;
-       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_IP_PRI_2, 0x5555);
+               return err;
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_2, 0x5555);
        if (err)
-               goto unlock;
-       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_IP_PRI_3, 0x5555);
+               return err;
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_3, 0x5555);
        if (err)
-               goto unlock;
-       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_IP_PRI_4, 0xaaaa);
+               return err;
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_4, 0xaaaa);
        if (err)
-               goto unlock;
-       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_IP_PRI_5, 0xaaaa);
+               return err;
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_5, 0xaaaa);
        if (err)
-               goto unlock;
-       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_IP_PRI_6, 0xffff);
+               return err;
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_6, 0xffff);
        if (err)
-               goto unlock;
-       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_IP_PRI_7, 0xffff);
+               return err;
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_7, 0xffff);
        if (err)
-               goto unlock;
+               return err;
 
        /* Configure the IEEE 802.1p priority mapping register. */
-       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_IEEE_PRI, 0xfa41);
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IEEE_PRI, 0xfa41);
        if (err)
-               goto unlock;
+               return err;
 
        /* Send all frames with destination addresses matching
         * 01:80:c2:00:00:0x to the CPU port.
         */
-       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_MGMT_EN_0X, 0xffff);
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_MGMT_EN_0X, 0xffff);
        if (err)
-               goto unlock;
+               return err;
 
        /* Ignore removed tag data on doubly tagged packets, disable
         * flow control messages, force flow control priority to the
         * highest, and send all special multicast frames to the CPU
         * port at the highest priority.
         */
-       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_SWITCH_MGMT,
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_SWITCH_MGMT,
                                   0x7 | GLOBAL2_SWITCH_MGMT_RSVD2CPU | 0x70 |
                                   GLOBAL2_SWITCH_MGMT_FORCE_FLOW_CTRL_PRI);
        if (err)
-               goto unlock;
+               return err;
 
        /* Program the DSA routing table. */
        for (i = 0; i < 32; i++) {
                int nexthop = 0x1f;
 
-               if (ds->pd->rtable &&
-                   i != ds->index && i < ds->dst->pd->nr_chips)
-                       nexthop = ds->pd->rtable[i] & 0x1f;
+               if (ps->ds->pd->rtable &&
+                   i != ps->ds->index && i < ps->ds->dst->pd->nr_chips)
+                       nexthop = ps->ds->pd->rtable[i] & 0x1f;
 
                err = _mv88e6xxx_reg_write(
-                       ds, REG_GLOBAL2,
+                       ps, REG_GLOBAL2,
                        GLOBAL2_DEVICE_MAPPING,
                        GLOBAL2_DEVICE_MAPPING_UPDATE |
                        (i << GLOBAL2_DEVICE_MAPPING_TARGET_SHIFT) | nexthop);
                if (err)
-                       goto unlock;
+                       return err;
        }
 
        /* Clear all trunk masks. */
        for (i = 0; i < 8; i++) {
-               err = _mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_TRUNK_MASK,
+               err = _mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_TRUNK_MASK,
                                           0x8000 |
                                           (i << GLOBAL2_TRUNK_MASK_NUM_SHIFT) |
                                           ((1 << ps->info->num_ports) - 1));
                if (err)
-                       goto unlock;
+                       return err;
        }
 
        /* Clear all trunk mappings. */
        for (i = 0; i < 16; i++) {
                err = _mv88e6xxx_reg_write(
-                       ds, REG_GLOBAL2,
+                       ps, REG_GLOBAL2,
                        GLOBAL2_TRUNK_MAPPING,
                        GLOBAL2_TRUNK_MAPPING_UPDATE |
                        (i << GLOBAL2_TRUNK_MAPPING_ID_SHIFT));
                if (err)
-                       goto unlock;
+                       return err;
        }
 
-       if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
-           mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
-           mv88e6xxx_6320_family(ds)) {
+       if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+           mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+           mv88e6xxx_6320_family(ps)) {
                /* Send all frames with destination addresses matching
                 * 01:80:c2:00:00:2x to the CPU port.
                 */
-               err = _mv88e6xxx_reg_write(ds, REG_GLOBAL2,
+               err = _mv88e6xxx_reg_write(ps, REG_GLOBAL2,
                                           GLOBAL2_MGMT_EN_2X, 0xffff);
                if (err)
-                       goto unlock;
+                       return err;
 
                /* Initialise cross-chip port VLAN table to reset
                 * defaults.
                 */
-               err = _mv88e6xxx_reg_write(ds, REG_GLOBAL2,
+               err = _mv88e6xxx_reg_write(ps, REG_GLOBAL2,
                                           GLOBAL2_PVT_ADDR, 0x9000);
                if (err)
-                       goto unlock;
+                       return err;
 
                /* Clear the priority override table. */
                for (i = 0; i < 16; i++) {
-                       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL2,
+                       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL2,
                                                   GLOBAL2_PRIO_OVERRIDE,
                                                   0x8000 | (i << 8));
                        if (err)
-                               goto unlock;
+                               return err;
                }
        }
 
-       if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
-           mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
-           mv88e6xxx_6185_family(ds) || mv88e6xxx_6095_family(ds) ||
-           mv88e6xxx_6320_family(ds)) {
+       if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+           mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+           mv88e6xxx_6185_family(ps) || mv88e6xxx_6095_family(ps) ||
+           mv88e6xxx_6320_family(ps)) {
                /* Disable ingress rate limiting by resetting all
                 * ingress rate limit registers to their initial
                 * state.
                 */
                for (i = 0; i < ps->info->num_ports; i++) {
-                       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL2,
+                       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL2,
                                                   GLOBAL2_INGRESS_OP,
                                                   0x9000 | (i << 8));
                        if (err)
-                               goto unlock;
+                               return err;
                }
        }
 
        /* Clear the statistics counters for all ports */
-       err = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_STATS_OP,
+       err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_STATS_OP,
                                   GLOBAL_STATS_OP_FLUSH_ALL);
        if (err)
-               goto unlock;
+               return err;
 
        /* Wait for the flush to complete. */
-       err = _mv88e6xxx_stats_wait(ds);
-       if (err < 0)
-               goto unlock;
+       err = _mv88e6xxx_stats_wait(ps);
+       if (err)
+               return err;
 
        /* Clear all ATU entries */
-       err = _mv88e6xxx_atu_flush(ds, 0, true);
-       if (err < 0)
-               goto unlock;
+       err = _mv88e6xxx_atu_flush(ps, 0, true);
+       if (err)
+               return err;
 
        /* Clear all the VTU and STU entries */
-       err = _mv88e6xxx_vtu_stu_flush(ds);
-unlock:
-       mutex_unlock(&ps->smi_mutex);
+       err = _mv88e6xxx_vtu_stu_flush(ps);
+       if (err < 0)
+               return err;
 
        return err;
 }
 
-int mv88e6xxx_switch_reset(struct dsa_switch *ds, bool ppu_active)
+static int mv88e6xxx_setup(struct dsa_switch *ds)
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-       u16 is_reset = (ppu_active ? 0x8800 : 0xc800);
-       struct gpio_desc *gpiod = ds->pd->reset;
-       unsigned long timeout;
-       int ret;
+       int err;
        int i;
 
-       mutex_lock(&ps->smi_mutex);
+       ps->ds = ds;
 
-       /* Set all ports to the disabled state. */
-       for (i = 0; i < ps->info->num_ports; i++) {
-               ret = _mv88e6xxx_reg_read(ds, REG_PORT(i), PORT_CONTROL);
-               if (ret < 0)
-                       goto unlock;
+       mutex_init(&ps->smi_mutex);
 
-               ret = _mv88e6xxx_reg_write(ds, REG_PORT(i), PORT_CONTROL,
-                                          ret & 0xfffc);
-               if (ret)
-                       goto unlock;
-       }
+       INIT_WORK(&ps->bridge_work, mv88e6xxx_bridge_work);
 
-       /* Wait for transmit queues to drain. */
-       usleep_range(2000, 4000);
+       if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_EEPROM))
+               mutex_init(&ps->eeprom_mutex);
 
-       /* If there is a gpio connected to the reset pin, toggle it */
-       if (gpiod) {
-               gpiod_set_value_cansleep(gpiod, 1);
-               usleep_range(10000, 20000);
-               gpiod_set_value_cansleep(gpiod, 0);
-               usleep_range(10000, 20000);
-       }
+       if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_PPU))
+               mv88e6xxx_ppu_state_init(ps);
 
-       /* Reset the switch. Keep the PPU active if requested. The PPU
-        * needs to be active to support indirect phy register access
-        * through global registers 0x18 and 0x19.
-        */
-       if (ppu_active)
-               ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, 0x04, 0xc000);
-       else
-               ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, 0x04, 0xc400);
-       if (ret)
+       mutex_lock(&ps->smi_mutex);
+
+       err = mv88e6xxx_switch_reset(ps);
+       if (err)
                goto unlock;
 
-       /* Wait up to one second for reset to complete. */
-       timeout = jiffies + 1 * HZ;
-       while (time_before(jiffies, timeout)) {
-               ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, 0x00);
-               if (ret < 0)
-                       goto unlock;
+       err = mv88e6xxx_setup_global(ps);
+       if (err)
+               goto unlock;
 
-               if ((ret & is_reset) == is_reset)
-                       break;
-               usleep_range(1000, 2000);
+       for (i = 0; i < ps->info->num_ports; i++) {
+               err = mv88e6xxx_setup_port(ps, i);
+               if (err)
+                       goto unlock;
        }
-       if (time_after(jiffies, timeout))
-               ret = -ETIMEDOUT;
-       else
-               ret = 0;
+
 unlock:
        mutex_unlock(&ps->smi_mutex);
 
-       return ret;
+       return err;
 }
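
ds_to_priv(), used by every dsa_switch entry point here, is the DSA core's accessor for the driver-private area handed back from probe; in this era of the kernel it amounts to a one-liner, roughly (paraphrased from include/net/dsa.h):

    /* Roughly the accessor of the time: the probe path stores its private
     * state in ds->priv, and driver callbacks fetch it back like this.
     */
    static inline void *ds_to_priv(struct dsa_switch *ds)
    {
            return ds->priv;
    }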
 
 int mv88e6xxx_phy_page_read(struct dsa_switch *ds, int port, int page, int reg)
@@ -2854,7 +3156,7 @@ int mv88e6xxx_phy_page_read(struct dsa_switch *ds, int port, int page, int reg)
        int ret;
 
        mutex_lock(&ps->smi_mutex);
-       ret = _mv88e6xxx_phy_page_read(ds, port, page, reg);
+       ret = _mv88e6xxx_phy_page_read(ps, port, page, reg);
        mutex_unlock(&ps->smi_mutex);
 
        return ret;
@@ -2867,82 +3169,61 @@ int mv88e6xxx_phy_page_write(struct dsa_switch *ds, int port, int page,
        int ret;
 
        mutex_lock(&ps->smi_mutex);
-       ret = _mv88e6xxx_phy_page_write(ds, port, page, reg, val);
+       ret = _mv88e6xxx_phy_page_write(ps, port, page, reg, val);
        mutex_unlock(&ps->smi_mutex);
 
        return ret;
 }
 
-static int mv88e6xxx_port_to_phy_addr(struct dsa_switch *ds, int port)
+static int mv88e6xxx_port_to_phy_addr(struct mv88e6xxx_priv_state *ps,
+                                     int port)
 {
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
        if (port >= 0 && port < ps->info->num_ports)
                return port;
        return -EINVAL;
 }
 
-int
-mv88e6xxx_phy_read(struct dsa_switch *ds, int port, int regnum)
+static int mv88e6xxx_phy_read(struct dsa_switch *ds, int port, int regnum)
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-       int addr = mv88e6xxx_port_to_phy_addr(ds, port);
+       int addr = mv88e6xxx_port_to_phy_addr(ps, port);
        int ret;
 
        if (addr < 0)
-               return addr;
+               return 0xffff;
 
        mutex_lock(&ps->smi_mutex);
-       ret = _mv88e6xxx_phy_read(ds, addr, regnum);
-       mutex_unlock(&ps->smi_mutex);
-       return ret;
-}
-
-int
-mv88e6xxx_phy_write(struct dsa_switch *ds, int port, int regnum, u16 val)
-{
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-       int addr = mv88e6xxx_port_to_phy_addr(ds, port);
-       int ret;
 
-       if (addr < 0)
-               return addr;
+       if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_PPU))
+               ret = mv88e6xxx_phy_read_ppu(ps, addr, regnum);
+       else if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_SMI_PHY))
+               ret = _mv88e6xxx_phy_read_indirect(ps, addr, regnum);
+       else
+               ret = _mv88e6xxx_phy_read(ps, addr, regnum);
 
-       mutex_lock(&ps->smi_mutex);
-       ret = _mv88e6xxx_phy_write(ds, addr, regnum, val);
        mutex_unlock(&ps->smi_mutex);
        return ret;
 }
 
-int
-mv88e6xxx_phy_read_indirect(struct dsa_switch *ds, int port, int regnum)
+static int mv88e6xxx_phy_write(struct dsa_switch *ds, int port, int regnum,
+                              u16 val)
 {
        struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-       int addr = mv88e6xxx_port_to_phy_addr(ds, port);
+       int addr = mv88e6xxx_port_to_phy_addr(ps, port);
        int ret;
 
        if (addr < 0)
-               return addr;
+               return 0xffff;
 
        mutex_lock(&ps->smi_mutex);
-       ret = _mv88e6xxx_phy_read_indirect(ds, addr, regnum);
-       mutex_unlock(&ps->smi_mutex);
-       return ret;
-}
-
-int
-mv88e6xxx_phy_write_indirect(struct dsa_switch *ds, int port, int regnum,
-                            u16 val)
-{
-       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-       int addr = mv88e6xxx_port_to_phy_addr(ds, port);
-       int ret;
 
-       if (addr < 0)
-               return addr;
+       if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_PPU))
+               ret = mv88e6xxx_phy_write_ppu(ps, addr, regnum, val);
+       else if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_SMI_PHY))
+               ret = _mv88e6xxx_phy_write_indirect(ps, addr, regnum, val);
+       else
+               ret = _mv88e6xxx_phy_write(ps, addr, regnum, val);
 
-       mutex_lock(&ps->smi_mutex);
-       ret = _mv88e6xxx_phy_write_indirect(ds, addr, regnum, val);
        mutex_unlock(&ps->smi_mutex);
        return ret;
 }
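
The dispatch above picks one of three access methods per chip. The PPU branch is the subtle one: the PHY Polling Unit has to be paused around direct PHY access and re-enabled afterwards from deferred work. A conceptual sketch, assuming get/put helpers shaped like the ones this driver defines elsewhere:

    /* Conceptual sketch only; the real helpers also juggle ppu_mutex, a
     * timer and a work item so that re-enabling the PPU is batched.
     */
    static int phy_read_ppu_sketch(struct mv88e6xxx_priv_state *ps,
                                   int addr, int regnum)
    {
            int ret;

            ret = mv88e6xxx_ppu_access_get(ps);    /* pause polling */
            if (ret >= 0) {
                    ret = _mv88e6xxx_reg_read(ps, addr, regnum);
                    mv88e6xxx_ppu_access_put(ps);  /* schedule re-enable */
            }

            return ret;
    }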
@@ -2959,44 +3240,45 @@ static int mv88e61xx_get_temp(struct dsa_switch *ds, int *temp)
 
        mutex_lock(&ps->smi_mutex);
 
-       ret = _mv88e6xxx_phy_write(ds, 0x0, 0x16, 0x6);
+       ret = _mv88e6xxx_phy_write(ps, 0x0, 0x16, 0x6);
        if (ret < 0)
                goto error;
 
        /* Enable temperature sensor */
-       ret = _mv88e6xxx_phy_read(ds, 0x0, 0x1a);
+       ret = _mv88e6xxx_phy_read(ps, 0x0, 0x1a);
        if (ret < 0)
                goto error;
 
-       ret = _mv88e6xxx_phy_write(ds, 0x0, 0x1a, ret | (1 << 5));
+       ret = _mv88e6xxx_phy_write(ps, 0x0, 0x1a, ret | (1 << 5));
        if (ret < 0)
                goto error;
 
        /* Wait for temperature to stabilize */
        usleep_range(10000, 12000);
 
-       val = _mv88e6xxx_phy_read(ds, 0x0, 0x1a);
+       val = _mv88e6xxx_phy_read(ps, 0x0, 0x1a);
        if (val < 0) {
                ret = val;
                goto error;
        }
 
        /* Disable temperature sensor */
-       ret = _mv88e6xxx_phy_write(ds, 0x0, 0x1a, ret & ~(1 << 5));
+       ret = _mv88e6xxx_phy_write(ps, 0x0, 0x1a, ret & ~(1 << 5));
        if (ret < 0)
                goto error;
 
        *temp = ((val & 0x1f) - 5) * 5;
 
 error:
-       _mv88e6xxx_phy_write(ds, 0x0, 0x16, 0x0);
+       _mv88e6xxx_phy_write(ps, 0x0, 0x16, 0x0);
        mutex_unlock(&ps->smi_mutex);
        return ret;
 }
 
 static int mv88e63xx_get_temp(struct dsa_switch *ds, int *temp)
 {
-       int phy = mv88e6xxx_6320_family(ds) ? 3 : 0;
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+       int phy = mv88e6xxx_6320_family(ps) ? 3 : 0;
        int ret;
 
        *temp = 0;
@@ -3010,20 +3292,26 @@ static int mv88e63xx_get_temp(struct dsa_switch *ds, int *temp)
        return 0;
 }
 
-int mv88e6xxx_get_temp(struct dsa_switch *ds, int *temp)
+static int mv88e6xxx_get_temp(struct dsa_switch *ds, int *temp)
 {
-       if (mv88e6xxx_6320_family(ds) || mv88e6xxx_6352_family(ds))
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_TEMP))
+               return -EOPNOTSUPP;
+
+       if (mv88e6xxx_6320_family(ps) || mv88e6xxx_6352_family(ps))
                return mv88e63xx_get_temp(ds, temp);
 
        return mv88e61xx_get_temp(ds, temp);
 }
 
-int mv88e6xxx_get_temp_limit(struct dsa_switch *ds, int *temp)
+static int mv88e6xxx_get_temp_limit(struct dsa_switch *ds, int *temp)
 {
-       int phy = mv88e6xxx_6320_family(ds) ? 3 : 0;
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+       int phy = mv88e6xxx_6320_family(ps) ? 3 : 0;
        int ret;
 
-       if (!mv88e6xxx_6320_family(ds) && !mv88e6xxx_6352_family(ds))
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_TEMP_LIMIT))
                return -EOPNOTSUPP;
 
        *temp = 0;
@@ -3037,12 +3325,13 @@ int mv88e6xxx_get_temp_limit(struct dsa_switch *ds, int *temp)
        return 0;
 }
 
-int mv88e6xxx_set_temp_limit(struct dsa_switch *ds, int temp)
+static int mv88e6xxx_set_temp_limit(struct dsa_switch *ds, int temp)
 {
-       int phy = mv88e6xxx_6320_family(ds) ? 3 : 0;
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+       int phy = mv88e6xxx_6320_family(ps) ? 3 : 0;
        int ret;
 
-       if (!mv88e6xxx_6320_family(ds) && !mv88e6xxx_6352_family(ds))
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_TEMP_LIMIT))
                return -EOPNOTSUPP;
 
        ret = mv88e6xxx_phy_page_read(ds, phy, 6, 26);
@@ -3053,12 +3342,13 @@ int mv88e6xxx_set_temp_limit(struct dsa_switch *ds, int temp)
                                        (ret & 0xe0ff) | (temp << 8));
 }
 
-int mv88e6xxx_get_temp_alarm(struct dsa_switch *ds, bool *alarm)
+static int mv88e6xxx_get_temp_alarm(struct dsa_switch *ds, bool *alarm)
 {
-       int phy = mv88e6xxx_6320_family(ds) ? 3 : 0;
+       struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+       int phy = mv88e6xxx_6320_family(ps) ? 3 : 0;
        int ret;
 
-       if (!mv88e6xxx_6320_family(ds) && !mv88e6xxx_6352_family(ds))
+       if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_TEMP_LIMIT))
                return -EOPNOTSUPP;
 
        *alarm = false;
@@ -3073,6 +3363,161 @@ int mv88e6xxx_get_temp_alarm(struct dsa_switch *ds, bool *alarm)
 }
 #endif /* CONFIG_NET_DSA_HWMON */
 
+static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+       [MV88E6085] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6085,
+               .family = MV88E6XXX_FAMILY_6097,
+               .name = "Marvell 88E6085",
+               .num_databases = 4096,
+               .num_ports = 10,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6097,
+       },
+
+       [MV88E6095] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6095,
+               .family = MV88E6XXX_FAMILY_6095,
+               .name = "Marvell 88E6095/88E6095F",
+               .num_databases = 256,
+               .num_ports = 11,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6095,
+       },
+
+       [MV88E6123] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6123,
+               .family = MV88E6XXX_FAMILY_6165,
+               .name = "Marvell 88E6123",
+               .num_databases = 4096,
+               .num_ports = 3,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6165,
+       },
+
+       [MV88E6131] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6131,
+               .family = MV88E6XXX_FAMILY_6185,
+               .name = "Marvell 88E6131",
+               .num_databases = 256,
+               .num_ports = 8,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6185,
+       },
+
+       [MV88E6161] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6161,
+               .family = MV88E6XXX_FAMILY_6165,
+               .name = "Marvell 88E6161",
+               .num_databases = 4096,
+               .num_ports = 6,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6165,
+       },
+
+       [MV88E6165] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6165,
+               .family = MV88E6XXX_FAMILY_6165,
+               .name = "Marvell 88E6165",
+               .num_databases = 4096,
+               .num_ports = 6,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6165,
+       },
+
+       [MV88E6171] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6171,
+               .family = MV88E6XXX_FAMILY_6351,
+               .name = "Marvell 88E6171",
+               .num_databases = 4096,
+               .num_ports = 7,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6351,
+       },
+
+       [MV88E6172] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6172,
+               .family = MV88E6XXX_FAMILY_6352,
+               .name = "Marvell 88E6172",
+               .num_databases = 4096,
+               .num_ports = 7,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6352,
+       },
+
+       [MV88E6175] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6175,
+               .family = MV88E6XXX_FAMILY_6351,
+               .name = "Marvell 88E6175",
+               .num_databases = 4096,
+               .num_ports = 7,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6351,
+       },
+
+       [MV88E6176] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6176,
+               .family = MV88E6XXX_FAMILY_6352,
+               .name = "Marvell 88E6176",
+               .num_databases = 4096,
+               .num_ports = 7,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6352,
+       },
+
+       [MV88E6185] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6185,
+               .family = MV88E6XXX_FAMILY_6185,
+               .name = "Marvell 88E6185",
+               .num_databases = 256,
+               .num_ports = 10,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6185,
+       },
+
+       [MV88E6240] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6240,
+               .family = MV88E6XXX_FAMILY_6352,
+               .name = "Marvell 88E6240",
+               .num_databases = 4096,
+               .num_ports = 7,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6352,
+       },
+
+       [MV88E6320] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6320,
+               .family = MV88E6XXX_FAMILY_6320,
+               .name = "Marvell 88E6320",
+               .num_databases = 4096,
+               .num_ports = 7,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6320,
+       },
+
+       [MV88E6321] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6321,
+               .family = MV88E6XXX_FAMILY_6320,
+               .name = "Marvell 88E6321",
+               .num_databases = 4096,
+               .num_ports = 7,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6320,
+       },
+
+       [MV88E6350] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6350,
+               .family = MV88E6XXX_FAMILY_6351,
+               .name = "Marvell 88E6350",
+               .num_databases = 4096,
+               .num_ports = 7,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6351,
+       },
+
+       [MV88E6351] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6351,
+               .family = MV88E6XXX_FAMILY_6351,
+               .name = "Marvell 88E6351",
+               .num_databases = 4096,
+               .num_ports = 7,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6351,
+       },
+
+       [MV88E6352] = {
+               .prod_num = PORT_SWITCH_ID_PROD_NUM_6352,
+               .family = MV88E6XXX_FAMILY_6352,
+               .name = "Marvell 88E6352",
+               .num_databases = 4096,
+               .num_ports = 7,
+               .flags = MV88E6XXX_FLAGS_FAMILY_6352,
+       },
+};
+
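
The table is indexed by enum mv88e6xxx_model through designated initializers, but probing matches on the product number read from the switch, so lookup is a linear scan. The body of mv88e6xxx_lookup_info() is elided by the hunk below; it is presumably along these lines:

    /* Sketch of the elided loop: scan the table for a matching product
     * number; NULL means "not a chip this driver knows about".
     */
    for (i = 0; i < num; ++i)
            if (table[i].prod_num == prod_num)
                    return &table[i];

    return NULL;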
 static const struct mv88e6xxx_info *
 mv88e6xxx_lookup_info(unsigned int prod_num, const struct mv88e6xxx_info *table,
                      unsigned int num)
@@ -3086,10 +3531,9 @@ mv88e6xxx_lookup_info(unsigned int prod_num, const struct mv88e6xxx_info *table,
        return NULL;
 }
 
-const char *mv88e6xxx_drv_probe(struct device *dsa_dev, struct device *host_dev,
-                               int sw_addr, void **priv,
-                               const struct mv88e6xxx_info *table,
-                               unsigned int num)
+static const char *mv88e6xxx_probe(struct device *dsa_dev,
+                                  struct device *host_dev, int sw_addr,
+                                  void **priv)
 {
        const struct mv88e6xxx_info *info;
        struct mv88e6xxx_priv_state *ps;
@@ -3108,7 +3552,8 @@ const char *mv88e6xxx_drv_probe(struct device *dsa_dev, struct device *host_dev,
        prod_num = (id & 0xfff0) >> 4;
        rev = id & 0x000f;
 
-       info = mv88e6xxx_lookup_info(prod_num, table, num);
+       info = mv88e6xxx_lookup_info(prod_num, mv88e6xxx_table,
+                                    ARRAY_SIZE(mv88e6xxx_table));
        if (!info)
                return NULL;
 
@@ -3130,41 +3575,73 @@ const char *mv88e6xxx_drv_probe(struct device *dsa_dev, struct device *host_dev,
        return name;
 }
 
+struct dsa_switch_driver mv88e6xxx_switch_driver = {
+       .tag_protocol           = DSA_TAG_PROTO_EDSA,
+       .probe                  = mv88e6xxx_probe,
+       .setup                  = mv88e6xxx_setup,
+       .set_addr               = mv88e6xxx_set_addr,
+       .phy_read               = mv88e6xxx_phy_read,
+       .phy_write              = mv88e6xxx_phy_write,
+       .adjust_link            = mv88e6xxx_adjust_link,
+       .get_strings            = mv88e6xxx_get_strings,
+       .get_ethtool_stats      = mv88e6xxx_get_ethtool_stats,
+       .get_sset_count         = mv88e6xxx_get_sset_count,
+       .set_eee                = mv88e6xxx_set_eee,
+       .get_eee                = mv88e6xxx_get_eee,
+#ifdef CONFIG_NET_DSA_HWMON
+       .get_temp               = mv88e6xxx_get_temp,
+       .get_temp_limit         = mv88e6xxx_get_temp_limit,
+       .set_temp_limit         = mv88e6xxx_set_temp_limit,
+       .get_temp_alarm         = mv88e6xxx_get_temp_alarm,
+#endif
+       .get_eeprom             = mv88e6xxx_get_eeprom,
+       .set_eeprom             = mv88e6xxx_set_eeprom,
+       .get_regs_len           = mv88e6xxx_get_regs_len,
+       .get_regs               = mv88e6xxx_get_regs,
+       .port_bridge_join       = mv88e6xxx_port_bridge_join,
+       .port_bridge_leave      = mv88e6xxx_port_bridge_leave,
+       .port_stp_state_set     = mv88e6xxx_port_stp_state_set,
+       .port_vlan_filtering    = mv88e6xxx_port_vlan_filtering,
+       .port_vlan_prepare      = mv88e6xxx_port_vlan_prepare,
+       .port_vlan_add          = mv88e6xxx_port_vlan_add,
+       .port_vlan_del          = mv88e6xxx_port_vlan_del,
+       .port_vlan_dump         = mv88e6xxx_port_vlan_dump,
+       .port_fdb_prepare       = mv88e6xxx_port_fdb_prepare,
+       .port_fdb_add           = mv88e6xxx_port_fdb_add,
+       .port_fdb_del           = mv88e6xxx_port_fdb_del,
+       .port_fdb_dump          = mv88e6xxx_port_fdb_dump,
+};
+
 static int __init mv88e6xxx_init(void)
 {
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6131)
-       register_switch_driver(&mv88e6131_switch_driver);
-#endif
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6123)
-       register_switch_driver(&mv88e6123_switch_driver);
-#endif
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6352)
-       register_switch_driver(&mv88e6352_switch_driver);
-#endif
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6171)
-       register_switch_driver(&mv88e6171_switch_driver);
-#endif
+       register_switch_driver(&mv88e6xxx_switch_driver);
+
        return 0;
 }
 module_init(mv88e6xxx_init);
 
 static void __exit mv88e6xxx_cleanup(void)
 {
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6171)
-       unregister_switch_driver(&mv88e6171_switch_driver);
-#endif
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6352)
-       unregister_switch_driver(&mv88e6352_switch_driver);
-#endif
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6123)
-       unregister_switch_driver(&mv88e6123_switch_driver);
-#endif
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6131)
-       unregister_switch_driver(&mv88e6131_switch_driver);
-#endif
+       unregister_switch_driver(&mv88e6xxx_switch_driver);
 }
 module_exit(mv88e6xxx_cleanup);
 
+MODULE_ALIAS("platform:mv88e6085");
+MODULE_ALIAS("platform:mv88e6095");
+MODULE_ALIAS("platform:mv88e6095f");
+MODULE_ALIAS("platform:mv88e6123");
+MODULE_ALIAS("platform:mv88e6131");
+MODULE_ALIAS("platform:mv88e6161");
+MODULE_ALIAS("platform:mv88e6165");
+MODULE_ALIAS("platform:mv88e6171");
+MODULE_ALIAS("platform:mv88e6172");
+MODULE_ALIAS("platform:mv88e6175");
+MODULE_ALIAS("platform:mv88e6176");
+MODULE_ALIAS("platform:mv88e6320");
+MODULE_ALIAS("platform:mv88e6321");
+MODULE_ALIAS("platform:mv88e6350");
+MODULE_ALIAS("platform:mv88e6351");
+MODULE_ALIAS("platform:mv88e6352");
 MODULE_AUTHOR("Lennert Buytenhek <buytenh@wantstofly.org>");
 MODULE_DESCRIPTION("Driver for Marvell 88E6XXX ethernet switch chips");
 MODULE_LICENSE("GPL");
index 0dbe2d1..ca69a93 100644
 
 #define MV88E6XXX_N_FID                4096
 
+/* List of supported models */
+enum mv88e6xxx_model {
+       MV88E6085,
+       MV88E6095,
+       MV88E6123,
+       MV88E6131,
+       MV88E6161,
+       MV88E6165,
+       MV88E6171,
+       MV88E6172,
+       MV88E6175,
+       MV88E6176,
+       MV88E6185,
+       MV88E6240,
+       MV88E6320,
+       MV88E6321,
+       MV88E6350,
+       MV88E6351,
+       MV88E6352,
+};
+
 enum mv88e6xxx_family {
        MV88E6XXX_FAMILY_NONE,
        MV88E6XXX_FAMILY_6065,  /* 6031 6035 6061 6065 */
@@ -350,12 +371,142 @@ enum mv88e6xxx_family {
        MV88E6XXX_FAMILY_6352,  /* 6172 6176 6240 6352 */
 };
 
+enum mv88e6xxx_cap {
+       /* Address Translation Unit.
+        * The ATU is used to lookup and learn MAC addresses. See GLOBAL_ATU_OP.
+        */
+       MV88E6XXX_CAP_ATU,
+
+       /* Energy Efficient Ethernet.
+        */
+       MV88E6XXX_CAP_EEE,
+
+       /* EEPROM Command and Data registers.
+        * See GLOBAL2_EEPROM_OP and GLOBAL2_EEPROM_DATA.
+        */
+       MV88E6XXX_CAP_EEPROM,
+
+       /* Port State Filtering for 802.1D Spanning Tree.
+        * See PORT_CONTROL_STATE_* values in the PORT_CONTROL register.
+        */
+       MV88E6XXX_CAP_PORTSTATE,
+
+       /* PHY Polling Unit.
+        * See GLOBAL_CONTROL_PPU_ENABLE and GLOBAL_STATUS_PPU_POLLING.
+        */
+       MV88E6XXX_CAP_PPU,
+       MV88E6XXX_CAP_PPU_ACTIVE,
+
+       /* SMI PHY Command and Data registers.
+        * This requires an indirect access to PHY registers through
+        * GLOBAL2_SMI_OP, otherwise direct access to PHY registers is done.
+        */
+       MV88E6XXX_CAP_SMI_PHY,
+
+       /* Switch MAC/WoL/WoF register.
+        * This requires an indirect access to set the switch MAC address
+        * through GLOBAL2_SWITCH_MAC, otherwise GLOBAL_MAC_01, GLOBAL_MAC_23,
+        * and GLOBAL_MAC_45 are used with a direct access.
+        */
+       MV88E6XXX_CAP_SWITCH_MAC_WOL_WOF,
+
+       /* Internal temperature sensor.
+        * Available from any enabled port's PHY register 26, page 6.
+        */
+       MV88E6XXX_CAP_TEMP,
+       MV88E6XXX_CAP_TEMP_LIMIT,
+
+       /* In-chip Port Based VLANs.
+        * Each port VLANTable register (see PORT_BASE_VLAN) is used to restrict
+        * the output (or egress) ports to which it is allowed to send frames.
+        */
+       MV88E6XXX_CAP_VLANTABLE,
+
+       /* VLAN Table Unit.
+        * The VTU is used to program 802.1Q VLANs. See GLOBAL_VTU_OP.
+        */
+       MV88E6XXX_CAP_VTU,
+};
+
+/* Bitmask of capabilities */
+#define MV88E6XXX_FLAG_ATU             BIT(MV88E6XXX_CAP_ATU)
+#define MV88E6XXX_FLAG_EEE             BIT(MV88E6XXX_CAP_EEE)
+#define MV88E6XXX_FLAG_EEPROM          BIT(MV88E6XXX_CAP_EEPROM)
+#define MV88E6XXX_FLAG_PORTSTATE       BIT(MV88E6XXX_CAP_PORTSTATE)
+#define MV88E6XXX_FLAG_PPU             BIT(MV88E6XXX_CAP_PPU)
+#define MV88E6XXX_FLAG_PPU_ACTIVE      BIT(MV88E6XXX_CAP_PPU_ACTIVE)
+#define MV88E6XXX_FLAG_SMI_PHY         BIT(MV88E6XXX_CAP_SMI_PHY)
+#define MV88E6XXX_FLAG_SWITCH_MAC      BIT(MV88E6XXX_CAP_SWITCH_MAC_WOL_WOF)
+#define MV88E6XXX_FLAG_TEMP            BIT(MV88E6XXX_CAP_TEMP)
+#define MV88E6XXX_FLAG_TEMP_LIMIT      BIT(MV88E6XXX_CAP_TEMP_LIMIT)
+#define MV88E6XXX_FLAG_VLANTABLE       BIT(MV88E6XXX_CAP_VLANTABLE)
+#define MV88E6XXX_FLAG_VTU             BIT(MV88E6XXX_CAP_VTU)
+
+#define MV88E6XXX_FLAGS_FAMILY_6095    \
+       (MV88E6XXX_FLAG_ATU |           \
+        MV88E6XXX_FLAG_PPU |           \
+        MV88E6XXX_FLAG_VLANTABLE |     \
+        MV88E6XXX_FLAG_VTU)
+
+#define MV88E6XXX_FLAGS_FAMILY_6097    \
+       (MV88E6XXX_FLAG_ATU |           \
+        MV88E6XXX_FLAG_PPU |           \
+        MV88E6XXX_FLAG_VLANTABLE |     \
+        MV88E6XXX_FLAG_VTU)
+
+#define MV88E6XXX_FLAGS_FAMILY_6165    \
+       (MV88E6XXX_FLAG_SWITCH_MAC |    \
+        MV88E6XXX_FLAG_TEMP)
+
+#define MV88E6XXX_FLAGS_FAMILY_6185    \
+       (MV88E6XXX_FLAG_ATU |           \
+        MV88E6XXX_FLAG_PPU |           \
+        MV88E6XXX_FLAG_VLANTABLE |     \
+        MV88E6XXX_FLAG_VTU)
+
+#define MV88E6XXX_FLAGS_FAMILY_6320    \
+       (MV88E6XXX_FLAG_ATU |           \
+        MV88E6XXX_FLAG_EEE |           \
+        MV88E6XXX_FLAG_EEPROM |        \
+        MV88E6XXX_FLAG_PORTSTATE |     \
+        MV88E6XXX_FLAG_PPU_ACTIVE |    \
+        MV88E6XXX_FLAG_SMI_PHY |       \
+        MV88E6XXX_FLAG_SWITCH_MAC |    \
+        MV88E6XXX_FLAG_TEMP |          \
+        MV88E6XXX_FLAG_TEMP_LIMIT |    \
+        MV88E6XXX_FLAG_VLANTABLE |     \
+        MV88E6XXX_FLAG_VTU)
+
+#define MV88E6XXX_FLAGS_FAMILY_6351    \
+       (MV88E6XXX_FLAG_ATU |           \
+        MV88E6XXX_FLAG_PORTSTATE |     \
+        MV88E6XXX_FLAG_PPU_ACTIVE |    \
+        MV88E6XXX_FLAG_SMI_PHY |       \
+        MV88E6XXX_FLAG_SWITCH_MAC |    \
+        MV88E6XXX_FLAG_TEMP |          \
+        MV88E6XXX_FLAG_VLANTABLE |     \
+        MV88E6XXX_FLAG_VTU)
+
+#define MV88E6XXX_FLAGS_FAMILY_6352    \
+       (MV88E6XXX_FLAG_ATU |           \
+        MV88E6XXX_FLAG_EEE |           \
+        MV88E6XXX_FLAG_EEPROM |        \
+        MV88E6XXX_FLAG_PORTSTATE |     \
+        MV88E6XXX_FLAG_PPU_ACTIVE |    \
+        MV88E6XXX_FLAG_SMI_PHY |       \
+        MV88E6XXX_FLAG_SWITCH_MAC |    \
+        MV88E6XXX_FLAG_TEMP |          \
+        MV88E6XXX_FLAG_TEMP_LIMIT |    \
+        MV88E6XXX_FLAG_VLANTABLE |     \
+        MV88E6XXX_FLAG_VTU)
+
 struct mv88e6xxx_info {
        enum mv88e6xxx_family family;
        u16 prod_num;
        const char *name;
        unsigned int num_databases;
        unsigned int num_ports;
+       unsigned long flags;
 };
 
 struct mv88e6xxx_atu_entry {
@@ -388,6 +539,9 @@ struct mv88e6xxx_priv_state {
        /* The dsa_switch this private structure is related to */
        struct dsa_switch *ds;
 
+       /* The device this structure is associated to */
+       struct device *dev;
+
        /* When using multi-chip addressing, this mutex protects
         * access to the indirect access registers.  (In single-chip
         * mode, this mutex is effectively useless.)
@@ -400,7 +554,6 @@ struct mv88e6xxx_priv_state {
        struct mii_bus *bus;
        int sw_addr;
 
-#ifdef CONFIG_NET_DSA_MV88E6XXX_NEED_PPU
        /* Handles automatic disabling and re-enabling of the PHY
         * polling unit.
         */
@@ -408,7 +561,6 @@ struct mv88e6xxx_priv_state {
        int                     ppu_disabled;
        struct work_struct      ppu_work;
        struct timer_list       ppu_timer;
-#endif
 
        /* This mutex serialises access to the statistics unit.
         * Hold this mutex over snapshot + dump sequences.
@@ -446,85 +598,10 @@ struct mv88e6xxx_hw_stat {
        enum stat_type type;
 };
 
-int mv88e6xxx_switch_reset(struct dsa_switch *ds, bool ppu_active);
-const char *mv88e6xxx_drv_probe(struct device *dsa_dev, struct device *host_dev,
-                               int sw_addr, void **priv,
-                               const struct mv88e6xxx_info *table,
-                               unsigned int num);
-
-int mv88e6xxx_setup_ports(struct dsa_switch *ds);
-int mv88e6xxx_setup_common(struct dsa_switch *ds);
-int mv88e6xxx_setup_global(struct dsa_switch *ds);
-int mv88e6xxx_reg_read(struct dsa_switch *ds, int addr, int reg);
-int mv88e6xxx_reg_write(struct dsa_switch *ds, int addr, int reg, u16 val);
-int mv88e6xxx_set_addr_direct(struct dsa_switch *ds, u8 *addr);
-int mv88e6xxx_set_addr_indirect(struct dsa_switch *ds, u8 *addr);
-int mv88e6xxx_phy_read(struct dsa_switch *ds, int port, int regnum);
-int mv88e6xxx_phy_write(struct dsa_switch *ds, int port, int regnum, u16 val);
-int mv88e6xxx_phy_read_indirect(struct dsa_switch *ds, int port, int regnum);
-int mv88e6xxx_phy_write_indirect(struct dsa_switch *ds, int port, int regnum,
-                                u16 val);
-void mv88e6xxx_ppu_state_init(struct dsa_switch *ds);
-int mv88e6xxx_phy_read_ppu(struct dsa_switch *ds, int addr, int regnum);
-int mv88e6xxx_phy_write_ppu(struct dsa_switch *ds, int addr,
-                           int regnum, u16 val);
-void mv88e6xxx_get_strings(struct dsa_switch *ds, int port, uint8_t *data);
-void mv88e6xxx_get_ethtool_stats(struct dsa_switch *ds, int port,
-                                uint64_t *data);
-int mv88e6xxx_get_sset_count(struct dsa_switch *ds);
-int mv88e6xxx_get_sset_count_basic(struct dsa_switch *ds);
-void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
-                          struct phy_device *phydev);
-int mv88e6xxx_get_regs_len(struct dsa_switch *ds, int port);
-void mv88e6xxx_get_regs(struct dsa_switch *ds, int port,
-                       struct ethtool_regs *regs, void *_p);
-int mv88e6xxx_get_temp(struct dsa_switch *ds, int *temp);
-int mv88e6xxx_get_temp_limit(struct dsa_switch *ds, int *temp);
-int mv88e6xxx_set_temp_limit(struct dsa_switch *ds, int temp);
-int mv88e6xxx_get_temp_alarm(struct dsa_switch *ds, bool *alarm);
-int mv88e6xxx_eeprom_load_wait(struct dsa_switch *ds);
-int mv88e6xxx_eeprom_busy_wait(struct dsa_switch *ds);
-int mv88e6xxx_phy_read_indirect(struct dsa_switch *ds, int addr, int regnum);
-int mv88e6xxx_phy_write_indirect(struct dsa_switch *ds, int addr, int regnum,
-                                u16 val);
-int mv88e6xxx_get_eee(struct dsa_switch *ds, int port, struct ethtool_eee *e);
-int mv88e6xxx_set_eee(struct dsa_switch *ds, int port,
-                     struct phy_device *phydev, struct ethtool_eee *e);
-int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port,
-                              struct net_device *bridge);
-void mv88e6xxx_port_bridge_leave(struct dsa_switch *ds, int port);
-void mv88e6xxx_port_stp_state_set(struct dsa_switch *ds, int port, u8 state);
-int mv88e6xxx_port_vlan_filtering(struct dsa_switch *ds, int port,
-                                 bool vlan_filtering);
-int mv88e6xxx_port_vlan_prepare(struct dsa_switch *ds, int port,
-                               const struct switchdev_obj_port_vlan *vlan,
-                               struct switchdev_trans *trans);
-void mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port,
-                            const struct switchdev_obj_port_vlan *vlan,
-                            struct switchdev_trans *trans);
-int mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port,
-                           const struct switchdev_obj_port_vlan *vlan);
-int mv88e6xxx_port_vlan_dump(struct dsa_switch *ds, int port,
-                            struct switchdev_obj_port_vlan *vlan,
-                            int (*cb)(struct switchdev_obj *obj));
-int mv88e6xxx_port_fdb_prepare(struct dsa_switch *ds, int port,
-                              const struct switchdev_obj_port_fdb *fdb,
-                              struct switchdev_trans *trans);
-void mv88e6xxx_port_fdb_add(struct dsa_switch *ds, int port,
-                           const struct switchdev_obj_port_fdb *fdb,
-                           struct switchdev_trans *trans);
-int mv88e6xxx_port_fdb_del(struct dsa_switch *ds, int port,
-                          const struct switchdev_obj_port_fdb *fdb);
-int mv88e6xxx_port_fdb_dump(struct dsa_switch *ds, int port,
-                           struct switchdev_obj_port_fdb *fdb,
-                           int (*cb)(struct switchdev_obj *obj));
-int mv88e6xxx_phy_page_read(struct dsa_switch *ds, int port, int page, int reg);
-int mv88e6xxx_phy_page_write(struct dsa_switch *ds, int port, int page,
-                            int reg, int val);
-
-extern struct dsa_switch_driver mv88e6131_switch_driver;
-extern struct dsa_switch_driver mv88e6123_switch_driver;
-extern struct dsa_switch_driver mv88e6352_switch_driver;
-extern struct dsa_switch_driver mv88e6171_switch_driver;
+static inline bool mv88e6xxx_has(struct mv88e6xxx_priv_state *ps,
+                                unsigned long flags)
+{
+       return (ps->info->flags & flags) == flags;
+}
 
 #endif
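
Note the `(ps->info->flags & flags) == flags` test: when passed an OR of several MV88E6XXX_FLAG_* values it answers "does this chip have all of them", not "any of them". For illustration:

    /* Illustrative only: true for 6320/6352-family parts, whose family
     * masks set both flags, but false for e.g. the 6165 family, which
     * has TEMP without TEMP_LIMIT.
     */
    if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_TEMP | MV88E6XXX_FLAG_TEMP_LIMIT))
            setup_temp_limit(ps);    /* hypothetical consumer */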
index 7677c74..91ada52 100644
@@ -699,7 +699,7 @@ el3_tx_timeout (struct net_device *dev)
                dev->name, inb(ioaddr + TX_STATUS), inw(ioaddr + EL3_STATUS),
                inw(ioaddr + TX_FREE));
        dev->stats.tx_errors++;
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        /* Issue TX_RESET and TX_START commands. */
        outw(TxReset, ioaddr + EL3_CMD);
        outw(TxEnable, ioaddr + EL3_CMD);
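
From here on the merge repeats one mechanical substitution across many drivers: open-coded `dev->trans_start = jiffies;` becomes netif_trans_update(dev). The helper in this kernel is approximately the following (paraphrased from include/linux/netdevice.h):

    /* Approximate helper of the era: stamp "last transmit" on queue 0 so
     * the TX watchdog measures timeouts from now; the test avoids
     * dirtying the cache line when jiffies hasn't moved.
     */
    static inline void netif_trans_update(struct net_device *dev)
    {
            struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

            if (txq->trans_start != jiffies)
                    txq->trans_start = jiffies;
    }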
index 942fb0d..b26e038 100644
@@ -992,7 +992,7 @@ static void corkscrew_timeout(struct net_device *dev)
                if (!(inw(ioaddr + EL3_STATUS) & CmdInProgress))
                        break;
        outw(TxEnable, ioaddr + EL3_CMD);
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        dev->stats.tx_errors++;
        dev->stats.tx_dropped++;
        netif_wake_queue(dev);
index b9948f0..b88afd7 100644
@@ -700,7 +700,7 @@ static void el3_tx_timeout(struct net_device *dev)
        netdev_notice(dev, "Transmit timed out!\n");
        dump_status(dev);
        dev->stats.tx_errors++;
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        /* Issue TX_RESET and TX_START commands. */
        tc574_wait_for_completion(dev, TxReset);
        outw(TxEnable, ioaddr + EL3_CMD);
index c5a3205..71396e4 100644
@@ -534,7 +534,7 @@ static void el3_tx_timeout(struct net_device *dev)
        netdev_warn(dev, "Transmit timed out!\n");
        dump_status(dev);
        dev->stats.tx_errors++;
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        /* Issue TX_RESET and TX_START commands. */
        tc589_wait_for_completion(dev, TxReset);
        outw(TxEnable, ioaddr + EL3_CMD);
index d81fced..25c55ab 100644
@@ -1944,7 +1944,7 @@ static void vortex_tx_timeout(struct net_device *dev)
        }
        /* Issue Tx Enable */
        iowrite16(TxEnable, ioaddr + EL3_CMD);
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
 }
 
 /*
index ec6eac1..4ea717d 100644
@@ -1041,7 +1041,7 @@ static netdev_tx_t axnet_start_xmit(struct sk_buff *skb,
        {
                ei_local->txing = 1;
                NS8390_trigger_send(dev, send_length, output_page);
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
                if (output_page == ei_local->tx_start_page) 
                {
                        ei_local->tx1 = -1;
@@ -1270,7 +1270,7 @@ static void ei_tx_intr(struct net_device *dev)
                {
                        ei_local->txing = 1;
                        NS8390_trigger_send(dev, ei_local->tx2, ei_local->tx_start_page + 6);
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                        ei_local->tx2 = -1,
                        ei_local->lasttx = 2;
                }
@@ -1287,7 +1287,7 @@ static void ei_tx_intr(struct net_device *dev)
                {
                        ei_local->txing = 1;
                        NS8390_trigger_send(dev, ei_local->tx1, ei_local->tx_start_page);
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                        ei_local->tx1 = -1;
                        ei_local->lasttx = 1;
                }
index b96e885..60f8e2c 100644
@@ -596,7 +596,7 @@ static void ei_tx_intr(struct net_device *dev)
                if (ei_local->tx2 > 0) {
                        ei_local->txing = 1;
                        NS8390_trigger_send(dev, ei_local->tx2, ei_local->tx_start_page + 6);
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                        ei_local->tx2 = -1,
                        ei_local->lasttx = 2;
                } else
@@ -609,7 +609,7 @@ static void ei_tx_intr(struct net_device *dev)
                if (ei_local->tx1 > 0) {
                        ei_local->txing = 1;
                        NS8390_trigger_send(dev, ei_local->tx1, ei_local->tx_start_page);
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                        ei_local->tx1 = -1;
                        ei_local->lasttx = 1;
                } else
index ac72882..1d10696 100644
@@ -1129,7 +1129,7 @@ static void tx_timeout(struct net_device *dev)
 
        /* Trigger an immediate transmit demand. */
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        dev->stats.tx_errors++;
        netif_wake_queue(dev);
 }
index 74139cb..3d2245f 100644
@@ -1430,7 +1430,7 @@ static void bfin_mac_timeout(struct net_device *dev)
        bfin_mac_enable(lp->phydev);
 
        /* We can accept TX packets again */
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
 }
 
 static void bfin_mac_multicast_hash(struct net_device *dev)
index 0907ab6..30defe6 100644
@@ -3349,7 +3349,7 @@ static void et131x_down(struct net_device *netdev)
        struct et131x_adapter *adapter = netdev_priv(netdev);
 
        /* Save the timestamp for the TX watchdog, prevent a timeout */
-       netdev->trans_start = jiffies;
+       netif_trans_update(netdev);
 
        phy_stop(adapter->phydev);
        et131x_disable_txrx(netdev);
@@ -3816,7 +3816,7 @@ static netdev_tx_t et131x_tx(struct sk_buff *skb, struct net_device *netdev)
                netif_stop_queue(netdev);
 
        /* Save the timestamp for the TX timeout watchdog */
-       netdev->trans_start = jiffies;
+       netif_trans_update(netdev);
 
        /* TCB is not available */
        if (tx_ring->used >= NUM_TCB)
index 8d50314..de2c4bf 100644
@@ -428,7 +428,7 @@ static void emac_timeout(struct net_device *dev)
        emac_reset(db);
        emac_init_device(dev);
        /* We can accept TX packets again */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        netif_wake_queue(dev);
 
        /* Restore previous register address */
@@ -468,7 +468,7 @@ static int emac_start_xmit(struct sk_buff *skb, struct net_device *dev)
                       db->membase + EMAC_TX_CTL0_REG);
 
                /* save the time stamp */
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
        } else if (channel == 1) {
                /* set TX len */
                writel(skb->len, db->membase + EMAC_TX_PL1_REG);
@@ -477,7 +477,7 @@ static int emac_start_xmit(struct sk_buff *skb, struct net_device *dev)
                       db->membase + EMAC_TX_CTL1_REG);
 
                /* save the time stamp */
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
        }
 
        if ((db->tx_fifo_stat & 3) == 3) {
index 66d0b73..dcf2a1f 100644
@@ -260,7 +260,7 @@ static int lance_reset(struct net_device *dev)
 
        load_csrs(lp);
        lance_init_ring(dev);
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        status = init_restart_lance(lp);
 #ifdef DEBUG_DRIVER
        printk("Lance restart=%d\n", status);
@@ -530,7 +530,7 @@ void lance_tx_timeout(struct net_device *dev)
 {
        printk("lance_tx_timeout\n");
        lance_reset(dev);
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
 }
 EXPORT_SYMBOL_GPL(lance_tx_timeout);
@@ -543,11 +543,13 @@ int lance_start_xmit(struct sk_buff *skb, struct net_device *dev)
        static int outs;
        unsigned long flags;
 
-       if (!TX_BUFFS_AVAIL)
-               return NETDEV_TX_LOCKED;
-
        netif_stop_queue(dev);
 
+       if (!TX_BUFFS_AVAIL) {
+               dev_consume_skb_any(skb);
+               return NETDEV_TX_OK;
+       }
+
        skblen = skb->len;
 
 #ifdef DEBUG_DRIVER
index 5613918..a83cd1c 100644
@@ -512,7 +512,7 @@ static inline int lance_reset(struct net_device *dev)
        load_csrs(lp);
 
        lance_init_ring(dev);
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_start_queue(dev);
 
        status = init_restart_lance(lp);
@@ -547,10 +547,8 @@ static netdev_tx_t lance_start_xmit(struct sk_buff *skb,
 
        local_irq_save(flags);
 
-       if (!lance_tx_buffs_avail(lp)) {
-               local_irq_restore(flags);
-               return NETDEV_TX_LOCKED;
-       }
+       if (!lance_tx_buffs_avail(lp))
+               goto out_free;
 
 #ifdef DEBUG
        /* dump the packet */
@@ -573,6 +571,7 @@ static netdev_tx_t lance_start_xmit(struct sk_buff *skb,
 
        /* Kick the lance: transmit now */
        ll->rdp = LE_C0_INEA | LE_C0_TDMD;
+ out_free:
        dev_kfree_skb(skb);
 
        local_irq_restore(flags);
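
Both LANCE hunks above make the same change: NETDEV_TX_LOCKED is being retired (it was only ever honoured for LLTX drivers), so a transient out-of-buffers condition now consumes the skb and reports NETDEV_TX_OK rather than asking the core to requeue. The resulting ndo_start_xmit shape, sketched with a stand-in availability check:

    /* Sketch only: never return NETDEV_TX_LOCKED; stop the queue and
     * drop the packet ourselves when no descriptor is free.
     */
    static netdev_tx_t xmit_sketch(struct sk_buff *skb, struct net_device *dev)
    {
            struct lance_private *lp = netdev_priv(dev);

            netif_stop_queue(dev);
            if (!lance_tx_buffs_avail(lp)) {    /* stand-in for the macro/helper */
                    dev_consume_skb_any(skb);   /* or dev_kfree_skb(), as above */
                    return NETDEV_TX_OK;
            }
            /* ... queue the frame and kick the hardware ... */
            return NETDEV_TX_OK;
    }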
index b10964e..d2bc8e5 100644
@@ -764,7 +764,7 @@ static void lance_tx_timeout (struct net_device *dev)
        /* lance_restart, essentially */
        lance_init_ring(dev);
        REGA( CSR0 ) = CSR0_INEA | CSR0_INIT | CSR0_STRT;
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
 }
 
index d3977d0..9af309e 100644
@@ -1074,7 +1074,7 @@ static void au1000_tx_timeout(struct net_device *dev)
        netdev_err(dev, "au1000_tx_timeout: dev=%p\n", dev);
        au1000_reset_mac(dev);
        au1000_init(dev);
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
 }
 
index b584b78..b799c7a 100644
@@ -877,7 +877,7 @@ static inline int lance_reset(struct net_device *dev)
 
        lance_init_ring(dev);
        load_csrs(lp);
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        status = init_restart_lance(lp);
        return status;
 }
index 3a7ebfd..abb1ba2 100644
@@ -943,7 +943,7 @@ static void lance_tx_timeout (struct net_device *dev)
 #endif
        lance_restart (dev, 0x0043, 1);
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue (dev);
 }
 
index 1cf33ad..cda53db 100644
@@ -782,7 +782,7 @@ static void ni65_stop_start(struct net_device *dev,struct priv *p)
                if(!p->lock)
                        if (p->tmdnum || !p->xmit_queued)
                                netif_wake_queue(dev);
-               dev->trans_start = jiffies; /* prevent tx timeout */
+               netif_trans_update(dev); /* prevent tx timeout */
        }
        else
                writedatareg(CSR0_STRT | csr0);
@@ -1148,7 +1148,7 @@ static void ni65_timeout(struct net_device *dev)
                printk("%02x ",p->tmdhead[i].u.s.status);
        printk("\n");
        ni65_lance_reinit(dev);
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
 }
 
index 27245ef..2807e18 100644
@@ -851,7 +851,7 @@ static void mace_tx_timeout(struct net_device *dev)
 #else /* #if RESET_ON_TIMEOUT */
   pr_cont("NOT resetting card\n");
 #endif /* #if RESET_ON_TIMEOUT */
-  dev->trans_start = jiffies; /* prevent tx timeout */
+  netif_trans_update(dev); /* prevent tx timeout */
   netif_wake_queue(dev);
 }
 
index 7ccebae..c22bf52 100644
@@ -448,7 +448,7 @@ static void pcnet32_netif_stop(struct net_device *dev)
 {
        struct pcnet32_private *lp = netdev_priv(dev);
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        napi_disable(&lp->napi);
        netif_tx_disable(dev);
 }
@@ -2426,7 +2426,7 @@ static void pcnet32_tx_timeout(struct net_device *dev)
        }
        pcnet32_restart(dev, CSR0_NORMAL);
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
 
        spin_unlock_irqrestore(&lp->lock, flags);
index 7847638..9b56b40 100644 (file)
@@ -997,7 +997,7 @@ static int lance_reset(struct net_device *dev)
        }
        lp->init_ring(dev);
        load_csrs(lp);
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        status = init_restart_lance(lp);
        return status;
 }
index b212488..6479288 100644 (file)
@@ -729,6 +729,6 @@ static int xgene_enet_cle_init(struct xgene_enet_pdata *pdata)
        return xgene_cle_setup_ptree(pdata, enet_cle);
 }
 
-struct xgene_cle_ops xgene_cle3in_ops = {
+const struct xgene_cle_ops xgene_cle3in_ops = {
        .cle_init = xgene_enet_cle_init,
 };
index 29a17ab..13e829a 100644 (file)
@@ -290,6 +290,6 @@ struct xgene_enet_cle {
        u32 jump_bytes;
 };
 
-extern struct xgene_cle_ops xgene_cle3in_ops;
+extern const struct xgene_cle_ops xgene_cle3in_ops;
 
 #endif /* __XGENE_ENET_CLE_H__ */
index 39e081a..457f745 100644 (file)
@@ -824,7 +824,7 @@ static int xgene_mdiobus_register(struct xgene_enet_pdata *pdata,
                return -EINVAL;
 
        phy = get_phy_device(mdio, phy_id, false);
-       if (!phy || IS_ERR(phy))
+       if (IS_ERR(phy))
                return -EIO;
 
        ret = phy_device_register(phy);
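
This hunk tracks a contract change in the phy core: get_phy_device() now reports every failure as an ERR_PTR() instead of sometimes returning NULL, so the combined test collapses to IS_ERR() alone. A hedged sketch of the idiom:

/* Sketch: under an ERR_PTR()-only contract the NULL check is dead code
 * and only obscures what the callee can actually return.
 */
struct phy_device *phy = get_phy_device(mdio, phy_id, false);

if (IS_ERR(phy))
	return -EIO;	/* this driver maps all failures to -EIO */
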
index 8d4c1ad..409152b 100644 (file)
@@ -973,6 +973,17 @@ static enum xgene_ring_owner xgene_derive_ring_owner(struct xgene_enet_pdata *p)
        return owner;
 }
 
+static u8 xgene_start_cpu_bufnum(struct xgene_enet_pdata *pdata)
+{
+       struct device *dev = &pdata->pdev->dev;
+       u32 cpu_bufnum;
+       int ret;
+
+       ret = device_property_read_u32(dev, "channel", &cpu_bufnum);
+
+       return (!ret) ? cpu_bufnum : pdata->cpu_bufnum;
+}
+
 static int xgene_enet_create_desc_rings(struct net_device *ndev)
 {
        struct xgene_enet_pdata *pdata = netdev_priv(ndev);
@@ -981,13 +992,15 @@ static int xgene_enet_create_desc_rings(struct net_device *ndev)
        struct xgene_enet_desc_ring *buf_pool = NULL;
        enum xgene_ring_owner owner;
        dma_addr_t dma_exp_bufs;
-       u8 cpu_bufnum = pdata->cpu_bufnum;
+       u8 cpu_bufnum;
        u8 eth_bufnum = pdata->eth_bufnum;
        u8 bp_bufnum = pdata->bp_bufnum;
        u16 ring_num = pdata->ring_num;
        u16 ring_id;
        int i, ret, size;
 
+       cpu_bufnum = xgene_start_cpu_bufnum(pdata);
+
        for (i = 0; i < pdata->rxq_cnt; i++) {
                /* allocate rx descriptor ring */
                owner = xgene_derive_ring_owner(pdata);
index 175d188..0a2887b 100644 (file)
@@ -191,7 +191,7 @@ struct xgene_enet_pdata {
        const struct xgene_mac_ops *mac_ops;
        const struct xgene_port_ops *port_ops;
        struct xgene_ring_ops *ring_ops;
-       struct xgene_cle_ops *cle_ops;
+       const struct xgene_cle_ops *cle_ops;
        struct delayed_work link_work;
        u32 port_id;
        u8 cpu_bufnum;
index 55b118e..9fe8b5e 100644 (file)
@@ -745,7 +745,7 @@ static netdev_features_t alx_fix_features(struct net_device *netdev,
 
 static void alx_netif_stop(struct alx_priv *alx)
 {
-       alx->dev->trans_start = jiffies;
+       netif_trans_update(alx->dev);
        if (netif_carrier_ok(alx->dev)) {
                netif_carrier_off(alx->dev);
                netif_tx_disable(alx->dev);
index b9203d9..c46b489 100644 (file)
@@ -488,7 +488,7 @@ struct atl1c_tpd_ring {
        dma_addr_t dma;         /* descriptor ring physical address */
        u16 size;               /* descriptor ring length in bytes */
        u16 count;              /* number of descriptors in the ring */
-       u16 next_to_use;        /* this is protectd by adapter->tx_lock */
+       u16 next_to_use;
        atomic_t next_to_clean;
        struct atl1c_buffer *buffer_info;
 };
@@ -542,7 +542,6 @@ struct atl1c_adapter {
        u16 link_duplex;
 
        spinlock_t mdio_lock;
-       spinlock_t tx_lock;
        atomic_t irq_sem;
 
        struct work_struct common_task;
index d0084d4..a3200ea 100644 (file)
@@ -821,7 +821,6 @@ static int atl1c_sw_init(struct atl1c_adapter *adapter)
        atl1c_set_rxbufsize(adapter, adapter->netdev);
        atomic_set(&adapter->irq_sem, 1);
        spin_lock_init(&adapter->mdio_lock);
-       spin_lock_init(&adapter->tx_lock);
        set_bit(__AT_DOWN, &adapter->flags);
 
        return 0;
@@ -2206,7 +2205,6 @@ static netdev_tx_t atl1c_xmit_frame(struct sk_buff *skb,
                                          struct net_device *netdev)
 {
        struct atl1c_adapter *adapter = netdev_priv(netdev);
-       unsigned long flags;
        u16 tpd_req = 1;
        struct atl1c_tpd_desc *tpd;
        enum atl1c_trans_queue type = atl1c_trans_normal;
@@ -2217,16 +2215,10 @@ static netdev_tx_t atl1c_xmit_frame(struct sk_buff *skb,
        }
 
        tpd_req = atl1c_cal_tpd_req(skb);
-       if (!spin_trylock_irqsave(&adapter->tx_lock, flags)) {
-               if (netif_msg_pktdata(adapter))
-                       dev_info(&adapter->pdev->dev, "tx locked\n");
-               return NETDEV_TX_LOCKED;
-       }
 
        if (atl1c_tpd_avail(adapter, type) < tpd_req) {
                /* not enough descriptors, just stop the queue */
                netif_stop_queue(netdev);
-               spin_unlock_irqrestore(&adapter->tx_lock, flags);
                return NETDEV_TX_BUSY;
        }
 
@@ -2234,7 +2226,6 @@ static netdev_tx_t atl1c_xmit_frame(struct sk_buff *skb,
 
        /* do TSO and check sum */
        if (atl1c_tso_csum(adapter, skb, &tpd, type) != 0) {
-               spin_unlock_irqrestore(&adapter->tx_lock, flags);
                dev_kfree_skb_any(skb);
                return NETDEV_TX_OK;
        }
@@ -2257,12 +2248,10 @@ static netdev_tx_t atl1c_xmit_frame(struct sk_buff *skb,
                           "tx-skb droppted due to dma error\n");
                /* roll back tpd/buffer */
                atl1c_tx_rollback(adapter, tpd, type);
-               spin_unlock_irqrestore(&adapter->tx_lock, flags);
                dev_kfree_skb_any(skb);
        } else {
                netdev_sent_queue(adapter->netdev, skb->len);
                atl1c_tx_queue(adapter, skb, tpd, type);
-               spin_unlock_irqrestore(&adapter->tx_lock, flags);
        }
 
        return NETDEV_TX_OK;
index 0212dac..632bb84 100644 (file)
@@ -442,7 +442,6 @@ struct atl1e_adapter {
        u16 link_duplex;
 
        spinlock_t mdio_lock;
-       spinlock_t tx_lock;
        atomic_t irq_sem;
 
        struct work_struct reset_task;
index 59a03a1..974713b 100644 (file)
@@ -648,7 +648,6 @@ static int atl1e_sw_init(struct atl1e_adapter *adapter)
 
        atomic_set(&adapter->irq_sem, 1);
        spin_lock_init(&adapter->mdio_lock);
-       spin_lock_init(&adapter->tx_lock);
 
        set_bit(__AT_DOWN, &adapter->flags);
 
@@ -1866,7 +1865,6 @@ static netdev_tx_t atl1e_xmit_frame(struct sk_buff *skb,
                                          struct net_device *netdev)
 {
        struct atl1e_adapter *adapter = netdev_priv(netdev);
-       unsigned long flags;
        u16 tpd_req = 1;
        struct atl1e_tpd_desc *tpd;
 
@@ -1880,13 +1878,10 @@ static netdev_tx_t atl1e_xmit_frame(struct sk_buff *skb,
                return NETDEV_TX_OK;
        }
        tpd_req = atl1e_cal_tdp_req(skb);
-       if (!spin_trylock_irqsave(&adapter->tx_lock, flags))
-               return NETDEV_TX_LOCKED;
 
        if (atl1e_tpd_avail(adapter) < tpd_req) {
                /* not enough descriptors, just stop the queue */
                netif_stop_queue(netdev);
-               spin_unlock_irqrestore(&adapter->tx_lock, flags);
                return NETDEV_TX_BUSY;
        }
 
@@ -1910,7 +1905,6 @@ static netdev_tx_t atl1e_xmit_frame(struct sk_buff *skb,
 
        /* do TSO and check sum */
        if (atl1e_tso_csum(adapter, skb, tpd) != 0) {
-               spin_unlock_irqrestore(&adapter->tx_lock, flags);
                dev_kfree_skb_any(skb);
                return NETDEV_TX_OK;
        }
@@ -1921,10 +1915,7 @@ static netdev_tx_t atl1e_xmit_frame(struct sk_buff *skb,
        }
 
        atl1e_tx_queue(adapter, tpd_req, tpd);
-
-       netdev->trans_start = jiffies; /* NETIF_F_LLTX driver :( */
 out:
-       spin_unlock_irqrestore(&adapter->tx_lock, flags);
        return NETDEV_TX_OK;
 }
 
@@ -2285,8 +2276,7 @@ static int atl1e_init_netdev(struct net_device *netdev, struct pci_dev *pdev)
 
        netdev->hw_features = NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_TSO |
                              NETIF_F_HW_VLAN_CTAG_RX;
-       netdev->features = netdev->hw_features | NETIF_F_LLTX |
-                          NETIF_F_HW_VLAN_CTAG_TX;
+       netdev->features = netdev->hw_features | NETIF_F_HW_VLAN_CTAG_TX;
        /* not enabled by default */
        netdev->hw_features |= NETIF_F_RXALL | NETIF_F_RXFCS;
        return 0;
index 30b0c28..543bf38 100644 (file)
@@ -1117,7 +1117,7 @@ static void bcm_sysport_tx_timeout(struct net_device *dev)
 {
        netdev_warn(dev, "transmit timeout!\n");
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        dev->stats.tx_errors++;
 
        netif_tx_wake_all_queues(dev);
index 4645c44..fd85b6d 100644 (file)
@@ -588,12 +588,30 @@ static inline int bnxt_alloc_rx_page(struct bnxt *bp,
        struct page *page;
        dma_addr_t mapping;
        u16 sw_prod = rxr->rx_sw_agg_prod;
+       unsigned int offset = 0;
 
-       page = alloc_page(gfp);
-       if (!page)
-               return -ENOMEM;
+       if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
+               page = rxr->rx_page;
+               if (!page) {
+                       page = alloc_page(gfp);
+                       if (!page)
+                               return -ENOMEM;
+                       rxr->rx_page = page;
+                       rxr->rx_page_offset = 0;
+               }
+               offset = rxr->rx_page_offset;
+               rxr->rx_page_offset += BNXT_RX_PAGE_SIZE;
+               if (rxr->rx_page_offset == PAGE_SIZE)
+                       rxr->rx_page = NULL;
+               else
+                       get_page(page);
+       } else {
+               page = alloc_page(gfp);
+               if (!page)
+                       return -ENOMEM;
+       }
 
-       mapping = dma_map_page(&pdev->dev, page, 0, PAGE_SIZE,
+       mapping = dma_map_page(&pdev->dev, page, offset, BNXT_RX_PAGE_SIZE,
                               PCI_DMA_FROMDEVICE);
        if (dma_mapping_error(&pdev->dev, mapping)) {
                __free_page(page);
@@ -608,6 +626,7 @@ static inline int bnxt_alloc_rx_page(struct bnxt *bp,
        rxr->rx_sw_agg_prod = NEXT_RX_AGG(sw_prod);
 
        rx_agg_buf->page = page;
+       rx_agg_buf->offset = offset;
        rx_agg_buf->mapping = mapping;
        rxbd->rx_bd_haddr = cpu_to_le64(mapping);
        rxbd->rx_bd_opaque = sw_prod;
@@ -649,6 +668,7 @@ static void bnxt_reuse_rx_agg_bufs(struct bnxt_napi *bnapi, u16 cp_cons,
                page = cons_rx_buf->page;
                cons_rx_buf->page = NULL;
                prod_rx_buf->page = page;
+               prod_rx_buf->offset = cons_rx_buf->offset;
 
                prod_rx_buf->mapping = cons_rx_buf->mapping;
 
@@ -716,7 +736,8 @@ static struct sk_buff *bnxt_rx_pages(struct bnxt *bp, struct bnxt_napi *bnapi,
                            RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT;
 
                cons_rx_buf = &rxr->rx_agg_ring[cons];
-               skb_fill_page_desc(skb, i, cons_rx_buf->page, 0, frag_len);
+               skb_fill_page_desc(skb, i, cons_rx_buf->page,
+                                  cons_rx_buf->offset, frag_len);
                __clear_bit(cons, rxr->rx_agg_bmap);
 
                /* It is possible for bnxt_alloc_rx_page() to allocate
@@ -747,7 +768,7 @@ static struct sk_buff *bnxt_rx_pages(struct bnxt *bp, struct bnxt_napi *bnapi,
                        return NULL;
                }
 
-               dma_unmap_page(&pdev->dev, mapping, PAGE_SIZE,
+               dma_unmap_page(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE,
                               PCI_DMA_FROMDEVICE);
 
                skb->data_len += frag_len;
@@ -1635,13 +1656,17 @@ static void bnxt_free_rx_skbs(struct bnxt *bp)
 
                        dma_unmap_page(&pdev->dev,
                                       dma_unmap_addr(rx_agg_buf, mapping),
-                                      PAGE_SIZE, PCI_DMA_FROMDEVICE);
+                                      BNXT_RX_PAGE_SIZE, PCI_DMA_FROMDEVICE);
 
                        rx_agg_buf->page = NULL;
                        __clear_bit(j, rxr->rx_agg_bmap);
 
                        __free_page(page);
                }
+               if (rxr->rx_page) {
+                       __free_page(rxr->rx_page);
+                       rxr->rx_page = NULL;
+               }
        }
 }
 
@@ -2024,7 +2049,7 @@ static int bnxt_init_one_rx_ring(struct bnxt *bp, int ring_nr)
        if (!(bp->flags & BNXT_FLAG_AGG_RINGS))
                return 0;
 
-       type = ((u32)PAGE_SIZE << RX_BD_LEN_SHIFT) |
+       type = ((u32)BNXT_RX_PAGE_SIZE << RX_BD_LEN_SHIFT) |
                RX_BD_TYPE_RX_AGG_BD | RX_BD_FLAGS_SOP;
 
        bnxt_init_rxbd_pages(ring, type);
@@ -2215,7 +2240,7 @@ void bnxt_set_ring_params(struct bnxt *bp)
        bp->rx_agg_nr_pages = 0;
 
        if (bp->flags & BNXT_FLAG_TPA)
-               agg_factor = 4;
+               agg_factor = min_t(u32, 4, 65536 / BNXT_RX_PAGE_SIZE);
 
        bp->flags &= ~BNXT_FLAG_JUMBO;
        if (rx_space > PAGE_SIZE) {
@@ -3076,12 +3101,12 @@ static int bnxt_hwrm_vnic_set_tpa(struct bnxt *bp, u16 vnic_id, u32 tpa_flags)
                /* Number of segs are log2 units, and first packet is not
                 * included as part of this units.
                 */
-               if (mss <= PAGE_SIZE) {
-                       n = PAGE_SIZE / mss;
+               if (mss <= BNXT_RX_PAGE_SIZE) {
+                       n = BNXT_RX_PAGE_SIZE / mss;
                        nsegs = (MAX_SKB_FRAGS - 1) * n;
                } else {
-                       n = mss / PAGE_SIZE;
-                       if (mss & (PAGE_SIZE - 1))
+                       n = mss / BNXT_RX_PAGE_SIZE;
+                       if (mss & (BNXT_RX_PAGE_SIZE - 1))
                                n++;
                        nsegs = (MAX_SKB_FRAGS - n) / n;
                }
@@ -4367,7 +4392,7 @@ static int bnxt_setup_int_mode(struct bnxt *bp)
        if (bp->flags & BNXT_FLAG_MSIX_CAP)
                rc = bnxt_setup_msix(bp);
 
-       if (!(bp->flags & BNXT_FLAG_USING_MSIX)) {
+       if (!(bp->flags & BNXT_FLAG_USING_MSIX) && BNXT_PF(bp)) {
                /* fallback to INTA */
                rc = bnxt_setup_inta(bp);
        }
@@ -6194,14 +6219,19 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
                           NETIF_F_TSO | NETIF_F_TSO6 |
                           NETIF_F_GSO_UDP_TUNNEL | NETIF_F_GSO_GRE |
                           NETIF_F_GSO_IPIP | NETIF_F_GSO_SIT |
-                          NETIF_F_RXHASH |
+                          NETIF_F_GSO_UDP_TUNNEL_CSUM | NETIF_F_GSO_GRE_CSUM |
+                          NETIF_F_GSO_PARTIAL | NETIF_F_RXHASH |
                           NETIF_F_RXCSUM | NETIF_F_LRO | NETIF_F_GRO;
 
        dev->hw_enc_features =
                        NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_SG |
                        NETIF_F_TSO | NETIF_F_TSO6 |
                        NETIF_F_GSO_UDP_TUNNEL | NETIF_F_GSO_GRE |
-                       NETIF_F_GSO_IPIP | NETIF_F_GSO_SIT;
+                       NETIF_F_GSO_UDP_TUNNEL_CSUM | NETIF_F_GSO_GRE_CSUM |
+                       NETIF_F_GSO_IPIP | NETIF_F_GSO_SIT |
+                       NETIF_F_GSO_PARTIAL;
+       dev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM |
+                                   NETIF_F_GSO_GRE_CSUM;
        dev->vlan_features = dev->hw_features | NETIF_F_HIGHDMA;
        dev->hw_features |= NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX |
                            NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_STAG_TX;
index 26dac2f..6289635 100644 (file)
@@ -407,6 +407,15 @@ struct rx_tpa_end_cmp_ext {
 
 #define BNXT_PAGE_SIZE (1 << BNXT_PAGE_SHIFT)
 
+/* The RXBD length is 16-bit so we can only support page sizes < 64K */
+#if (PAGE_SHIFT > 15)
+#define BNXT_RX_PAGE_SHIFT 15
+#else
+#define BNXT_RX_PAGE_SHIFT PAGE_SHIFT
+#endif
+
+#define BNXT_RX_PAGE_SIZE (1 << BNXT_RX_PAGE_SHIFT)
+
 #define BNXT_MIN_PKT_SIZE      45
 
 #define BNXT_NUM_TESTS(bp)     0
@@ -506,6 +515,7 @@ struct bnxt_sw_rx_bd {
 
 struct bnxt_sw_rx_agg_bd {
        struct page             *page;
+       unsigned int            offset;
        dma_addr_t              mapping;
 };
 
@@ -586,6 +596,9 @@ struct bnxt_rx_ring_info {
        unsigned long           *rx_agg_bmap;
        u16                     rx_agg_bmap_size;
 
+       struct page             *rx_page;
+       unsigned int            rx_page_offset;
+
        dma_addr_t              rx_desc_mapping[MAX_RX_PAGES];
        dma_addr_t              rx_agg_desc_mapping[MAX_RX_AGG_PAGES];
 
index b69dc58..b1d2ac8 100644 (file)
@@ -5350,7 +5350,10 @@ static int cnic_start_hw(struct cnic_dev *dev)
        return 0;
 
 err1:
-       cp->free_resc(dev);
+       if (ethdev->drv_state & CNIC_DRV_STATE_HANDLES_IRQ)
+               cp->stop_hw(dev);
+       else
+               cp->free_resc(dev);
        pci_dev_put(dev->pcidev);
        return err;
 }
index fbff226..5414563 100644 (file)
@@ -3059,7 +3059,7 @@ static void bcmgenet_timeout(struct net_device *dev)
        bcmgenet_intrl2_0_writel(priv, int0_enable, INTRL2_CPU_MASK_CLEAR);
        bcmgenet_intrl2_1_writel(priv, int1_enable, INTRL2_CPU_MASK_CLEAR);
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        dev->stats.tx_errors++;
 
index eacc559..f1b8118 100644 (file)
@@ -2462,7 +2462,7 @@ static void sbmac_tx_timeout (struct net_device *dev)
        spin_lock_irqsave(&sc->sbm_lock, flags);
 
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        dev->stats.tx_errors++;
 
        spin_unlock_irqrestore(&sc->sbm_lock, flags);
index 3010080..ff300f7 100644 (file)
@@ -7383,7 +7383,7 @@ static void tg3_napi_fini(struct tg3 *tp)
 
 static inline void tg3_netif_stop(struct tg3 *tp)
 {
-       tp->dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(tp->dev);    /* prevent tx timeout */
        tg3_napi_disable(tp);
        netif_carrier_off(tp->dev);
        netif_tx_disable(tp->dev);
index eec3200..cb07d95 100644 (file)
@@ -440,7 +440,7 @@ static int macb_mii_init(struct macb *bp)
        snprintf(bp->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x",
                 bp->pdev->name, bp->pdev->id);
        bp->mii_bus->priv = bp;
-       bp->mii_bus->parent = &bp->dev->dev;
+       bp->mii_bus->parent = &bp->pdev->dev;
        pdata = dev_get_platdata(&bp->pdev->dev);
 
        dev_set_drvdata(&bp->dev->dev, bp->mii_bus);
@@ -458,7 +458,8 @@ static int macb_mii_init(struct macb *bp)
                                struct phy_device *phydev;
 
                                phydev = mdiobus_scan(bp->mii_bus, i);
-                               if (IS_ERR(phydev)) {
+                               if (IS_ERR(phydev) &&
+                                   PTR_ERR(phydev) != -ENODEV) {
                                        err = PTR_ERR(phydev);
                                        break;
                                }
@@ -3005,29 +3006,36 @@ static int macb_probe(struct platform_device *pdev)
        if (err)
                goto err_out_free_netdev;
 
+       err = macb_mii_init(bp);
+       if (err)
+               goto err_out_free_netdev;
+
+       phydev = bp->phy_dev;
+
+       netif_carrier_off(dev);
+
        err = register_netdev(dev);
        if (err) {
                dev_err(&pdev->dev, "Cannot register net device, aborting.\n");
-               goto err_out_unregister_netdev;
+               goto err_out_unregister_mdio;
        }
 
-       err = macb_mii_init(bp);
-       if (err)
-               goto err_out_unregister_netdev;
-
-       netif_carrier_off(dev);
+       phy_attached_info(phydev);
 
        netdev_info(dev, "Cadence %s rev 0x%08x at 0x%08lx irq %d (%pM)\n",
                    macb_is_gem(bp) ? "GEM" : "MACB", macb_readl(bp, MID),
                    dev->base_addr, dev->irq, dev->dev_addr);
 
-       phydev = bp->phy_dev;
-       phy_attached_info(phydev);
-
        return 0;
 
-err_out_unregister_netdev:
-       unregister_netdev(dev);
+err_out_unregister_mdio:
+       phy_disconnect(bp->phy_dev);
+       mdiobus_unregister(bp->mii_bus);
+       mdiobus_free(bp->mii_bus);
+
+       /* Shutdown the PHY if there is a GPIO reset */
+       if (bp->reset_gpio)
+               gpiod_set_value(bp->reset_gpio, 0);
 
 err_out_free_netdev:
        free_netdev(dev);
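
The macb reordering restores the usual probe discipline: bring the MDIO bus and PHY up before register_netdev(), so the device never appears to userspace half-initialized, and unwind in strict reverse order on failure. Schematically (labels as in the patch):

/* Sketch of the ordering the patch enforces: each error label undoes
 * exactly the steps that succeeded before it.
 */
err = macb_mii_init(bp);		/* MDIO bus + PHY first */
if (err)
	goto err_out_free_netdev;

err = register_netdev(dev);		/* userspace-visible step last */
if (err)
	goto err_out_unregister_mdio;	/* tear down PHY/MDIO, then free */
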
index 34d269c..8de79ae 100644 (file)
@@ -2899,7 +2899,7 @@ static int liquidio_xmit(struct sk_buff *skb, struct net_device *netdev)
        if (status == IQ_SEND_STOP)
                stop_q(lio->netdev, q_idx);
 
-       netdev->trans_start = jiffies;
+       netif_trans_update(netdev);
 
        stats->tx_done++;
        stats->tx_tot_bytes += skb->len;
@@ -2928,7 +2928,7 @@ static void liquidio_tx_timeout(struct net_device *netdev)
        netif_info(lio, tx_err, lio->netdev,
                   "Transmit timeout tx_dropped:%ld, waking up queues now!!\n",
                   netdev->stats.tx_dropped);
-       netdev->trans_start = jiffies;
+       netif_trans_update(netdev);
        txqs_wake(netdev);
 }
 
index c177c7c..388cd79 100644 (file)
@@ -1320,7 +1320,7 @@ static int octeon_mgmt_xmit(struct sk_buff *skb, struct net_device *netdev)
        /* Ring the bell.  */
        cvmx_write_csr(p->mix + MIX_ORING2, 1);
 
-       netdev->trans_start = jiffies;
+       netif_trans_update(netdev);
        rv = NETDEV_TX_OK;
 out:
        octeon_mgmt_update_tx_stats(netdev);
index bfee298..a19e73f 100644 (file)
@@ -1442,7 +1442,7 @@ static void nicvf_reset_task(struct work_struct *work)
 
        nicvf_stop(nic->netdev);
        nicvf_open(nic->netdev);
-       nic->netdev->trans_start = jiffies;
+       netif_trans_update(nic->netdev);
 }
 
 static int nicvf_config_loopback(struct nicvf *nic,
index 526ea74..86f467a 100644 (file)
@@ -1664,8 +1664,7 @@ static int t1_sge_tx(struct sk_buff *skb, struct adapter *adapter,
        struct cmdQ *q = &sge->cmdQ[qid];
        unsigned int credits, pidx, genbit, count, use_sched_skb = 0;
 
-       if (!spin_trylock(&q->lock))
-               return NETDEV_TX_LOCKED;
+       spin_lock(&q->lock);
 
        reclaim_completed_tx(sge, q);
 
index 60908ea..43da891 100644 (file)
@@ -576,7 +576,7 @@ static void setup_rss(struct adapter *adap)
        unsigned int nq0 = adap2pinfo(adap, 0)->nqsets;
        unsigned int nq1 = adap->port[1] ? adap2pinfo(adap, 1)->nqsets : 1;
        u8 cpus[SGE_QSETS + 1];
-       u16 rspq_map[RSS_TABLE_SIZE];
+       u16 rspq_map[RSS_TABLE_SIZE + 1];
 
        for (i = 0; i < SGE_QSETS; ++i)
                cpus[i] = i;
@@ -586,6 +586,7 @@ static void setup_rss(struct adapter *adap)
                rspq_map[i] = i % nq0;
                rspq_map[i + RSS_TABLE_SIZE / 2] = (i % nq1) + nq0;
        }
+       rspq_map[RSS_TABLE_SIZE] = 0xffff; /* terminator */
 
        t3_config_rss(adap, F_RQFEEDBACKENABLE | F_TNLLKPEN | F_TNLMAPEN |
                      F_TNLPRTEN | F_TNL2TUPEN | F_TNL4TUPEN |
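
The cxgb3 change guards an out-of-bounds read: the RSS configuration code evidently consumes rspq_map past its nominal RSS_TABLE_SIZE entries, so the array gains one slot and an out-of-range 0xffff terminator that can never alias a real response queue. Reduced to its essence (the one-past-the-end consumer inside t3_config_rss() is an assumption):

u16 rspq_map[RSS_TABLE_SIZE + 1];	/* one extra slot ...        */
rspq_map[RSS_TABLE_SIZE] = 0xffff;	/* ... keeps a one-past-the-end
					 * read by the consumer in bounds */
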
index 326d400..b4fceb9 100644 (file)
@@ -324,7 +324,9 @@ struct adapter_params {
        unsigned int sf_fw_start;         /* start of FW image in flash */
 
        unsigned int fw_vers;
+       unsigned int bs_vers;           /* bootstrap version */
        unsigned int tp_vers;
+       unsigned int er_vers;           /* expansion ROM version */
        u8 api_vers[7];
 
        unsigned short mtus[NMTUS];
@@ -357,6 +359,34 @@ struct sge_idma_monitor_state {
        unsigned int idma_warn[2];      /* time to warning in HZ */
 };
 
+/* Firmware Mailbox Command/Reply log.  All values are in Host-Endian format.
+ * The access and execute times are signed in order to accommodate negative
+ * error returns.
+ */
+struct mbox_cmd {
+       u64 cmd[MBOX_LEN / 8];          /* a Firmware Mailbox Command/Reply */
+       u64 timestamp;                  /* OS-dependent timestamp */
+       u32 seqno;                      /* sequence number */
+       s16 access;                     /* time (ms) to access mailbox */
+       s16 execute;                    /* time (ms) to execute */
+};
+
+struct mbox_cmd_log {
+       unsigned int size;              /* number of entries in the log */
+       unsigned int cursor;            /* next position in the log to write */
+       u32 seqno;                      /* next sequence number */
+       /* variable length mailbox command log starts here */
+};
+
+/* Given a pointer to a Firmware Mailbox Command Log and a log entry index,
+ * return a pointer to the specified entry.
+ */
+static inline struct mbox_cmd *mbox_cmd_log_entry(struct mbox_cmd_log *log,
+                                                 unsigned int entry_idx)
+{
+       return &((struct mbox_cmd *)&(log)[1])[entry_idx];
+}
+
 #include "t4fw_api.h"
 
 #define FW_VERSION(chip) ( \
@@ -394,6 +424,7 @@ struct link_config {
        unsigned char  fc;               /* actual link flow control */
        unsigned char  autoneg;          /* autonegotiating? */
        unsigned char  link_ok;          /* link up? */
+       unsigned char  link_down_rc;     /* link down reason */
 };
 
 #define FW_LEN16(fw_struct) FW_CMD_LEN16_V(sizeof(fw_struct) / 16)
@@ -731,6 +762,7 @@ struct adapter {
        u32 t4_bar0;
        struct pci_dev *pdev;
        struct device *pdev_dev;
+       const char *name;
        unsigned int mbox;
        unsigned int pf;
        unsigned int flags;
@@ -776,6 +808,10 @@ struct adapter {
        struct work_struct db_drop_task;
        bool tid_release_task_busy;
 
+       /* support for mailbox command/reply logging */
+#define T4_OS_LOG_MBOX_CMDS 256
+       struct mbox_cmd_log *mbox_log;
+
        struct dentry *debugfs_root;
        bool use_bd;     /* Use SGE Back Door intfc for reading SGE Contexts */
        bool trace_rss; /* 1 implies that different RSS flit per filter is
@@ -1306,6 +1342,7 @@ int t4_fl_pkt_align(struct adapter *adap);
 unsigned int t4_flash_cfg_addr(struct adapter *adapter);
 int t4_check_fw_version(struct adapter *adap);
 int t4_get_fw_version(struct adapter *adapter, u32 *vers);
+int t4_get_bs_version(struct adapter *adapter, u32 *vers);
 int t4_get_tp_version(struct adapter *adapter, u32 *vers);
 int t4_get_exprom_version(struct adapter *adapter, u32 *vers);
 int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info,
@@ -1329,6 +1366,8 @@ int t4_init_sge_params(struct adapter *adapter);
 int t4_init_tp_params(struct adapter *adap);
 int t4_filter_field_shift(const struct adapter *adap, int filter_sel);
 int t4_init_rss_mode(struct adapter *adap, int mbox);
+int t4_init_portinfo(struct port_info *pi, int mbox,
+                    int port, int pf, int vf, u8 mac[]);
 int t4_port_init(struct adapter *adap, int mbox, int pf, int vf);
 void t4_fatal_err(struct adapter *adapter);
 int t4_config_rss_range(struct adapter *adapter, int mbox, unsigned int viid,
@@ -1464,6 +1503,7 @@ int t4_ctrl_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
 int t4_ofld_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
                    unsigned int vf, unsigned int eqid);
 int t4_sge_ctxt_flush(struct adapter *adap, unsigned int mbox);
+void t4_handle_get_port_info(struct port_info *pi, const __be64 *rpl);
 int t4_handle_fw_rpl(struct adapter *adap, const __be64 *rpl);
 void t4_db_full(struct adapter *adapter);
 void t4_db_dropped(struct adapter *adapter);
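
The new mailbox log is a fixed header followed by a variable-length tail of struct mbox_cmd entries, allocated with a single kzalloc() and indexed via mbox_cmd_log_entry(). A sketch of the setup, mirroring what init_one() does later in this series:

/* Sketch: header plus T4_OS_LOG_MBOX_CMDS trailing entries in one
 * allocation; mbox_cmd_log_entry(log, i) then resolves entry i just
 * past the header.
 */
struct mbox_cmd_log *log;

log = kzalloc(sizeof(*log) +
	      T4_OS_LOG_MBOX_CMDS * sizeof(struct mbox_cmd), GFP_KERNEL);
if (!log)
	return -ENOMEM;
log->size = T4_OS_LOG_MBOX_CMDS;
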
index 052c660..6ee2ed3 100644 (file)
@@ -253,7 +253,7 @@ void cxgb4_dcb_handle_fw_update(struct adapter *adap,
 {
        const union fw_port_dcb *fwdcb = &pcmd->u.dcb;
        int port = FW_PORT_CMD_PORTID_G(be32_to_cpu(pcmd->op_to_portid));
-       struct net_device *dev = adap->port[port];
+       struct net_device *dev = adap->port[adap->chan_map[port]];
        struct port_info *pi = netdev_priv(dev);
        struct port_dcb_info *dcb = &pi->dcb;
        int dcb_type = pcmd->u.dcb.pgid.type;
index 0bb41e9..91fb508 100644 (file)
@@ -1152,6 +1152,104 @@ static const struct file_operations devlog_fops = {
        .release = seq_release_private
 };
 
+/* Show Firmware Mailbox Command/Reply Log
+ *
+ * Note that we don't do any locking when dumping the Firmware Mailbox Log so
+ * it's possible that we can catch things during a log update and therefore
+ * see partially corrupted log entries.  But it's probably Good Enough(tm).
+ * If we ever decide that we want to make sure that we're dumping a coherent
+ * log, we'd need to perform locking in the mailbox logging and in
+ * mboxlog_open() where we'd need to grab the entire mailbox log in one go
+ * like we do for the Firmware Device Log.
+ */
+static int mboxlog_show(struct seq_file *seq, void *v)
+{
+       struct adapter *adapter = seq->private;
+       struct mbox_cmd_log *log = adapter->mbox_log;
+       struct mbox_cmd *entry;
+       int entry_idx, i;
+
+       if (v == SEQ_START_TOKEN) {
+               seq_printf(seq,
+                          "%10s  %15s  %5s  %5s  %s\n",
+                          "Seq#", "Tstamp", "Atime", "Etime",
+                          "Command/Reply");
+               return 0;
+       }
+
+       entry_idx = log->cursor + ((uintptr_t)v - 2);
+       if (entry_idx >= log->size)
+               entry_idx -= log->size;
+       entry = mbox_cmd_log_entry(log, entry_idx);
+
+       /* skip over unused entries */
+       if (entry->timestamp == 0)
+               return 0;
+
+       seq_printf(seq, "%10u  %15llu  %5d  %5d",
+                  entry->seqno, entry->timestamp,
+                  entry->access, entry->execute);
+       for (i = 0; i < MBOX_LEN / 8; i++) {
+               u64 flit = entry->cmd[i];
+               u32 hi = (u32)(flit >> 32);
+               u32 lo = (u32)flit;
+
+               seq_printf(seq, "  %08x %08x", hi, lo);
+       }
+       seq_puts(seq, "\n");
+       return 0;
+}
+
+static inline void *mboxlog_get_idx(struct seq_file *seq, loff_t pos)
+{
+       struct adapter *adapter = seq->private;
+       struct mbox_cmd_log *log = adapter->mbox_log;
+
+       return ((pos <= log->size) ? (void *)(uintptr_t)(pos + 1) : NULL);
+}
+
+static void *mboxlog_start(struct seq_file *seq, loff_t *pos)
+{
+       return *pos ? mboxlog_get_idx(seq, *pos) : SEQ_START_TOKEN;
+}
+
+static void *mboxlog_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+       ++*pos;
+       return mboxlog_get_idx(seq, *pos);
+}
+
+static void mboxlog_stop(struct seq_file *seq, void *v)
+{
+}
+
+static const struct seq_operations mboxlog_seq_ops = {
+       .start = mboxlog_start,
+       .next  = mboxlog_next,
+       .stop  = mboxlog_stop,
+       .show  = mboxlog_show
+};
+
+static int mboxlog_open(struct inode *inode, struct file *file)
+{
+       int res = seq_open(file, &mboxlog_seq_ops);
+
+       if (!res) {
+               struct seq_file *seq = file->private_data;
+
+               seq->private = inode->i_private;
+       }
+       return res;
+}
+
+static const struct file_operations mboxlog_fops = {
+       .owner   = THIS_MODULE,
+       .open    = mboxlog_open,
+       .read    = seq_read,
+       .llseek  = seq_lseek,
+       .release = seq_release,
+};
+
 static int mbox_show(struct seq_file *seq, void *v)
 {
        static const char * const owner[] = { "none", "FW", "driver",
@@ -1572,6 +1670,7 @@ static const struct file_operations flash_debugfs_fops = {
        .owner   = THIS_MODULE,
        .open    = mem_open,
        .read    = flash_read,
+       .llseek  = default_llseek,
 };
 
 static inline void tcamxy2valmask(u64 x, u64 y, u8 *addr, u64 *mask)
@@ -3128,6 +3227,7 @@ int t4_setup_debugfs(struct adapter *adap)
                { "cim_qcfg", &cim_qcfg_fops, S_IRUSR, 0 },
                { "clk", &clk_debugfs_fops, S_IRUSR, 0 },
                { "devlog", &devlog_fops, S_IRUSR, 0 },
+               { "mboxlog", &mboxlog_fops, S_IRUSR, 0 },
                { "mbox0", &mbox_debugfs_fops, S_IRUSR | S_IWUSR, 0 },
                { "mbox1", &mbox_debugfs_fops, S_IRUSR | S_IWUSR, 1 },
                { "mbox2", &mbox_debugfs_fops, S_IRUSR | S_IWUSR, 2 },
index a1e329e..477db47 100644 (file)
@@ -304,6 +304,22 @@ static void dcb_tx_queue_prio_enable(struct net_device *dev, int enable)
 }
 #endif /* CONFIG_CHELSIO_T4_DCB */
 
+int cxgb4_dcb_enabled(const struct net_device *dev)
+{
+#ifdef CONFIG_CHELSIO_T4_DCB
+       struct port_info *pi = netdev_priv(dev);
+
+       if (!pi->dcb.enabled)
+               return 0;
+
+       return ((pi->dcb.state == CXGB4_DCB_STATE_FW_ALLSYNCED) ||
+               (pi->dcb.state == CXGB4_DCB_STATE_HOST));
+#else
+       return 0;
+#endif
+}
+EXPORT_SYMBOL(cxgb4_dcb_enabled);
+
 void t4_os_link_changed(struct adapter *adapter, int port_id, int link_stat)
 {
        struct net_device *dev = adapter->port[port_id];
@@ -314,8 +330,10 @@ void t4_os_link_changed(struct adapter *adapter, int port_id, int link_stat)
                        netif_carrier_on(dev);
                else {
 #ifdef CONFIG_CHELSIO_T4_DCB
-                       cxgb4_dcb_state_init(dev);
-                       dcb_tx_queue_prio_enable(dev, false);
+                       if (cxgb4_dcb_enabled(dev)) {
+                               cxgb4_dcb_state_init(dev);
+                               dcb_tx_queue_prio_enable(dev, false);
+                       }
 #endif /* CONFIG_CHELSIO_T4_DCB */
                        netif_carrier_off(dev);
                }
@@ -337,6 +355,17 @@ void t4_os_portmod_changed(const struct adapter *adap, int port_id)
                netdev_info(dev, "port module unplugged\n");
        else if (pi->mod_type < ARRAY_SIZE(mod_str))
                netdev_info(dev, "%s module inserted\n", mod_str[pi->mod_type]);
+       else if (pi->mod_type == FW_PORT_MOD_TYPE_NOTSUPPORTED)
+               netdev_info(dev, "%s: unsupported port module inserted\n",
+                           dev->name);
+       else if (pi->mod_type == FW_PORT_MOD_TYPE_UNKNOWN)
+               netdev_info(dev, "%s: unknown port module inserted\n",
+                           dev->name);
+       else if (pi->mod_type == FW_PORT_MOD_TYPE_ERROR)
+               netdev_info(dev, "%s: transceiver module error\n", dev->name);
+       else
+               netdev_info(dev, "%s: unknown module type %d inserted\n",
+                           dev->name, pi->mod_type);
 }
 
 int dbfifo_int_thresh = 10; /* 10 == 640 entry threshold */
@@ -483,28 +512,12 @@ static int link_start(struct net_device *dev)
        return ret;
 }
 
-int cxgb4_dcb_enabled(const struct net_device *dev)
-{
-#ifdef CONFIG_CHELSIO_T4_DCB
-       struct port_info *pi = netdev_priv(dev);
-
-       if (!pi->dcb.enabled)
-               return 0;
-
-       return ((pi->dcb.state == CXGB4_DCB_STATE_FW_ALLSYNCED) ||
-               (pi->dcb.state == CXGB4_DCB_STATE_HOST));
-#else
-       return 0;
-#endif
-}
-EXPORT_SYMBOL(cxgb4_dcb_enabled);
-
 #ifdef CONFIG_CHELSIO_T4_DCB
 /* Handle a Data Center Bridging update message from the firmware. */
 static void dcb_rpl(struct adapter *adap, const struct fw_port_cmd *pcmd)
 {
        int port = FW_PORT_CMD_PORTID_G(ntohl(pcmd->op_to_portid));
-       struct net_device *dev = adap->port[port];
+       struct net_device *dev = adap->port[adap->chan_map[port]];
        int old_dcb_enabled = cxgb4_dcb_enabled(dev);
        int new_dcb_enabled;
 
@@ -634,7 +647,8 @@ static int fwevtq_handler(struct sge_rspq *q, const __be64 *rsp,
                    action == FW_PORT_ACTION_GET_PORT_INFO) {
                        int port = FW_PORT_CMD_PORTID_G(
                                        be32_to_cpu(pcmd->op_to_portid));
-                       struct net_device *dev = q->adap->port[port];
+                       struct net_device *dev =
+                               q->adap->port[q->adap->chan_map[port]];
                        int state_input = ((pcmd->u.info.dcbxdis_pkd &
                                            FW_PORT_CMD_DCBXDIS_F)
                                           ? CXGB4_DCB_INPUT_FW_DISABLED
@@ -3738,7 +3752,10 @@ static int adap_init0(struct adapter *adap)
         * is excessively mismatched relative to the driver.)
         */
        t4_get_fw_version(adap, &adap->params.fw_vers);
+       t4_get_bs_version(adap, &adap->params.bs_vers);
        t4_get_tp_version(adap, &adap->params.tp_vers);
+       t4_get_exprom_version(adap, &adap->params.er_vers);
+
        ret = t4_check_fw_version(adap);
        /* If firmware is too old (not supported by driver) force an update. */
        if (ret)
@@ -4652,6 +4669,68 @@ static void cxgb4_check_pcie_caps(struct adapter *adap)
                         "suggested for optimal performance.\n");
 }
 
+/* Dump basic information about the adapter */
+static void print_adapter_info(struct adapter *adapter)
+{
+       /* Device information */
+       dev_info(adapter->pdev_dev, "Chelsio %s rev %d\n",
+                adapter->params.vpd.id,
+                CHELSIO_CHIP_RELEASE(adapter->params.chip));
+       dev_info(adapter->pdev_dev, "S/N: %s, P/N: %s\n",
+                adapter->params.vpd.sn, adapter->params.vpd.pn);
+
+       /* Firmware Version */
+       if (!adapter->params.fw_vers)
+               dev_warn(adapter->pdev_dev, "No firmware loaded\n");
+       else
+               dev_info(adapter->pdev_dev, "Firmware version: %u.%u.%u.%u\n",
+                        FW_HDR_FW_VER_MAJOR_G(adapter->params.fw_vers),
+                        FW_HDR_FW_VER_MINOR_G(adapter->params.fw_vers),
+                        FW_HDR_FW_VER_MICRO_G(adapter->params.fw_vers),
+                        FW_HDR_FW_VER_BUILD_G(adapter->params.fw_vers));
+
+       /* Bootstrap Firmware Version. (Some adapters don't have Bootstrap
+        * Firmware, so dev_info() is more appropriate here.)
+        */
+       if (!adapter->params.bs_vers)
+               dev_info(adapter->pdev_dev, "No bootstrap loaded\n");
+       else
+               dev_info(adapter->pdev_dev, "Bootstrap version: %u.%u.%u.%u\n",
+                        FW_HDR_FW_VER_MAJOR_G(adapter->params.bs_vers),
+                        FW_HDR_FW_VER_MINOR_G(adapter->params.bs_vers),
+                        FW_HDR_FW_VER_MICRO_G(adapter->params.bs_vers),
+                        FW_HDR_FW_VER_BUILD_G(adapter->params.bs_vers));
+
+       /* TP Microcode Version */
+       if (!adapter->params.tp_vers)
+               dev_warn(adapter->pdev_dev, "No TP Microcode loaded\n");
+       else
+               dev_info(adapter->pdev_dev,
+                        "TP Microcode version: %u.%u.%u.%u\n",
+                        FW_HDR_FW_VER_MAJOR_G(adapter->params.tp_vers),
+                        FW_HDR_FW_VER_MINOR_G(adapter->params.tp_vers),
+                        FW_HDR_FW_VER_MICRO_G(adapter->params.tp_vers),
+                        FW_HDR_FW_VER_BUILD_G(adapter->params.tp_vers));
+
+       /* Expansion ROM version */
+       if (!adapter->params.er_vers)
+               dev_info(adapter->pdev_dev, "No Expansion ROM loaded\n");
+       else
+               dev_info(adapter->pdev_dev,
+                        "Expansion ROM version: %u.%u.%u.%u\n",
+                        FW_HDR_FW_VER_MAJOR_G(adapter->params.er_vers),
+                        FW_HDR_FW_VER_MINOR_G(adapter->params.er_vers),
+                        FW_HDR_FW_VER_MICRO_G(adapter->params.er_vers),
+                        FW_HDR_FW_VER_BUILD_G(adapter->params.er_vers));
+
+       /* Software/Hardware configuration */
+       dev_info(adapter->pdev_dev, "Configuration: %sNIC %s, %s capable\n",
+                is_offload(adapter) ? "R" : "",
+                ((adapter->flags & USING_MSIX) ? "MSI-X" :
+                 (adapter->flags & USING_MSI) ? "MSI" : ""),
+                is_offload(adapter) ? "Offload" : "non-Offload");
+}
+
 static void print_port_info(const struct net_device *dev)
 {
        char buf[80];
@@ -4679,14 +4758,8 @@ static void print_port_info(const struct net_device *dev)
                --bufp;
        sprintf(bufp, "BASE-%s", t4_get_port_type_description(pi->port_type));
 
-       netdev_info(dev, "Chelsio %s rev %d %s %sNIC %s\n",
-                   adap->params.vpd.id,
-                   CHELSIO_CHIP_RELEASE(adap->params.chip), buf,
-                   is_offload(adap) ? "R" : "",
-                   (adap->flags & USING_MSIX) ? " MSI-X" :
-                   (adap->flags & USING_MSI) ? " MSI" : "");
-       netdev_info(dev, "S/N: %s, P/N: %s\n",
-                   adap->params.vpd.sn, adap->params.vpd.pn);
+       netdev_info(dev, "%s: Chelsio %s (%s) %s\n",
+                   dev->name, adap->params.vpd.id, adap->name, buf);
 }
 
 static void enable_pcie_relaxed_ordering(struct pci_dev *dev)
@@ -4838,12 +4911,23 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
                goto out_free_adapter;
        }
 
+       adapter->mbox_log = kzalloc(sizeof(*adapter->mbox_log) +
+                                   (sizeof(struct mbox_cmd) *
+                                    T4_OS_LOG_MBOX_CMDS),
+                                   GFP_KERNEL);
+       if (!adapter->mbox_log) {
+               err = -ENOMEM;
+               goto out_free_adapter;
+       }
+       adapter->mbox_log->size = T4_OS_LOG_MBOX_CMDS;
+
        /* PCI device has been enabled */
        adapter->flags |= DEV_ENABLED;
 
        adapter->regs = regs;
        adapter->pdev = pdev;
        adapter->pdev_dev = &pdev->dev;
+       adapter->name = pci_name(pdev);
        adapter->mbox = func;
        adapter->pf = func;
        adapter->msg_enable = dflt_msg_enable;
@@ -5074,6 +5158,8 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
        if (is_offload(adapter))
                attach_ulds(adapter);
 
+       print_adapter_info(adapter);
+
 sriov:
 #ifdef CONFIG_PCI_IOV
        if (func < ARRAY_SIZE(num_vf) && num_vf[func] > 0)
@@ -5093,6 +5179,7 @@ sriov:
        if (adapter->workq)
                destroy_workqueue(adapter->workq);
 
+       kfree(adapter->mbox_log);
        kfree(adapter);
  out_unmap_bar0:
        iounmap(regs);
@@ -5159,6 +5246,7 @@ static void remove_one(struct pci_dev *pdev)
                        adapter->flags &= ~DEV_ENABLED;
                }
                pci_release_regions(pdev);
+               kfree(adapter->mbox_log);
                synchronize_rcu();
                kfree(adapter);
        } else
index 6278e5a..bad253b 100644 (file)
@@ -3006,7 +3006,9 @@ void t4_free_sge_resources(struct adapter *adap)
                if (etq->q.desc) {
                        t4_eth_eq_free(adap, adap->mbox, adap->pf, 0,
                                       etq->q.cntxt_id);
+                       __netif_tx_lock_bh(etq->txq);
                        free_tx_desc(adap, &etq->q, etq->q.in_use, true);
+                       __netif_tx_unlock_bh(etq->txq);
                        kfree(etq->q.sdesc);
                        free_txq(adap, &etq->q);
                }
index 71586a3..a63addb 100644 (file)
@@ -224,18 +224,34 @@ static void fw_asrt(struct adapter *adap, u32 mbox_addr)
                  be32_to_cpu(asrt.u.assert.x), be32_to_cpu(asrt.u.assert.y));
 }
 
-static void dump_mbox(struct adapter *adap, int mbox, u32 data_reg)
+/**
+ *     t4_record_mbox - record a Firmware Mailbox Command/Reply in the log
+ *     @adapter: the adapter
+ *     @cmd: the Firmware Mailbox Command or Reply
+ *     @size: command length in bytes
+ *     @access: the time (ms) needed to access the Firmware Mailbox
+ *     @execute: the time (ms) the command spent being executed
+ */
+static void t4_record_mbox(struct adapter *adapter,
+                          const __be64 *cmd, unsigned int size,
+                          int access, int execute)
 {
-       dev_err(adap->pdev_dev,
-               "mbox %d: %llx %llx %llx %llx %llx %llx %llx %llx\n", mbox,
-               (unsigned long long)t4_read_reg64(adap, data_reg),
-               (unsigned long long)t4_read_reg64(adap, data_reg + 8),
-               (unsigned long long)t4_read_reg64(adap, data_reg + 16),
-               (unsigned long long)t4_read_reg64(adap, data_reg + 24),
-               (unsigned long long)t4_read_reg64(adap, data_reg + 32),
-               (unsigned long long)t4_read_reg64(adap, data_reg + 40),
-               (unsigned long long)t4_read_reg64(adap, data_reg + 48),
-               (unsigned long long)t4_read_reg64(adap, data_reg + 56));
+       struct mbox_cmd_log *log = adapter->mbox_log;
+       struct mbox_cmd *entry;
+       int i;
+
+       entry = mbox_cmd_log_entry(log, log->cursor++);
+       if (log->cursor == log->size)
+               log->cursor = 0;
+
+       for (i = 0; i < size / 8; i++)
+               entry->cmd[i] = be64_to_cpu(cmd[i]);
+       while (i < MBOX_LEN / 8)
+               entry->cmd[i++] = 0;
+       entry->timestamp = jiffies;
+       entry->seqno = log->seqno++;
+       entry->access = access;
+       entry->execute = execute;
 }
 
 /**
@@ -268,12 +284,16 @@ int t4_wr_mbox_meat_timeout(struct adapter *adap, int mbox, const void *cmd,
                1, 1, 3, 5, 10, 10, 20, 50, 100, 200
        };
 
+       u16 access = 0;
+       u16 execute = 0;
        u32 v;
        u64 res;
-       int i, ms, delay_idx;
+       int i, ms, delay_idx, ret;
        const __be64 *p = cmd;
        u32 data_reg = PF_REG(mbox, CIM_PF_MAILBOX_DATA_A);
        u32 ctl_reg = PF_REG(mbox, CIM_PF_MAILBOX_CTRL_A);
+       __be64 cmd_rpl[MBOX_LEN / 8];
+       u32 pcie_fw;
 
        if ((size & 15) || size > MBOX_LEN)
                return -EINVAL;
@@ -285,13 +305,24 @@ int t4_wr_mbox_meat_timeout(struct adapter *adap, int mbox, const void *cmd,
        if (adap->pdev->error_state != pci_channel_io_normal)
                return -EIO;
 
+       /* If we have a negative timeout, that implies that we can't sleep. */
+       if (timeout < 0) {
+               sleep_ok = false;
+               timeout = -timeout;
+       }
+
        v = MBOWNER_G(t4_read_reg(adap, ctl_reg));
        for (i = 0; v == MBOX_OWNER_NONE && i < 3; i++)
                v = MBOWNER_G(t4_read_reg(adap, ctl_reg));
 
-       if (v != MBOX_OWNER_DRV)
-               return v ? -EBUSY : -ETIMEDOUT;
+       if (v != MBOX_OWNER_DRV) {
+               ret = (v == MBOX_OWNER_FW) ? -EBUSY : -ETIMEDOUT;
+               t4_record_mbox(adap, cmd, MBOX_LEN, access, ret);
+               return ret;
+       }
 
+       /* Copy in the new mailbox command and send it on its way ... */
+       t4_record_mbox(adap, cmd, MBOX_LEN, access, 0);
        for (i = 0; i < size; i += 8)
                t4_write_reg64(adap, data_reg + i, be64_to_cpu(*p++));
 
@@ -301,7 +332,10 @@ int t4_wr_mbox_meat_timeout(struct adapter *adap, int mbox, const void *cmd,
        delay_idx = 0;
        ms = delay[0];
 
-       for (i = 0; i < timeout; i += ms) {
+       for (i = 0;
+            !((pcie_fw = t4_read_reg(adap, PCIE_FW_A)) & PCIE_FW_ERR_F) &&
+            i < timeout;
+            i += ms) {
                if (sleep_ok) {
                        ms = delay[delay_idx];  /* last element may repeat */
                        if (delay_idx < ARRAY_SIZE(delay) - 1)
@@ -317,26 +351,31 @@ int t4_wr_mbox_meat_timeout(struct adapter *adap, int mbox, const void *cmd,
                                continue;
                        }
 
-                       res = t4_read_reg64(adap, data_reg);
+                       get_mbox_rpl(adap, cmd_rpl, MBOX_LEN / 8, data_reg);
+                       res = be64_to_cpu(cmd_rpl[0]);
+
                        if (FW_CMD_OP_G(res >> 32) == FW_DEBUG_CMD) {
                                fw_asrt(adap, data_reg);
                                res = FW_CMD_RETVAL_V(EIO);
                        } else if (rpl) {
-                               get_mbox_rpl(adap, rpl, size / 8, data_reg);
+                               memcpy(rpl, cmd_rpl, size);
                        }
 
-                       if (FW_CMD_RETVAL_G((int)res))
-                               dump_mbox(adap, mbox, data_reg);
                        t4_write_reg(adap, ctl_reg, 0);
+
+                       execute = i + ms;
+                       t4_record_mbox(adap, cmd_rpl,
+                                      MBOX_LEN, access, execute);
                        return -FW_CMD_RETVAL_G((int)res);
                }
        }
 
-       dump_mbox(adap, mbox, data_reg);
+       ret = (pcie_fw & PCIE_FW_ERR_F) ? -ENXIO : -ETIMEDOUT;
+       t4_record_mbox(adap, cmd, MBOX_LEN, access, ret);
        dev_err(adap->pdev_dev, "command %#x in mailbox %d timed out\n",
                *(const u8 *)cmd, mbox);
        t4_report_fw_error(adap);
-       return -ETIMEDOUT;
+       return ret;
 }
 
 int t4_wr_mbox_meat(struct adapter *adap, int mbox, const void *cmd, int size,
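
A calling-convention change hides in the hunk above: t4_wr_mbox_meat_timeout() now reads a negative timeout as "poll, never sleep", so atomic callers can encode the budget and the no-sleep constraint in a single argument. Hedged usage sketch (the concrete budget value is an assumption):

/* Sketch: pass a negated timeout from atomic context; the function
 * flips sleep_ok to false and polls for the absolute value.
 */
ret = t4_wr_mbox_meat_timeout(adap, adap->mbox, &cmd, sizeof(cmd),
			      &rpl, true, -FW_CMD_MAX_TIMEOUT);
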
@@ -2936,6 +2975,20 @@ int t4_get_fw_version(struct adapter *adapter, u32 *vers)
                             vers, 0);
 }
 
+/**
+ *     t4_get_bs_version - read the firmware bootstrap version
+ *     @adapter: the adapter
+ *     @vers: where to place the version
+ *
+ *     Reads the FW Bootstrap version from flash.
+ */
+int t4_get_bs_version(struct adapter *adapter, u32 *vers)
+{
+       return t4_read_flash(adapter, FLASH_FWBOOTSTRAP_START +
+                            offsetof(struct fw_hdr, fw_ver), 1,
+                            vers, 0);
+}
+
 /**
  *     t4_get_tp_version - read the TP microcode version
  *     @adapter: the adapter
@@ -7089,52 +7142,122 @@ int t4_ofld_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
 }
 
 /**
- *     t4_handle_fw_rpl - process a FW reply message
+ *     t4_link_down_rc_str - return a string for a Link Down Reason Code
  *     @adap: the adapter
+ *     @link_down_rc: Link Down Reason Code
+ *
+ *     Returns a string representation of the Link Down Reason Code.
+ */
+static const char *t4_link_down_rc_str(unsigned char link_down_rc)
+{
+       static const char * const reason[] = {
+               "Link Down",
+               "Remote Fault",
+               "Auto-negotiation Failure",
+               "Reserved",
+               "Insufficient Airflow",
+               "Unable To Determine Reason",
+               "No RX Signal Detected",
+               "Reserved",
+       };
+
+       if (link_down_rc >= ARRAY_SIZE(reason))
+               return "Bad Reason Code";
+
+       return reason[link_down_rc];
+}
+
+/**
+ *     t4_handle_get_port_info - process a FW reply message
+ *     @pi: the port info
  *     @rpl: start of the FW message
  *
- *     Processes a FW message, such as link state change messages.
+ *     Processes a GET_PORT_INFO FW reply message.
+ */
+void t4_handle_get_port_info(struct port_info *pi, const __be64 *rpl)
+{
+       const struct fw_port_cmd *p = (const void *)rpl;
+       struct adapter *adap = pi->adapter;
+
+       /* link/module state change message */
+       int speed = 0, fc = 0;
+       struct link_config *lc;
+       u32 stat = be32_to_cpu(p->u.info.lstatus_to_modtype);
+       int link_ok = (stat & FW_PORT_CMD_LSTATUS_F) != 0;
+       u32 mod = FW_PORT_CMD_MODTYPE_G(stat);
+
+       if (stat & FW_PORT_CMD_RXPAUSE_F)
+               fc |= PAUSE_RX;
+       if (stat & FW_PORT_CMD_TXPAUSE_F)
+               fc |= PAUSE_TX;
+       if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_100M))
+               speed = 100;
+       else if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_1G))
+               speed = 1000;
+       else if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_10G))
+               speed = 10000;
+       else if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_40G))
+               speed = 40000;
+
+       lc = &pi->link_cfg;
+
+       if (mod != pi->mod_type) {
+               pi->mod_type = mod;
+               t4_os_portmod_changed(adap, pi->port_id);
+       }
+       if (link_ok != lc->link_ok || speed != lc->speed ||
+           fc != lc->fc) {     /* something changed */
+               if (!link_ok && lc->link_ok) {
+                       unsigned char rc = FW_PORT_CMD_LINKDNRC_G(stat);
+
+                       lc->link_down_rc = rc;
+                       dev_warn(adap->pdev_dev,
+                                "Port %d link down, reason: %s\n",
+                                pi->port_id, t4_link_down_rc_str(rc));
+               }
+               lc->link_ok = link_ok;
+               lc->speed = speed;
+               lc->fc = fc;
+               lc->supported = be16_to_cpu(p->u.info.pcap);
+               t4_os_link_changed(adap, pi->port_id, link_ok);
+       }
+}
+
+/**
+ *      t4_handle_fw_rpl - process a FW reply message
+ *      @adap: the adapter
+ *      @rpl: start of the FW message
+ *
+ *      Processes a FW message, such as link state change messages.
  */
 int t4_handle_fw_rpl(struct adapter *adap, const __be64 *rpl)
 {
        u8 opcode = *(const u8 *)rpl;
 
-       if (opcode == FW_PORT_CMD) {    /* link/module state change message */
-               int speed = 0, fc = 0;
-               const struct fw_port_cmd *p = (void *)rpl;
+       /* This might be a port command ... this simplifies the following
+        * conditionals ...  We can get away with pre-dereferencing
+        * action_to_len16 because it's in the first 16 bytes and all messages
+        * will be at least that long.
+        */
+       const struct fw_port_cmd *p = (const void *)rpl;
+       unsigned int action =
+               FW_PORT_CMD_ACTION_G(be32_to_cpu(p->action_to_len16));
+
+       if (opcode == FW_PORT_CMD && action == FW_PORT_ACTION_GET_PORT_INFO) {
+               int i;
                int chan = FW_PORT_CMD_PORTID_G(be32_to_cpu(p->op_to_portid));
-               int port = adap->chan_map[chan];
-               struct port_info *pi = adap2pinfo(adap, port);
-               struct link_config *lc = &pi->link_cfg;
-               u32 stat = be32_to_cpu(p->u.info.lstatus_to_modtype);
-               int link_ok = (stat & FW_PORT_CMD_LSTATUS_F) != 0;
-               u32 mod = FW_PORT_CMD_MODTYPE_G(stat);
-
-               if (stat & FW_PORT_CMD_RXPAUSE_F)
-                       fc |= PAUSE_RX;
-               if (stat & FW_PORT_CMD_TXPAUSE_F)
-                       fc |= PAUSE_TX;
-               if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_100M))
-                       speed = 100;
-               else if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_1G))
-                       speed = 1000;
-               else if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_10G))
-                       speed = 10000;
-               else if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_40G))
-                       speed = 40000;
-
-               if (link_ok != lc->link_ok || speed != lc->speed ||
-                   fc != lc->fc) {                    /* something changed */
-                       lc->link_ok = link_ok;
-                       lc->speed = speed;
-                       lc->fc = fc;
-                       lc->supported = be16_to_cpu(p->u.info.pcap);
-                       t4_os_link_changed(adap, port, link_ok);
-               }
-               if (mod != pi->mod_type) {
-                       pi->mod_type = mod;
-                       t4_os_portmod_changed(adap, port);
+               struct port_info *pi = NULL;
+
+               for_each_port(adap, i) {
+                       pi = adap2pinfo(adap, i);
+                       if (pi->tx_chan == chan)
+                               break;
                }
+
+               t4_handle_get_port_info(pi, rpl);
+       } else {
+               dev_warn(adap->pdev_dev, "Unknown firmware reply %d\n", opcode);
+               return -EINVAL;
        }
        return 0;
 }
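
Splitting t4_handle_get_port_info() out of the reply demultiplexer lets any path that already holds the right port_info process a GET_PORT_INFO reply directly, while t4_handle_fw_rpl() now resolves the firmware channel by scanning each port's tx_chan instead of trusting chan_map. A hedged sketch of a direct caller:

/* Sketch: feed a known port's GET_PORT_INFO reply straight to the
 * split-out handler, bypassing the opcode demux.
 */
if (FW_PORT_CMD_ACTION_G(be32_to_cpu(pcmd->action_to_len16)) ==
    FW_PORT_ACTION_GET_PORT_INFO)
	t4_handle_get_port_info(pi, (const __be64 *)pcmd);
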
@@ -7654,61 +7777,74 @@ int t4_init_rss_mode(struct adapter *adap, int mbox)
        return 0;
 }
 
-int t4_port_init(struct adapter *adap, int mbox, int pf, int vf)
+/**
+ *     t4_init_portinfo - allocate a virtual interface and initialize port_info
+ *     @pi: the port_info
+ *     @mbox: mailbox to use for the FW command
+ *     @port: physical port associated with the VI
+ *     @pf: the PF owning the VI
+ *     @vf: the VF owning the VI
+ *     @mac: the MAC address of the VI
+ *
+ *     Allocates a virtual interface for the given physical port.  If @mac is
+ *     not %NULL it contains the MAC address of the VI as assigned by FW.
+ *     @mac should be large enough to hold an Ethernet address.
+ *     Returns < 0 on error.
+ */
+int t4_init_portinfo(struct port_info *pi, int mbox,
+                    int port, int pf, int vf, u8 mac[])
 {
-       u8 addr[6];
-       int ret, i, j = 0;
+       int ret;
        struct fw_port_cmd c;
-       struct fw_rss_vi_config_cmd rvc;
+       unsigned int rss_size;
 
        memset(&c, 0, sizeof(c));
-       memset(&rvc, 0, sizeof(rvc));
+       c.op_to_portid = cpu_to_be32(FW_CMD_OP_V(FW_PORT_CMD) |
+                                    FW_CMD_REQUEST_F | FW_CMD_READ_F |
+                                    FW_PORT_CMD_PORTID_V(port));
+       c.action_to_len16 = cpu_to_be32(
+               FW_PORT_CMD_ACTION_V(FW_PORT_ACTION_GET_PORT_INFO) |
+               FW_LEN16(c));
+       ret = t4_wr_mbox(pi->adapter, mbox, &c, sizeof(c), &c);
+       if (ret)
+               return ret;
+
+       ret = t4_alloc_vi(pi->adapter, mbox, port, pf, vf, 1, mac, &rss_size);
+       if (ret < 0)
+               return ret;
+
+       pi->viid = ret;
+       pi->tx_chan = port;
+       pi->lport = port;
+       pi->rss_size = rss_size;
+
+       ret = be32_to_cpu(c.u.info.lstatus_to_modtype);
+       pi->mdio_addr = (ret & FW_PORT_CMD_MDIOCAP_F) ?
+               FW_PORT_CMD_MDIOADDR_G(ret) : -1;
+       pi->port_type = FW_PORT_CMD_PTYPE_G(ret);
+       pi->mod_type = FW_PORT_MOD_TYPE_NA;
+
+       init_link_config(&pi->link_cfg, be16_to_cpu(c.u.info.pcap));
+       return 0;
+}
+
+int t4_port_init(struct adapter *adap, int mbox, int pf, int vf)
+{
+       u8 addr[6];
+       int ret, i, j = 0;
 
        for_each_port(adap, i) {
-               unsigned int rss_size;
-               struct port_info *p = adap2pinfo(adap, i);
+               struct port_info *pi = adap2pinfo(adap, i);
 
                while ((adap->params.portvec & (1 << j)) == 0)
                        j++;
 
-               c.op_to_portid = cpu_to_be32(FW_CMD_OP_V(FW_PORT_CMD) |
-                                            FW_CMD_REQUEST_F | FW_CMD_READ_F |
-                                            FW_PORT_CMD_PORTID_V(j));
-               c.action_to_len16 = cpu_to_be32(
-                       FW_PORT_CMD_ACTION_V(FW_PORT_ACTION_GET_PORT_INFO) |
-                       FW_LEN16(c));
-               ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
+               ret = t4_init_portinfo(pi, mbox, j, pf, vf, addr);
                if (ret)
                        return ret;
 
-               ret = t4_alloc_vi(adap, mbox, j, pf, vf, 1, addr, &rss_size);
-               if (ret < 0)
-                       return ret;
-
-               p->viid = ret;
-               p->tx_chan = j;
-               p->lport = j;
-               p->rss_size = rss_size;
                memcpy(adap->port[i]->dev_addr, addr, ETH_ALEN);
                adap->port[i]->dev_port = j;
-
-               ret = be32_to_cpu(c.u.info.lstatus_to_modtype);
-               p->mdio_addr = (ret & FW_PORT_CMD_MDIOCAP_F) ?
-                       FW_PORT_CMD_MDIOADDR_G(ret) : -1;
-               p->port_type = FW_PORT_CMD_PTYPE_G(ret);
-               p->mod_type = FW_PORT_MOD_TYPE_NA;
-
-               rvc.op_to_viid =
-                       cpu_to_be32(FW_CMD_OP_V(FW_RSS_VI_CONFIG_CMD) |
-                                   FW_CMD_REQUEST_F | FW_CMD_READ_F |
-                                   FW_RSS_VI_CONFIG_CMD_VIID(p->viid));
-               rvc.retval_len16 = cpu_to_be32(FW_LEN16(rvc));
-               ret = t4_wr_mbox(adap, mbox, &rvc, sizeof(rvc), &rvc);
-               if (ret)
-                       return ret;
-               p->rss_mode = be32_to_cpu(rvc.u.basicvirtual.defaultq_to_udpen);
-
-               init_link_config(&p->link_cfg, be16_to_cpu(c.u.info.pcap));
                j++;
        }
        return 0;
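
For context, adap->params.portvec is a bitmap of the physical ports owned by this function, so the while loop above advances j to the i-th set bit. A hedged equivalent using the kernel's bitmap helpers, assuming portvec fits in an unsigned long:

    /* Sketch: find the next physical port, i.e. the next set bit. */
    unsigned long portvec = adap->params.portvec;
    j = find_next_bit(&portvec, BITS_PER_LONG, j);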
index 2fc60e8..7f59ca4 100644 (file)
@@ -220,6 +220,13 @@ enum {
        FLASH_FW_START = FLASH_START(FLASH_FW_START_SEC),
        FLASH_FW_MAX_SIZE = FLASH_MAX_SIZE(FLASH_FW_NSECS),
 
+       /* Location of bootstrap firmware image in FLASH.
+        */
+       FLASH_FWBOOTSTRAP_START_SEC = 27,
+       FLASH_FWBOOTSTRAP_NSECS = 1,
+       FLASH_FWBOOTSTRAP_START = FLASH_START(FLASH_FWBOOTSTRAP_START_SEC),
+       FLASH_FWBOOTSTRAP_MAX_SIZE = FLASH_MAX_SIZE(FLASH_FWBOOTSTRAP_NSECS),
+
        /*
         * iSCSI persistent/crash information.
         */
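
The bootstrap constants above are derived from sector numbers through FLASH_START()/FLASH_MAX_SIZE(). Assuming the 64 KB serial-flash sector size used elsewhere in this driver (an assumption, not stated in this hunk), they work out as:

    /* Sketch, assuming SF_SEC_SIZE == 64 * 1024:
     *   FLASH_FWBOOTSTRAP_START    = 27 * 64 KB = 0x1b0000
     *   FLASH_FWBOOTSTRAP_MAX_SIZE =  1 * 64 KB = 0x10000
     */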
index 7ad6d4e..392d664 100644 (file)
@@ -2510,6 +2510,11 @@ struct fw_port_cmd {
 #define FW_PORT_CMD_PTYPE_G(x) \
        (((x) >> FW_PORT_CMD_PTYPE_S) & FW_PORT_CMD_PTYPE_M)
 
+#define FW_PORT_CMD_LINKDNRC_S         5
+#define FW_PORT_CMD_LINKDNRC_M         0x7
+#define FW_PORT_CMD_LINKDNRC_G(x)      \
+       (((x) >> FW_PORT_CMD_LINKDNRC_S) & FW_PORT_CMD_LINKDNRC_M)
+
 #define FW_PORT_CMD_MODTYPE_S          0
 #define FW_PORT_CMD_MODTYPE_M          0x1f
 #define FW_PORT_CMD_MODTYPE_V(x)       ((x) << FW_PORT_CMD_MODTYPE_S)
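
The new LINKDNRC field carries a link-down reason code and, like the neighbouring PTYPE/MODTYPE fields, is extracted from the lstatus_to_modtype word. An illustrative use:

    /* Sketch: pull the 3-bit link-down reason out of the status word. */
    u32 stat = be32_to_cpu(p->u.info.lstatus_to_modtype);
    u32 linkdnrc = FW_PORT_CMD_LINKDNRC_G(stat);   /* bits 5..7 */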
index 4a707c3..734dd77 100644 (file)
@@ -387,6 +387,10 @@ struct adapter {
        /* various locks */
        spinlock_t stats_lock;
 
+       /* support for mailbox command/reply logging */
+#define T4VF_OS_LOG_MBOX_CMDS 256
+       struct mbox_cmd_log *mbox_log;
+
        /* list of MAC addresses in MPS Hash */
        struct list_head mac_hlist;
 };
index 730fec7..04fc6f6 100644 (file)
@@ -1703,6 +1703,105 @@ static const struct ethtool_ops cxgb4vf_ethtool_ops = {
  * ================================================
  */
 
+/*
+ * Show Firmware Mailbox Command/Reply Log
+ *
+ * Note that we don't do any locking when dumping the Firmware Mailbox Log so
+ * it's possible that we can catch things during a log update and therefore
+ * see partially corrupted log entries.  But it's probably Good Enough(tm).
+ * If we ever decide that we want to make sure that we're dumping a coherent
+ * log, we'd need to perform locking in the mailbox logging and in
+ * mboxlog_open() where we'd need to grab the entire mailbox log in one go
+ * like we do for the Firmware Device Log.  But as stated above, meh ...
+ */
+static int mboxlog_show(struct seq_file *seq, void *v)
+{
+       struct adapter *adapter = seq->private;
+       struct mbox_cmd_log *log = adapter->mbox_log;
+       struct mbox_cmd *entry;
+       int entry_idx, i;
+
+       if (v == SEQ_START_TOKEN) {
+               seq_printf(seq,
+                          "%10s  %15s  %5s  %5s  %s\n",
+                          "Seq#", "Tstamp", "Atime", "Etime",
+                          "Command/Reply");
+               return 0;
+       }
+
+       entry_idx = log->cursor + ((uintptr_t)v - 2);
+       if (entry_idx >= log->size)
+               entry_idx -= log->size;
+       entry = mbox_cmd_log_entry(log, entry_idx);
+
+       /* skip over unused entries */
+       if (entry->timestamp == 0)
+               return 0;
+
+       seq_printf(seq, "%10u  %15llu  %5d  %5d",
+                  entry->seqno, entry->timestamp,
+                  entry->access, entry->execute);
+       for (i = 0; i < MBOX_LEN / 8; i++) {
+               u64 flit = entry->cmd[i];
+               u32 hi = (u32)(flit >> 32);
+               u32 lo = (u32)flit;
+
+               seq_printf(seq, "  %08x %08x", hi, lo);
+       }
+       seq_puts(seq, "\n");
+       return 0;
+}
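
The index arithmetic in mboxlog_show() is a plain circular-buffer read: log->cursor points at the slot the next command will overwrite, i.e. the oldest entry once the log has wrapped, and the "- 2" folds away the 1-based seq_file position plus the header token. In sketch form:

    /* Sketch: map a 0-based log position to a physical slot. */
    entry_idx = log->cursor + pos;         /* pos = (uintptr_t)v - 2 */
    if (entry_idx >= log->size)
            entry_idx -= log->size;        /* wrap around */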
+
+static inline void *mboxlog_get_idx(struct seq_file *seq, loff_t pos)
+{
+       struct adapter *adapter = seq->private;
+       struct mbox_cmd_log *log = adapter->mbox_log;
+
+       return ((pos <= log->size) ? (void *)(uintptr_t)(pos + 1) : NULL);
+}
+
+static void *mboxlog_start(struct seq_file *seq, loff_t *pos)
+{
+       return *pos ? mboxlog_get_idx(seq, *pos) : SEQ_START_TOKEN;
+}
+
+static void *mboxlog_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+       ++*pos;
+       return mboxlog_get_idx(seq, *pos);
+}
+
+static void mboxlog_stop(struct seq_file *seq, void *v)
+{
+}
+
+static const struct seq_operations mboxlog_seq_ops = {
+       .start = mboxlog_start,
+       .next  = mboxlog_next,
+       .stop  = mboxlog_stop,
+       .show  = mboxlog_show
+};
+
+static int mboxlog_open(struct inode *inode, struct file *file)
+{
+       int res = seq_open(file, &mboxlog_seq_ops);
+
+       if (!res) {
+               struct seq_file *seq = file->private_data;
+
+               seq->private = inode->i_private;
+       }
+       return res;
+}
+
+static const struct file_operations mboxlog_fops = {
+       .owner   = THIS_MODULE,
+       .open    = mboxlog_open,
+       .read    = seq_read,
+       .llseek  = seq_lseek,
+       .release = seq_release,
+};
+
 /*
  * Show SGE Queue Set information.  We display QPL Queue Sets per line.
  */
@@ -2122,6 +2221,7 @@ struct cxgb4vf_debugfs_entry {
 };
 
 static struct cxgb4vf_debugfs_entry debugfs_files[] = {
+       { "mboxlog",    S_IRUGO, &mboxlog_fops },
        { "sge_qinfo",  S_IRUGO, &sge_qinfo_debugfs_fops },
        { "sge_qstats", S_IRUGO, &sge_qstats_proc_fops },
        { "resources",  S_IRUGO, &resources_proc_fops },
@@ -2664,6 +2764,16 @@ static int cxgb4vf_pci_probe(struct pci_dev *pdev,
        adapter->pdev = pdev;
        adapter->pdev_dev = &pdev->dev;
 
+       adapter->mbox_log = kzalloc(sizeof(*adapter->mbox_log) +
+                                   (sizeof(struct mbox_cmd) *
+                                    T4VF_OS_LOG_MBOX_CMDS),
+                                   GFP_KERNEL);
+       if (!adapter->mbox_log) {
+               err = -ENOMEM;
+               goto err_free_adapter;
+       }
+       adapter->mbox_log->size = T4VF_OS_LOG_MBOX_CMDS;
+
        /*
         * Initialize SMP data synchronization resources.
         */
@@ -2913,6 +3023,7 @@ err_unmap_bar0:
        iounmap(adapter->regs);
 
 err_free_adapter:
+       kfree(adapter->mbox_log);
        kfree(adapter);
 
 err_release_regions:
@@ -2982,6 +3093,7 @@ static void cxgb4vf_pci_remove(struct pci_dev *pdev)
                iounmap(adapter->regs);
                if (!is_t4(adapter->params.chip))
                        iounmap(adapter->bar2);
+               kfree(adapter->mbox_log);
                kfree(adapter);
        }
 
index 1ccd282..1bb57d3 100644 (file)
@@ -1448,7 +1448,7 @@ int t4vf_eth_xmit(struct sk_buff *skb, struct net_device *dev)
         * the new TX descriptors and return success.
         */
        txq_advance(&txq->q, ndesc);
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        ring_tx_db(adapter, &txq->q, ndesc);
        return NETDEV_TX_OK;
 
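
This hunk is part of a tree-wide conversion away from writing dev->trans_start directly. Reproduced from memory as a sketch rather than verbatim, the helper in include/linux/netdevice.h at this point stamps the per-queue trans_start of queue 0 instead of the net_device field:

    /* Sketch of netif_trans_update() (illustrative, not verbatim). */
    static inline void netif_trans_update(struct net_device *dev)
    {
            struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

            if (txq->trans_start != jiffies)
                    txq->trans_start = jiffies;
    }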
index 9b40a85..438374a 100644 (file)
@@ -36,6 +36,7 @@
 #ifndef __T4VF_COMMON_H__
 #define __T4VF_COMMON_H__
 
+#include "../cxgb4/t4_hw.h"
 #include "../cxgb4/t4fw_api.h"
 
 #define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision))
@@ -227,6 +228,34 @@ struct adapter_params {
        u8 nports;                      /* # of Ethernet "ports" */
 };
 
+/* Firmware Mailbox Command/Reply log.  All values are in Host-Endian format.
+ * The access and execute times are signed in order to accommodate negative
+ * error returns.
+ */
+struct mbox_cmd {
+       u64 cmd[MBOX_LEN / 8];          /* a Firmware Mailbox Command/Reply */
+       u64 timestamp;                  /* OS-dependent timestamp */
+       u32 seqno;                      /* sequence number */
+       s16 access;                     /* time (ms) to access mailbox */
+       s16 execute;                    /* time (ms) to execute */
+};
+
+struct mbox_cmd_log {
+       unsigned int size;              /* number of entries in the log */
+       unsigned int cursor;            /* next position in the log to write */
+       u32 seqno;                      /* next sequence number */
+       /* variable length mailbox command log starts here */
+};
+
+/* Given a pointer to a Firmware Mailbox Command Log and a log entry index,
+ * return a pointer to the specified entry.
+ */
+static inline struct mbox_cmd *mbox_cmd_log_entry(struct mbox_cmd_log *log,
+                                                 unsigned int entry_idx)
+{
+       return &((struct mbox_cmd *)&(log)[1])[entry_idx];
+}
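
mbox_cmd_log_entry() relies on the header and the entries sharing one allocation: &log[1] is the first byte past the fixed-size header, where the variable-length array begins. A minimal sketch of the matching allocation, mirroring the probe code earlier in this patch:

    struct mbox_cmd_log *log;
    unsigned int n = 256;   /* e.g. T4VF_OS_LOG_MBOX_CMDS */

    log = kzalloc(sizeof(*log) + n * sizeof(struct mbox_cmd), GFP_KERNEL);
    if (log) {
            log->size = n;
            /* mbox_cmd_log_entry(log, 0) == (struct mbox_cmd *)&log[1] */
    }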
+
 #include "adapter.h"
 
 #ifndef PCI_VENDOR_ID_CHELSIO
index fed83d8..955ff7c 100644 (file)
@@ -76,21 +76,33 @@ static void get_mbox_rpl(struct adapter *adapter, __be64 *rpl, int size,
                *rpl++ = cpu_to_be64(t4_read_reg64(adapter, mbox_data));
 }
 
-/*
- * Dump contents of mailbox with a leading tag.
+/**
+ *     t4vf_record_mbox - record a Firmware Mailbox Command/Reply in the log
+ *     @adapter: the adapter
+ *     @cmd: the Firmware Mailbox Command or Reply
+ *     @size: command length in bytes
+ *     @access: the time (ms) needed to access the Firmware Mailbox
+ *     @execute: the time (ms) the command spent being executed
  */
-static void dump_mbox(struct adapter *adapter, const char *tag, u32 mbox_data)
+static void t4vf_record_mbox(struct adapter *adapter, const __be64 *cmd,
+                            int size, int access, int execute)
 {
-       dev_err(adapter->pdev_dev,
-               "mbox %s: %llx %llx %llx %llx %llx %llx %llx %llx\n", tag,
-               (unsigned long long)t4_read_reg64(adapter, mbox_data +  0),
-               (unsigned long long)t4_read_reg64(adapter, mbox_data +  8),
-               (unsigned long long)t4_read_reg64(adapter, mbox_data + 16),
-               (unsigned long long)t4_read_reg64(adapter, mbox_data + 24),
-               (unsigned long long)t4_read_reg64(adapter, mbox_data + 32),
-               (unsigned long long)t4_read_reg64(adapter, mbox_data + 40),
-               (unsigned long long)t4_read_reg64(adapter, mbox_data + 48),
-               (unsigned long long)t4_read_reg64(adapter, mbox_data + 56));
+       struct mbox_cmd_log *log = adapter->mbox_log;
+       struct mbox_cmd *entry;
+       int i;
+
+       entry = mbox_cmd_log_entry(log, log->cursor++);
+       if (log->cursor == log->size)
+               log->cursor = 0;
+
+       for (i = 0; i < size / 8; i++)
+               entry->cmd[i] = be64_to_cpu(cmd[i]);
+       while (i < MBOX_LEN / 8)
+               entry->cmd[i++] = 0;
+       entry->timestamp = jiffies;
+       entry->seqno = log->seqno++;
+       entry->access = access;
+       entry->execute = execute;
 }
 
 /**
@@ -120,10 +132,13 @@ int t4vf_wr_mbox_core(struct adapter *adapter, const void *cmd, int size,
                1, 1, 3, 5, 10, 10, 20, 50, 100
        };
 
+       u16 access = 0, execute = 0;
        u32 v, mbox_data;
-       int i, ms, delay_idx;
+       int i, ms, delay_idx, ret;
        const __be64 *p;
        u32 mbox_ctl = T4VF_CIM_BASE_ADDR + CIM_VF_EXT_MAILBOX_CTRL;
+       u32 cmd_op = FW_CMD_OP_G(be32_to_cpu(((struct fw_cmd_hdr *)cmd)->hi));
+       __be64 cmd_rpl[MBOX_LEN / 8];
 
        /* In T6, mailbox size is changed to 128 bytes to avoid
         * invalidating the entire prefetch buffer.
@@ -148,8 +163,11 @@ int t4vf_wr_mbox_core(struct adapter *adapter, const void *cmd, int size,
        v = MBOWNER_G(t4_read_reg(adapter, mbox_ctl));
        for (i = 0; v == MBOX_OWNER_NONE && i < 3; i++)
                v = MBOWNER_G(t4_read_reg(adapter, mbox_ctl));
-       if (v != MBOX_OWNER_DRV)
-               return v == MBOX_OWNER_FW ? -EBUSY : -ETIMEDOUT;
+       if (v != MBOX_OWNER_DRV) {
+               ret = (v == MBOX_OWNER_FW) ? -EBUSY : -ETIMEDOUT;
+               t4vf_record_mbox(adapter, cmd, size, access, ret);
+               return ret;
+       }
 
        /*
         * Write the command array into the Mailbox Data register array and
@@ -164,6 +182,8 @@ int t4vf_wr_mbox_core(struct adapter *adapter, const void *cmd, int size,
         * Data registers before doing the write to the VF Mailbox Control
         * register.
         */
+       if (cmd_op != FW_VI_STATS_CMD)
+               t4vf_record_mbox(adapter, cmd, size, access, 0);
        for (i = 0, p = cmd; i < size; i += 8)
                t4_write_reg64(adapter, mbox_data + i, be64_to_cpu(*p++));
        t4_read_reg(adapter, mbox_data);         /* flush write */
@@ -209,31 +229,33 @@ int t4vf_wr_mbox_core(struct adapter *adapter, const void *cmd, int size,
                         * We return the (negated) firmware command return
                         * code (this depends on FW_SUCCESS == 0).
                         */
+                       get_mbox_rpl(adapter, cmd_rpl, size, mbox_data);
 
                        /* return value in low-order little-endian word */
-                       v = t4_read_reg(adapter, mbox_data);
-                       if (FW_CMD_RETVAL_G(v))
-                               dump_mbox(adapter, "FW Error", mbox_data);
+                       v = be64_to_cpu(cmd_rpl[0]);
 
                        if (rpl) {
                                /* request bit in high-order BE word */
                                WARN_ON((be32_to_cpu(*(const __be32 *)cmd)
                                         & FW_CMD_REQUEST_F) == 0);
-                               get_mbox_rpl(adapter, rpl, size, mbox_data);
+                               memcpy(rpl, cmd_rpl, size);
                                WARN_ON((be32_to_cpu(*(__be32 *)rpl)
                                         & FW_CMD_REQUEST_F) != 0);
                        }
                        t4_write_reg(adapter, mbox_ctl,
                                     MBOWNER_V(MBOX_OWNER_NONE));
+                       execute = i + ms;
+                       if (cmd_op != FW_VI_STATS_CMD)
+                               t4vf_record_mbox(adapter, cmd_rpl, size, access,
+                                                execute);
                        return -FW_CMD_RETVAL_G(v);
                }
        }
 
-       /*
-        * We timed out.  Return the error ...
-        */
-       dump_mbox(adapter, "FW Timeout", mbox_data);
-       return -ETIMEDOUT;
+       /* We timed out.  Return the error ... */
+       ret = -ETIMEDOUT;
+       t4vf_record_mbox(adapter, cmd, size, access, ret);
+       return ret;
 }
 
 #define ADVERT_MASK (FW_PORT_CAP_SPEED_100M | FW_PORT_CAP_SPEED_1G |\
index 48d9194..9e06130 100644 (file)
@@ -966,7 +966,7 @@ dm9000_init_dm9000(struct net_device *dev)
        /* Init Driver variable */
        db->tx_pkt_cnt = 0;
        db->queue_pkt_len = 0;
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 }
 
 /* Our watchdog timed out. Called by the networking layer */
@@ -985,7 +985,7 @@ static void dm9000_timeout(struct net_device *dev)
        dm9000_init_dm9000(dev);
        dm9000_unmask_interrupts(db);
        /* We can accept TX packets again */
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
 
        /* Restore previous register address */
index 3acde3b..cbe8497 100644 (file)
@@ -1336,7 +1336,7 @@ de4x5_open(struct net_device *dev)
     }
 
     lp->interrupt = UNMASK_INTERRUPTS;
-    dev->trans_start = jiffies; /* prevent tx timeout */
+    netif_trans_update(dev); /* prevent tx timeout */
 
     START_DE4X5;
 
@@ -1465,7 +1465,7 @@ de4x5_queue_pkt(struct sk_buff *skb, struct net_device *dev)
 
     netif_stop_queue(dev);
     if (!lp->tx_enable)                   /* Cannot send for now */
-       return NETDEV_TX_LOCKED;
+               goto tx_err;
 
     /*
     ** Clean out the TX ring asynchronously to interrupts - sometimes the
@@ -1478,7 +1478,7 @@ de4x5_queue_pkt(struct sk_buff *skb, struct net_device *dev)
 
     /* Test if cache is already locked - requeue skb if so */
     if (test_and_set_bit(0, (void *)&lp->cache.lock) && !lp->interrupt)
-       return NETDEV_TX_LOCKED;
+               goto tx_err;
 
     /* Transmit descriptor ring full or stale skb */
     if (netif_queue_stopped(dev) || (u_long) lp->tx_skb[lp->tx_new] > 1) {
@@ -1519,6 +1519,9 @@ de4x5_queue_pkt(struct sk_buff *skb, struct net_device *dev)
     lp->cache.lock = 0;
 
     return NETDEV_TX_OK;
+tx_err:
+       dev_kfree_skb_any(skb);
+       return NETDEV_TX_OK;
 }
 
 /*
@@ -1932,7 +1935,7 @@ set_multicast_list(struct net_device *dev)
 
            lp->tx_new = (lp->tx_new + 1) % lp->txRingSize;
            outl(POLL_DEMAND, DE4X5_TPD);       /* Start the TX */
-           dev->trans_start = jiffies; /* prevent tx timeout */
+           netif_trans_update(dev); /* prevent tx timeout */
        }
     }
 }
index afd8e78..8ed0fd8 100644 (file)
        (__CHK_IO_SIZE(((pci_dev)->device << 16) | (pci_dev)->vendor, \
        (pci_dev)->revision))
 
-/* Sten Check */
-#define DEVICE net_device
-
 /* Structure/enum declaration ------------------------------- */
 struct tx_desc {
         __le32 tdes0, tdes1, tdes2, tdes3; /* Data for the card */
@@ -313,10 +310,10 @@ static u8 SF_mode;                /* Special Function: 1:VLAN, 2:RX Flow Control
 
 
 /* function declaration ------------------------------------- */
-static int dmfe_open(struct DEVICE *);
-static netdev_tx_t dmfe_start_xmit(struct sk_buff *, struct DEVICE *);
-static int dmfe_stop(struct DEVICE *);
-static void dmfe_set_filter_mode(struct DEVICE *);
+static int dmfe_open(struct net_device *);
+static netdev_tx_t dmfe_start_xmit(struct sk_buff *, struct net_device *);
+static int dmfe_stop(struct net_device *);
+static void dmfe_set_filter_mode(struct net_device *);
 static const struct ethtool_ops netdev_ethtool_ops;
 static u16 read_srom_word(void __iomem *, int);
 static irqreturn_t dmfe_interrupt(int , void *);
@@ -326,8 +323,8 @@ static void poll_dmfe (struct net_device *dev);
 static void dmfe_descriptor_init(struct net_device *);
 static void allocate_rx_buffer(struct net_device *);
 static void update_cr6(u32, void __iomem *);
-static void send_filter_frame(struct DEVICE *);
-static void dm9132_id_table(struct DEVICE *);
+static void send_filter_frame(struct net_device *);
+static void dm9132_id_table(struct net_device *);
 static u16 dmfe_phy_read(void __iomem *, u8, u8, u32);
 static void dmfe_phy_write(void __iomem *, u8, u8, u16, u32);
 static void dmfe_phy_write_1bit(void __iomem *, u32);
@@ -336,12 +333,12 @@ static u8 dmfe_sense_speed(struct dmfe_board_info *);
 static void dmfe_process_mode(struct dmfe_board_info *);
 static void dmfe_timer(unsigned long);
 static inline u32 cal_CRC(unsigned char *, unsigned int, u8);
-static void dmfe_rx_packet(struct DEVICE *, struct dmfe_board_info *);
-static void dmfe_free_tx_pkt(struct DEVICE *, struct dmfe_board_info *);
+static void dmfe_rx_packet(struct net_device *, struct dmfe_board_info *);
+static void dmfe_free_tx_pkt(struct net_device *, struct dmfe_board_info *);
 static void dmfe_reuse_skb(struct dmfe_board_info *, struct sk_buff *);
-static void dmfe_dynamic_reset(struct DEVICE *);
+static void dmfe_dynamic_reset(struct net_device *);
 static void dmfe_free_rxbuffer(struct dmfe_board_info *);
-static void dmfe_init_dm910x(struct DEVICE *);
+static void dmfe_init_dm910x(struct net_device *);
 static void dmfe_parse_srom(struct dmfe_board_info *);
 static void dmfe_program_DM9801(struct dmfe_board_info *, int);
 static void dmfe_program_DM9802(struct dmfe_board_info *);
@@ -558,7 +555,7 @@ static void dmfe_remove_one(struct pci_dev *pdev)
  *     The interface is opened whenever "ifconfig" activates it.
  */
 
-static int dmfe_open(struct DEVICE *dev)
+static int dmfe_open(struct net_device *dev)
 {
        struct dmfe_board_info *db = netdev_priv(dev);
        const int irq = db->pdev->irq;
@@ -617,7 +614,7 @@ static int dmfe_open(struct DEVICE *dev)
  *     Enable Tx/Rx machine
  */
 
-static void dmfe_init_dm910x(struct DEVICE *dev)
+static void dmfe_init_dm910x(struct net_device *dev)
 {
        struct dmfe_board_info *db = netdev_priv(dev);
        void __iomem *ioaddr = db->ioaddr;
@@ -684,7 +681,7 @@ static void dmfe_init_dm910x(struct DEVICE *dev)
  */
 
 static netdev_tx_t dmfe_start_xmit(struct sk_buff *skb,
-                                        struct DEVICE *dev)
+                                        struct net_device *dev)
 {
        struct dmfe_board_info *db = netdev_priv(dev);
        void __iomem *ioaddr = db->ioaddr;
@@ -728,7 +725,7 @@ static netdev_tx_t dmfe_start_xmit(struct sk_buff *skb,
                txptr->tdes0 = cpu_to_le32(0x80000000); /* Set owner bit */
                db->tx_packet_cnt++;                    /* Ready to send */
                dw32(DCR1, 0x1);                        /* Issue Tx polling */
-               dev->trans_start = jiffies;             /* saved time stamp */
+               netif_trans_update(dev);                /* saved time stamp */
        } else {
                db->tx_queue_cnt++;                     /* queue TX packet */
                dw32(DCR1, 0x1);                        /* Issue Tx polling */
@@ -754,7 +751,7 @@ static netdev_tx_t dmfe_start_xmit(struct sk_buff *skb,
  *     The interface is stopped when it is brought down.
  */
 
-static int dmfe_stop(struct DEVICE *dev)
+static int dmfe_stop(struct net_device *dev)
 {
        struct dmfe_board_info *db = netdev_priv(dev);
        void __iomem *ioaddr = db->ioaddr;
@@ -798,7 +795,7 @@ static int dmfe_stop(struct DEVICE *dev)
 
 static irqreturn_t dmfe_interrupt(int irq, void *dev_id)
 {
-       struct DEVICE *dev = dev_id;
+       struct net_device *dev = dev_id;
        struct dmfe_board_info *db = netdev_priv(dev);
        void __iomem *ioaddr = db->ioaddr;
        unsigned long flags;
@@ -879,7 +876,7 @@ static void poll_dmfe (struct net_device *dev)
  *     Free TX resource after TX complete
  */
 
-static void dmfe_free_tx_pkt(struct DEVICE *dev, struct dmfe_board_info * db)
+static void dmfe_free_tx_pkt(struct net_device *dev, struct dmfe_board_info *db)
 {
        struct tx_desc *txptr;
        void __iomem *ioaddr = db->ioaddr;
@@ -934,7 +931,7 @@ static void dmfe_free_tx_pkt(struct DEVICE *dev, struct dmfe_board_info * db)
                db->tx_packet_cnt++;                    /* Ready to send */
                db->tx_queue_cnt--;
                dw32(DCR1, 0x1);                        /* Issue Tx polling */
-               dev->trans_start = jiffies;             /* saved time stamp */
+               netif_trans_update(dev);                /* saved time stamp */
        }
 
        /* Resource available check */
@@ -961,7 +958,7 @@ static inline u32 cal_CRC(unsigned char * Data, unsigned int Len, u8 flag)
  *     Receive the incoming packet and pass it to the upper layer
  */
 
-static void dmfe_rx_packet(struct DEVICE *dev, struct dmfe_board_info * db)
+static void dmfe_rx_packet(struct net_device *dev, struct dmfe_board_info *db)
 {
        struct rx_desc *rxptr;
        struct sk_buff *skb, *newskb;
@@ -1052,7 +1049,7 @@ static void dmfe_rx_packet(struct DEVICE *dev, struct dmfe_board_info * db)
  * Set DM910X multicast address
  */
 
-static void dmfe_set_filter_mode(struct DEVICE * dev)
+static void dmfe_set_filter_mode(struct net_device *dev)
 {
        struct dmfe_board_info *db = netdev_priv(dev);
        unsigned long flags;
@@ -1545,7 +1542,7 @@ static void send_filter_frame(struct net_device *dev)
                update_cr6(db->cr6_data | 0x2000, ioaddr);
                dw32(DCR1, 0x1);        /* Issue Tx polling */
                update_cr6(db->cr6_data, ioaddr);
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
        } else
                db->tx_queue_cnt++;     /* Put in TX queue */
 }
index 5364563..7bcccf5 100644 (file)
@@ -44,7 +44,7 @@ void pnic_do_nway(struct net_device *dev)
                        tp->csr6 = new_csr6;
                        /* Restart Tx */
                        tulip_restart_rxtx(tp);
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                }
        }
 }
@@ -70,7 +70,7 @@ void pnic_lnk_change(struct net_device *dev, int csr5)
                        iowrite32(tp->csr6, ioaddr + CSR6);
                        iowrite32(0x30, ioaddr + CSR12);
                        iowrite32(0x0201F078, ioaddr + 0xB8); /* Turn on autonegotiation. */
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                }
        } else if (ioread32(ioaddr + CSR5) & TPLnkPass) {
                if (tulip_media_cap[dev->if_port] & MediaIsMII) {
@@ -147,7 +147,7 @@ void pnic_timer(unsigned long data)
                                tp->csr6 = new_csr6;
                                /* Restart Tx */
                                tulip_restart_rxtx(tp);
-                               dev->trans_start = jiffies;
+                               netif_trans_update(dev);
                                if (tulip_debug > 1)
                                        dev_info(&dev->dev,
                                                 "Changing PNIC configuration to %s %s-duplex, CSR6 %08x\n",
index 94d0eeb..bbde90b 100644 (file)
@@ -605,7 +605,7 @@ static void tulip_tx_timeout(struct net_device *dev)
 
 out_unlock:
        spin_unlock_irqrestore (&tp->lock, flags);
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue (dev);
 }
 
index 447d092..e750b5d 100644 (file)
@@ -636,7 +636,7 @@ static netdev_tx_t uli526x_start_xmit(struct sk_buff *skb,
                txptr->tdes0 = cpu_to_le32(0x80000000); /* Set owner bit */
                db->tx_packet_cnt++;                    /* Ready to send */
                uw32(DCR1, 0x1);                        /* Issue Tx polling */
-               dev->trans_start = jiffies;             /* saved time stamp */
+               netif_trans_update(dev);                /* saved time stamp */
        }
 
        /* Tx resource check */
@@ -1431,7 +1431,7 @@ static void send_filter_frame(struct net_device *dev, int mc_cnt)
                update_cr6(db->cr6_data | 0x2000, ioaddr);
                uw32(DCR1, 0x1);        /* Issue Tx polling */
                update_cr6(db->cr6_data, ioaddr);
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
        } else
                netdev_err(dev, "No Tx resource - Send_filter_frame!\n");
 }
index 3c0e4d5..1f62b94 100644 (file)
@@ -966,7 +966,7 @@ static void tx_timeout(struct net_device *dev)
        enable_irq(irq);
 
        netif_wake_queue(dev);
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        np->stats.tx_errors++;
 }
 
index f92b6d9..78f1446 100644 (file)
@@ -706,7 +706,7 @@ rio_tx_timeout (struct net_device *dev)
                dev->name, dr32(TxStatus));
        rio_free_tx(dev, 0);
        dev->if_port = 0;
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
 }
 
 static netdev_tx_t
index a28a2e5..58c6338 100644 (file)
@@ -1011,7 +1011,7 @@ static void tx_timeout(struct net_device *dev)
 
        dev->if_port = 0;
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        dev->stats.tx_errors++;
        if (np->cur_tx - np->dirty_tx < TX_QUEUE_LEN - 4) {
                netif_wake_queue(dev);
index b1b9eba..c08bd76 100644 (file)
@@ -1227,7 +1227,7 @@ static void fealnx_tx_timeout(struct net_device *dev)
 
        spin_unlock_irqrestore(&np->lock, flags);
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        dev->stats.tx_errors++;
        netif_wake_queue(dev); /* or .._start_.. ?? */
 }
index 25553ee..f444714 100644 (file)
@@ -763,24 +763,28 @@ static void mpc52xx_fec_reset(struct net_device *dev)
 
 /* ethtool interface */
 
-static int mpc52xx_fec_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+static int mpc52xx_fec_get_ksettings(struct net_device *dev,
+                                    struct ethtool_link_ksettings *cmd)
 {
        struct mpc52xx_fec_priv *priv = netdev_priv(dev);
+       struct phy_device *phydev = priv->phydev;
 
        if (!priv->phydev)
                return -ENODEV;
 
-       return phy_ethtool_gset(priv->phydev, cmd);
+       return phy_ethtool_ksettings_get(phydev, cmd);
 }
 
-static int mpc52xx_fec_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+static int mpc52xx_fec_set_ksettings(struct net_device *dev,
+                                    const struct ethtool_link_ksettings *cmd)
 {
        struct mpc52xx_fec_priv *priv = netdev_priv(dev);
+       struct phy_device *phydev = priv->phydev;
 
        if (!priv->phydev)
                return -ENODEV;
 
-       return phy_ethtool_sset(priv->phydev, cmd);
+       return phy_ethtool_ksettings_set(phydev, cmd);
 }
 
 static u32 mpc52xx_fec_get_msglevel(struct net_device *dev)
@@ -796,12 +800,12 @@ static void mpc52xx_fec_set_msglevel(struct net_device *dev, u32 level)
 }
 
 static const struct ethtool_ops mpc52xx_fec_ethtool_ops = {
-       .get_settings = mpc52xx_fec_get_settings,
-       .set_settings = mpc52xx_fec_set_settings,
        .get_link = ethtool_op_get_link,
        .get_msglevel = mpc52xx_fec_get_msglevel,
        .set_msglevel = mpc52xx_fec_set_msglevel,
        .get_ts_info = ethtool_op_get_ts_info,
+       .get_link_ksettings = mpc52xx_fec_get_ksettings,
+       .set_link_ksettings = mpc52xx_fec_set_ksettings,
 };
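
This and the following ethtool hunks convert drivers from the legacy get_settings/set_settings pair (struct ethtool_cmd with 32-bit link-mode masks and the now-dropped maxtxpkt/maxrxpkt fields) to the link_ksettings API, whose link modes are arbitrary-width bitmaps. A sketch of the new structure's shape, reproduced from memory from the 4.6-era headers:

    /* Sketch of struct ethtool_link_ksettings (illustrative). */
    struct ethtool_link_ksettings {
            struct ethtool_link_settings base;   /* speed, duplex, port, ... */
            struct {
                    __ETHTOOL_DECLARE_LINK_MODE_MASK(supported);
                    __ETHTOOL_DECLARE_LINK_MODE_MASK(advertising);
                    __ETHTOOL_DECLARE_LINK_MODE_MASK(lp_advertising);
            } link_modes;
    };

For PHY-managed drivers, phy_ethtool_ksettings_get()/_set() translate to and from the attached phy_device, which is why each converted callback collapses to a one-line wrapper.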
 
 
index 48a9c17..da90b5a 100644 (file)
@@ -847,24 +847,28 @@ static void fs_get_regs(struct net_device *dev, struct ethtool_regs *regs,
                regs->version = 0;
 }
 
-static int fs_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+static int fs_get_ksettings(struct net_device *dev,
+                           struct ethtool_link_ksettings *cmd)
 {
        struct fs_enet_private *fep = netdev_priv(dev);
+       struct phy_device *phydev = fep->phydev;
 
        if (!fep->phydev)
                return -ENODEV;
 
-       return phy_ethtool_gset(fep->phydev, cmd);
+       return phy_ethtool_ksettings_get(phydev, cmd);
 }
 
-static int fs_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+static int fs_set_ksettings(struct net_device *dev,
+                           const struct ethtool_link_ksettings *cmd)
 {
        struct fs_enet_private *fep = netdev_priv(dev);
+       struct phy_device *phydev = fep->phydev;
 
        if (!fep->phydev)
                return -ENODEV;
 
-       return phy_ethtool_sset(fep->phydev, cmd);
+       return phy_ethtool_ksettings_set(phydev, cmd);
 }
 
 static int fs_nway_reset(struct net_device *dev)
@@ -887,14 +891,14 @@ static void fs_set_msglevel(struct net_device *dev, u32 value)
 static const struct ethtool_ops fs_ethtool_ops = {
        .get_drvinfo = fs_get_drvinfo,
        .get_regs_len = fs_get_regs_len,
-       .get_settings = fs_get_settings,
-       .set_settings = fs_set_settings,
        .nway_reset = fs_nway_reset,
        .get_link = ethtool_op_get_link,
        .get_msglevel = fs_get_msglevel,
        .set_msglevel = fs_set_msglevel,
        .get_regs = fs_get_regs,
        .get_ts_info = ethtool_op_get_ts_info,
+       .get_link_ksettings = fs_get_ksettings,
+       .set_link_ksettings = fs_set_ksettings,
 };
 
 static int fs_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
index d2f917a..a580041 100644 (file)
@@ -2076,7 +2076,7 @@ void gfar_start(struct gfar_private *priv)
 
        gfar_ints_enable(priv);
 
-       priv->ndev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(priv->ndev); /* prevent tx timeout */
 }
 
 static void free_grp_irqs(struct gfar_priv_grp *grp)
index 4b0ee85..2c45c80 100644 (file)
@@ -185,7 +185,8 @@ static void gfar_gdrvinfo(struct net_device *dev,
 }
 
 
-static int gfar_ssettings(struct net_device *dev, struct ethtool_cmd *cmd)
+static int gfar_set_ksettings(struct net_device *dev,
+                             const struct ethtool_link_ksettings *cmd)
 {
        struct gfar_private *priv = netdev_priv(dev);
        struct phy_device *phydev = priv->phydev;
@@ -193,29 +194,19 @@ static int gfar_ssettings(struct net_device *dev, struct ethtool_cmd *cmd)
        if (NULL == phydev)
                return -ENODEV;
 
-       return phy_ethtool_sset(phydev, cmd);
+       return phy_ethtool_ksettings_set(phydev, cmd);
 }
 
-
-/* Return the current settings in the ethtool_cmd structure */
-static int gfar_gsettings(struct net_device *dev, struct ethtool_cmd *cmd)
+static int gfar_get_ksettings(struct net_device *dev,
+                             struct ethtool_link_ksettings *cmd)
 {
        struct gfar_private *priv = netdev_priv(dev);
        struct phy_device *phydev = priv->phydev;
-       struct gfar_priv_rx_q *rx_queue = NULL;
-       struct gfar_priv_tx_q *tx_queue = NULL;
 
        if (NULL == phydev)
                return -ENODEV;
-       tx_queue = priv->tx_queue[0];
-       rx_queue = priv->rx_queue[0];
-
-       /* etsec-1.7 and older versions have only one txic
-        * and rxic regs although they support multiple queues */
-       cmd->maxtxpkt = get_icft_value(tx_queue->txic);
-       cmd->maxrxpkt = get_icft_value(rx_queue->rxic);
 
-       return phy_ethtool_gset(phydev, cmd);
+       return phy_ethtool_ksettings_get(phydev, cmd);
 }
 
 /* Return the length of the register structure */
@@ -1565,8 +1556,6 @@ static int gfar_get_ts_info(struct net_device *dev,
 }
 
 const struct ethtool_ops gfar_ethtool_ops = {
-       .get_settings = gfar_gsettings,
-       .set_settings = gfar_ssettings,
        .get_drvinfo = gfar_gdrvinfo,
        .get_regs_len = gfar_reglen,
        .get_regs = gfar_get_regs,
@@ -1589,4 +1578,6 @@ const struct ethtool_ops gfar_ethtool_ops = {
        .set_rxnfc = gfar_set_nfc,
        .get_rxnfc = gfar_get_nfc,
        .get_ts_info = gfar_get_ts_info,
+       .get_link_ksettings = gfar_get_ksettings,
+       .set_link_ksettings = gfar_set_ksettings,
 };
index 89714f5..812a968 100644 (file)
@@ -105,23 +105,20 @@ static const char rx_fw_stat_gstrings[][ETH_GSTRING_LEN] = {
 #define UEC_RX_FW_STATS_LEN ARRAY_SIZE(rx_fw_stat_gstrings)
 
 static int
-uec_get_settings(struct net_device *netdev, struct ethtool_cmd *ecmd)
+uec_get_ksettings(struct net_device *netdev, struct ethtool_link_ksettings *cmd)
 {
        struct ucc_geth_private *ugeth = netdev_priv(netdev);
        struct phy_device *phydev = ugeth->phydev;
-       struct ucc_geth_info *ug_info = ugeth->ug_info;
 
        if (!phydev)
                return -ENODEV;
 
-       ecmd->maxtxpkt = 1;
-       ecmd->maxrxpkt = ug_info->interruptcoalescingmaxvalue[0];
-
-       return phy_ethtool_gset(phydev, ecmd);
+       return phy_ethtool_ksettings_get(phydev, cmd);
 }
 
 static int
-uec_set_settings(struct net_device *netdev, struct ethtool_cmd *ecmd)
+uec_set_ksettings(struct net_device *netdev,
+                 const struct ethtool_link_ksettings *cmd)
 {
        struct ucc_geth_private *ugeth = netdev_priv(netdev);
        struct phy_device *phydev = ugeth->phydev;
@@ -129,7 +126,7 @@ uec_set_settings(struct net_device *netdev, struct ethtool_cmd *ecmd)
        if (!phydev)
                return -ENODEV;
 
-       return phy_ethtool_sset(phydev, ecmd);
+       return phy_ethtool_ksettings_set(phydev, cmd);
 }
 
 static void
@@ -392,8 +389,6 @@ static int uec_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
 #endif /* CONFIG_PM */
 
 static const struct ethtool_ops uec_ethtool_ops = {
-       .get_settings           = uec_get_settings,
-       .set_settings           = uec_set_settings,
        .get_drvinfo            = uec_get_drvinfo,
        .get_regs_len           = uec_get_regs_len,
        .get_regs               = uec_get_regs,
@@ -411,6 +406,8 @@ static const struct ethtool_ops uec_ethtool_ops = {
        .get_wol                = uec_get_wol,
        .set_wol                = uec_set_wol,
        .get_ts_info            = ethtool_op_get_ts_info,
+       .get_link_ksettings     = uec_get_ksettings,
+       .set_link_ksettings     = uec_set_ksettings,
 };
 
 void uec_set_ethtool_ops(struct net_device *netdev)
index 678f501..399cfd2 100644 (file)
@@ -746,7 +746,7 @@ static irqreturn_t fjn_interrupt(int dummy, void *dev_id)
            lp->sent = lp->tx_queue ;
            lp->tx_queue = 0;
            lp->tx_queue_len = 0;
-           dev->trans_start = jiffies;
+           netif_trans_update(dev);
        } else {
            lp->tx_started = 0;
        }
index e51892d..b9f2ea5 100644 (file)
@@ -636,7 +636,7 @@ static int hix5hd2_net_xmit(struct sk_buff *skb, struct net_device *dev)
        pos = dma_ring_incr(pos, TX_DESC_NUM);
        writel_relaxed(dma_byte(pos), priv->base + TX_BQ_WR_ADDR);
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        dev->stats.tx_packets++;
        dev->stats.tx_bytes += skb->len;
        netdev_sent_queue(dev, skb->len);
index 1591422..7a757e8 100644 (file)
@@ -29,25 +29,6 @@ static struct hns_mac_cb *hns_get_mac_cb(struct hnae_handle *handle)
        return vf_cb->mac_cb;
 }
 
-/**
- * hns_ae_map_eport_to_dport - translate enet port id to dsaf port id
- * @port_id: enet port id
- *: debug port 0-1, service port 2 -7 (dsaf mode only 2)
- * return: dsaf port id
- *: service ports 0 - 5, debug port 6-7
- **/
-static int hns_ae_map_eport_to_dport(u32 port_id)
-{
-       int port_index;
-
-       if (port_id < DSAF_DEBUG_NW_NUM)
-               port_index = port_id + DSAF_SERVICE_PORT_NUM_PER_DSAF;
-       else
-               port_index = port_id - DSAF_DEBUG_NW_NUM;
-
-       return port_index;
-}
-
 static struct dsaf_device *hns_ae_get_dsaf_dev(struct hnae_ae_dev *dev)
 {
        return container_of(dev, struct dsaf_device, ae_dev);
@@ -56,50 +37,35 @@ static struct dsaf_device *hns_ae_get_dsaf_dev(struct hnae_ae_dev *dev)
 static struct hns_ppe_cb *hns_get_ppe_cb(struct hnae_handle *handle)
 {
        int ppe_index;
-       int ppe_common_index;
        struct ppe_common_cb *ppe_comm;
        struct  hnae_vf_cb *vf_cb = hns_ae_get_vf_cb(handle);
 
-       if (vf_cb->port_index < DSAF_SERVICE_PORT_NUM_PER_DSAF) {
-               ppe_index = vf_cb->port_index;
-               ppe_common_index = 0;
-       } else {
-               ppe_index = 0;
-               ppe_common_index =
-                       vf_cb->port_index - DSAF_SERVICE_PORT_NUM_PER_DSAF + 1;
-       }
-       ppe_comm = vf_cb->dsaf_dev->ppe_common[ppe_common_index];
+       ppe_comm = vf_cb->dsaf_dev->ppe_common[0];
+       ppe_index = vf_cb->port_index;
+
        return &ppe_comm->ppe_cb[ppe_index];
 }
 
 static int hns_ae_get_q_num_per_vf(
        struct dsaf_device *dsaf_dev, int port)
 {
-       int common_idx = hns_dsaf_get_comm_idx_by_port(port);
-
-       return dsaf_dev->rcb_common[common_idx]->max_q_per_vf;
+       return dsaf_dev->rcb_common[0]->max_q_per_vf;
 }
 
 static int hns_ae_get_vf_num_per_port(
        struct dsaf_device *dsaf_dev, int port)
 {
-       int common_idx = hns_dsaf_get_comm_idx_by_port(port);
-
-       return dsaf_dev->rcb_common[common_idx]->max_vfn;
+       return dsaf_dev->rcb_common[0]->max_vfn;
 }
 
 static struct ring_pair_cb *hns_ae_get_base_ring_pair(
        struct dsaf_device *dsaf_dev, int port)
 {
-       int common_idx = hns_dsaf_get_comm_idx_by_port(port);
-       struct rcb_common_cb *rcb_comm = dsaf_dev->rcb_common[common_idx];
+       struct rcb_common_cb *rcb_comm = dsaf_dev->rcb_common[0];
        int q_num = rcb_comm->max_q_per_vf;
        int vf_num = rcb_comm->max_vfn;
 
-       if (common_idx == HNS_DSAF_COMM_SERVICE_NW_IDX)
-               return &rcb_comm->ring_pair_cb[port * q_num * vf_num];
-       else
-               return &rcb_comm->ring_pair_cb[0];
+       return &rcb_comm->ring_pair_cb[port * q_num * vf_num];
 }
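
With a single rcb_common serving every port, a port's base ring pair is just an offset into one flat array. A worked example with assumed sizes:

    /* Sketch: q_num = 16 queues per vf and vf_num = 8 vfs per port
     * (illustrative values).  Port 2's first ring pair then sits at
     * index 2 * 16 * 8 = 256.
     */
    int base = port * q_num * vf_num;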
 
 static struct ring_pair_cb *hns_ae_get_ring_pair(struct hnae_queue *q)
@@ -110,7 +76,6 @@ static struct ring_pair_cb *hns_ae_get_ring_pair(struct hnae_queue *q)
 struct hnae_handle *hns_ae_get_handle(struct hnae_ae_dev *dev,
                                      u32 port_id)
 {
-       int port_idx;
        int vfnum_per_port;
        int qnum_per_vf;
        int i;
@@ -120,11 +85,10 @@ struct hnae_handle *hns_ae_get_handle(struct hnae_ae_dev *dev,
        struct hnae_vf_cb *vf_cb;
 
        dsaf_dev = hns_ae_get_dsaf_dev(dev);
-       port_idx = hns_ae_map_eport_to_dport(port_id);
 
-       ring_pair_cb = hns_ae_get_base_ring_pair(dsaf_dev, port_idx);
-       vfnum_per_port = hns_ae_get_vf_num_per_port(dsaf_dev, port_idx);
-       qnum_per_vf = hns_ae_get_q_num_per_vf(dsaf_dev, port_idx);
+       ring_pair_cb = hns_ae_get_base_ring_pair(dsaf_dev, port_id);
+       vfnum_per_port = hns_ae_get_vf_num_per_port(dsaf_dev, port_id);
+       qnum_per_vf = hns_ae_get_q_num_per_vf(dsaf_dev, port_id);
 
        vf_cb = kzalloc(sizeof(*vf_cb) +
                        qnum_per_vf * sizeof(struct hnae_queue *), GFP_KERNEL);
@@ -163,14 +127,14 @@ struct hnae_handle *hns_ae_get_handle(struct hnae_ae_dev *dev,
        }
 
        vf_cb->dsaf_dev = dsaf_dev;
-       vf_cb->port_index = port_idx;
-       vf_cb->mac_cb = &dsaf_dev->mac_cb[port_idx];
+       vf_cb->port_index = port_id;
+       vf_cb->mac_cb = dsaf_dev->mac_cb[port_id];
 
        ae_handle->phy_if = vf_cb->mac_cb->phy_if;
        ae_handle->phy_node = vf_cb->mac_cb->phy_node;
        ae_handle->if_support = vf_cb->mac_cb->if_support;
        ae_handle->port_type = vf_cb->mac_cb->mac_type;
-       ae_handle->dport_id = port_idx;
+       ae_handle->dport_id = port_id;
 
        return ae_handle;
 vf_id_err:
@@ -320,11 +284,8 @@ static void hns_ae_reset(struct hnae_handle *handle)
        struct hnae_vf_cb *vf_cb = hns_ae_get_vf_cb(handle);
 
        if (vf_cb->mac_cb->mac_type == HNAE_PORT_DEBUG) {
-               u8 ppe_common_index =
-                       vf_cb->port_index - DSAF_SERVICE_PORT_NUM_PER_DSAF + 1;
-
                hns_mac_reset(vf_cb->mac_cb);
-               hns_ppe_reset_common(vf_cb->dsaf_dev, ppe_common_index);
+               hns_ppe_reset_common(vf_cb->dsaf_dev, 0);
        }
 }
 
@@ -703,7 +664,7 @@ void hns_ae_update_led_status(struct hnae_handle *handle)
 
        assert(handle);
        mac_cb = hns_get_mac_cb(handle);
-       if (!mac_cb->cpld_vaddr)
+       if (!mac_cb->cpld_ctrl)
                return;
        hns_set_led_opt(mac_cb);
 }
@@ -723,7 +684,6 @@ int hns_ae_cpld_set_led_id(struct hnae_handle *handle,
 void hns_ae_get_regs(struct hnae_handle *handle, void *data)
 {
        u32 *p = data;
-       u32 rcb_com_idx;
        int i;
        struct hnae_vf_cb *vf_cb = hns_ae_get_vf_cb(handle);
        struct hns_ppe_cb *ppe_cb = hns_get_ppe_cb(handle);
@@ -731,8 +691,7 @@ void hns_ae_get_regs(struct hnae_handle *handle, void *data)
        hns_ppe_get_regs(ppe_cb, p);
        p += hns_ppe_get_regs_count();
 
-       rcb_com_idx = hns_dsaf_get_comm_idx_by_port(vf_cb->port_index);
-       hns_rcb_get_common_regs(vf_cb->dsaf_dev->rcb_common[rcb_com_idx], p);
+       hns_rcb_get_common_regs(vf_cb->dsaf_dev->rcb_common[0], p);
        p += hns_rcb_get_common_regs_count();
 
        for (i = 0; i < handle->q_num; i++) {
index 10c367d..611581f 100644 (file)
@@ -7,18 +7,19 @@
  * (at your option) any later version.
  */
 
-#include <linux/module.h>
-#include <linux/kernel.h>
 #include <linux/init.h>
-#include <linux/netdevice.h>
-#include <linux/phy_fixed.h>
 #include <linux/interrupt.h>
-#include <linux/platform_device.h>
+#include <linux/kernel.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/netdevice.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
+#include <linux/phy_fixed.h>
+#include <linux/platform_device.h>
 
-#include "hns_dsaf_misc.h"
 #include "hns_dsaf_main.h"
+#include "hns_dsaf_misc.h"
 #include "hns_dsaf_rcb.h"
 
 #define MAC_EN_FLAG_V          0xada0328
@@ -81,17 +82,6 @@ static enum mac_mode hns_get_enet_interface(const struct hns_mac_cb *mac_cb)
        }
 }
 
-int hns_mac_get_sfp_prsnt(struct hns_mac_cb *mac_cb, int *sfp_prsnt)
-{
-       if (!mac_cb->cpld_vaddr)
-               return -ENODEV;
-
-       *sfp_prsnt = !dsaf_read_b((u8 *)mac_cb->cpld_vaddr
-                                       + MAC_SFP_PORT_OFFSET);
-
-       return 0;
-}
-
 void hns_mac_get_link_status(struct hns_mac_cb *mac_cb, u32 *link_status)
 {
        struct mac_driver *mac_ctrl_drv;
@@ -168,10 +158,9 @@ static int hns_mac_get_inner_port_num(struct hns_mac_cb *mac_cb,
                                      u8 vmid, u8 *port_num)
 {
        u8 tmp_port;
-       u32 comm_idx;
 
        if (mac_cb->dsaf_dev->dsaf_mode <= DSAF_MODE_ENABLE) {
-               if (mac_cb->mac_id != DSAF_MAX_PORT_NUM_PER_CHIP) {
+               if (mac_cb->mac_id != DSAF_MAX_PORT_NUM) {
                        dev_err(mac_cb->dev,
                                "input invalid,%s mac%d vmid%d !\n",
                                mac_cb->dsaf_dev->ae_dev.name,
@@ -179,7 +168,7 @@ static int hns_mac_get_inner_port_num(struct hns_mac_cb *mac_cb,
                        return -EINVAL;
                }
        } else if (mac_cb->dsaf_dev->dsaf_mode < DSAF_MODE_MAX) {
-               if (mac_cb->mac_id >= DSAF_MAX_PORT_NUM_PER_CHIP) {
+               if (mac_cb->mac_id >= DSAF_MAX_PORT_NUM) {
                        dev_err(mac_cb->dev,
                                "input invalid,%s mac%d vmid%d!\n",
                                mac_cb->dsaf_dev->ae_dev.name,
@@ -192,9 +181,7 @@ static int hns_mac_get_inner_port_num(struct hns_mac_cb *mac_cb,
                return -EINVAL;
        }
 
-       comm_idx = hns_dsaf_get_comm_idx_by_port(mac_cb->mac_id);
-
-       if (vmid >= mac_cb->dsaf_dev->rcb_common[comm_idx]->max_vfn) {
+       if (vmid >= mac_cb->dsaf_dev->rcb_common[0]->max_vfn) {
                dev_err(mac_cb->dev, "input invalid,%s mac%d vmid%d !\n",
                        mac_cb->dsaf_dev->ae_dev.name, mac_cb->mac_id, vmid);
                return -EINVAL;
@@ -234,7 +221,7 @@ static int hns_mac_get_inner_port_num(struct hns_mac_cb *mac_cb,
 }
 
 /**
- *hns_mac_get_inner_port_num - change vf mac address
+ *hns_mac_change_vf_addr - change vf mac address
  *@mac_cb: mac device
  *@vmid: vmid
  *@addr:mac address
@@ -249,7 +236,7 @@ int hns_mac_change_vf_addr(struct hns_mac_cb *mac_cb,
        struct mac_entry_idx *old_entry;
 
        old_entry = &mac_cb->addr_entry_idx[vmid];
-       if (dsaf_dev) {
+       if (!HNS_DSAF_IS_DEBUG(dsaf_dev)) {
                memcpy(mac_entry.addr, addr, sizeof(mac_entry.addr));
                mac_entry.in_vlan_id = old_entry->vlan_id;
                mac_entry.in_port_num = mac_cb->mac_id;
@@ -289,7 +276,7 @@ int hns_mac_set_multi(struct hns_mac_cb *mac_cb,
        struct dsaf_device *dsaf_dev = mac_cb->dsaf_dev;
        struct dsaf_drv_mac_single_dest_entry mac_entry;
 
-       if (dsaf_dev && addr) {
+       if (!HNS_DSAF_IS_DEBUG(dsaf_dev) && addr) {
                memcpy(mac_entry.addr, addr, sizeof(mac_entry.addr));
                mac_entry.in_vlan_id = 0;/*vlan_id;*/
                mac_entry.in_port_num = mac_cb->mac_id;
@@ -380,7 +367,7 @@ static int hns_mac_port_config_bc_en(struct hns_mac_cb *mac_cb,
        if (mac_cb->mac_type == HNAE_PORT_DEBUG)
                return 0;
 
-       if (dsaf_dev) {
+       if (!HNS_DSAF_IS_DEBUG(dsaf_dev)) {
                memcpy(mac_entry.addr, addr, sizeof(mac_entry.addr));
                mac_entry.in_vlan_id = vlan_id;
                mac_entry.in_port_num = mac_cb->mac_id;
@@ -418,7 +405,7 @@ int hns_mac_vm_config_bc_en(struct hns_mac_cb *mac_cb, u32 vmid, bool enable)
 
        uc_mac_entry = &mac_cb->addr_entry_idx[vmid];
 
-       if (dsaf_dev)  {
+       if (!HNS_DSAF_IS_DEBUG(dsaf_dev))  {
                memcpy(mac_entry.addr, addr, sizeof(mac_entry.addr));
                mac_entry.in_vlan_id = uc_mac_entry->vlan_id;
                mac_entry.in_port_num = mac_cb->mac_id;
@@ -651,14 +638,18 @@ free_mac_drv:
 }
 
 /**
- *mac_free_dev  - get mac information from device node
+ *hns_mac_get_info  - get mac information from device node
  *@mac_cb: mac device
  *@np:device node
- *@mac_mode_idx:mac mode index
+ * return: 0 on success, negative on failure
  */
-static void hns_mac_get_info(struct hns_mac_cb *mac_cb,
-                            struct device_node *np, u32 mac_mode_idx)
+static int  hns_mac_get_info(struct hns_mac_cb *mac_cb)
 {
+       struct device_node *np = mac_cb->dev->of_node;
+       struct regmap *syscon;
+       struct of_phandle_args cpld_args;
+       u32 ret;
+
        mac_cb->link = false;
        mac_cb->half_duplex = false;
        mac_cb->speed = mac_phy_to_speed[mac_cb->phy_if];
@@ -674,12 +665,73 @@ static void hns_mac_get_info(struct hns_mac_cb *mac_cb,
 
        mac_cb->max_frm = MAC_DEFAULT_MTU;
        mac_cb->tx_pause_frm_time = MAC_DEFAULT_PAUSE_TIME;
-
-       /* Get the rest of the PHY information */
-       mac_cb->phy_node = of_parse_phandle(np, "phy-handle", mac_cb->mac_id);
+       mac_cb->port_rst_off = mac_cb->mac_id;
+       mac_cb->port_mode_off = 0;
+
+       /* if the dsaf node doesn't contain a port subnode, get phy-handle
+        * from dsaf node
+        */
+       if (!mac_cb->fw_port) {
+               mac_cb->phy_node = of_parse_phandle(np, "phy-handle",
+                                                   mac_cb->mac_id);
+               if (mac_cb->phy_node)
+                       dev_dbg(mac_cb->dev, "mac%d phy_node: %s\n",
+                               mac_cb->mac_id, mac_cb->phy_node->name);
+               return 0;
+       }
+       if (!is_of_node(mac_cb->fw_port))
+               return -EINVAL;
+       /* parse property from port subnode in dsaf */
+       mac_cb->phy_node = of_parse_phandle(to_of_node(mac_cb->fw_port),
+                                           "phy-handle", 0);
        if (mac_cb->phy_node)
                dev_dbg(mac_cb->dev, "mac%d phy_node: %s\n",
                        mac_cb->mac_id, mac_cb->phy_node->name);
+       syscon = syscon_node_to_regmap(
+                       of_parse_phandle(to_of_node(mac_cb->fw_port),
+                                        "serdes-syscon", 0));
+       if (IS_ERR_OR_NULL(syscon)) {
+               dev_err(mac_cb->dev, "serdes-syscon is needed!\n");
+               return -EINVAL;
+       }
+       mac_cb->serdes_ctrl = syscon;
+
+       ret = fwnode_property_read_u32(mac_cb->fw_port,
+                                      "port-rst-offset",
+                                      &mac_cb->port_rst_off);
+       if (ret) {
+               dev_dbg(mac_cb->dev,
+                       "mac%d port-rst-offset not found, use default value.\n",
+                       mac_cb->mac_id);
+       }
+
+       ret = fwnode_property_read_u32(mac_cb->fw_port,
+                                      "port-mode-offset",
+                                      &mac_cb->port_mode_off);
+       if (ret) {
+               dev_dbg(mac_cb->dev,
+                       "mac%d port-mode-offset not found, use default value.\n",
+                       mac_cb->mac_id);
+       }
+
+       ret = of_parse_phandle_with_fixed_args(to_of_node(mac_cb->fw_port),
+                                              "cpld-syscon", 1, 0, &cpld_args);
+       if (ret) {
+               dev_dbg(mac_cb->dev, "mac%d no cpld-syscon found.\n",
+                       mac_cb->mac_id);
+               mac_cb->cpld_ctrl = NULL;
+       } else {
+               syscon = syscon_node_to_regmap(cpld_args.np);
+               if (IS_ERR_OR_NULL(syscon)) {
+                       dev_dbg(mac_cb->dev, "no cpld-syscon found!\n");
+                       mac_cb->cpld_ctrl = NULL;
+               } else {
+                       mac_cb->cpld_ctrl = syscon;
+                       mac_cb->cpld_ctrl_reg = cpld_args.args[0];
+               }
+       }
+
+       return 0;
 }
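
hns_mac_get_info() now prefers per-port subnodes of the dsaf node. A hypothetical device-tree fragment, kept as a C-style comment, showing the properties parsed here and by hns_mac_init() below (property names from the code, values invented for illustration):

    /* Hypothetical dsaf port subnode:
     *   port@0 {
     *           reg = <0>;                      // checked against max_port_num
     *           phy-handle = <&phy0>;
     *           serdes-syscon = <&serdes_ctl>;  // required
     *           port-rst-offset = <0>;          // optional, defaults to mac_id
     *           port-mode-offset = <0>;         // optional, defaults to 0
     *           cpld-syscon = <&cpld_ctl 0x4>;  // optional; arg = ctrl register
     *   };
     */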
 
 /**
@@ -709,40 +761,31 @@ u8 __iomem *hns_mac_get_vaddr(struct dsaf_device *dsaf_dev,
                return base + 0x40000 + mac_id * 0x4000 -
                                mac_mode_idx * 0x20000;
        else
-               return mac_cb->serdes_vaddr + 0x1000
-                       + (mac_id - DSAF_SERVICE_PORT_NUM_PER_DSAF) * 0x100000;
+               return dsaf_dev->ppe_base + 0x1000;
 }
 
 /**
  * hns_mac_get_cfg - get mac cfg from dtb or acpi table
  * @dsaf_dev: dsa fabric device struct pointer
- * @mac_idx: mac index
- * retuen 0 - success , negative --fail
+ * @mac_cb: mac control block
+ * return 0 on success, negative on failure
  */
-int hns_mac_get_cfg(struct dsaf_device *dsaf_dev, int mac_idx)
+int hns_mac_get_cfg(struct dsaf_device *dsaf_dev, struct hns_mac_cb *mac_cb)
 {
        int ret;
        u32 mac_mode_idx;
-       struct hns_mac_cb *mac_cb = &dsaf_dev->mac_cb[mac_idx];
 
        mac_cb->dsaf_dev = dsaf_dev;
        mac_cb->dev = dsaf_dev->dev;
-       mac_cb->mac_id = mac_idx;
 
        mac_cb->sys_ctl_vaddr = dsaf_dev->sc_base;
        mac_cb->serdes_vaddr = dsaf_dev->sds_base;
 
-       if (dsaf_dev->cpld_base &&
-           mac_idx < DSAF_SERVICE_PORT_NUM_PER_DSAF) {
-               mac_cb->cpld_vaddr = dsaf_dev->cpld_base +
-                       mac_cb->mac_id * CPLD_ADDR_PORT_OFFSET;
-               cpld_led_reset(mac_cb);
-       }
        mac_cb->sfp_prsnt = 0;
        mac_cb->txpkt_for_led = 0;
        mac_cb->rxpkt_for_led = 0;
 
-       if (mac_idx < DSAF_SERVICE_PORT_NUM_PER_DSAF)
+       if (!HNS_DSAF_IS_DEBUG(dsaf_dev))
                mac_cb->mac_type = HNAE_PORT_SERVICE;
        else
                mac_cb->mac_type = HNAE_PORT_DEBUG;
@@ -758,53 +801,100 @@ int hns_mac_get_cfg(struct dsaf_device *dsaf_dev, int mac_idx)
        }
        mac_mode_idx = (u32)ret;
 
-       hns_mac_get_info(mac_cb, mac_cb->dev->of_node, mac_mode_idx);
+       ret = hns_mac_get_info(mac_cb);
+       if (ret)
+               return ret;
 
+       cpld_led_reset(mac_cb);
        mac_cb->vaddr = hns_mac_get_vaddr(dsaf_dev, mac_cb, mac_mode_idx);
 
        return 0;
 }
 
+static int hns_mac_get_max_port_num(struct dsaf_device *dsaf_dev)
+{
+       if (HNS_DSAF_IS_DEBUG(dsaf_dev))
+               return 1;
+       else
+               return DSAF_MAX_PORT_NUM;
+}
+
 /**
  * hns_mac_init - init mac
  * @dsaf_dev: dsa fabric device struct pointer
- * retuen 0 - success , negative --fail
+ * return 0 - success, negative - fail
  */
 int hns_mac_init(struct dsaf_device *dsaf_dev)
 {
-       int i;
+       bool found = false;
        int ret;
-       size_t size;
+       u32 port_id;
+       int max_port_num = hns_mac_get_max_port_num(dsaf_dev);
        struct hns_mac_cb *mac_cb;
+       struct fwnode_handle *child;
 
-       size = sizeof(struct hns_mac_cb) * DSAF_MAX_PORT_NUM_PER_CHIP;
-       dsaf_dev->mac_cb = devm_kzalloc(dsaf_dev->dev, size, GFP_KERNEL);
-       if (!dsaf_dev->mac_cb)
-               return -ENOMEM;
+       device_for_each_child_node(dsaf_dev->dev, child) {
+               ret = fwnode_property_read_u32(child, "reg", &port_id);
+               if (ret) {
+                       dev_err(dsaf_dev->dev,
+                               "get reg fail, ret=%d!\n", ret);
+                       return ret;
+               }
+               if (port_id >= max_port_num) {
+                       dev_err(dsaf_dev->dev,
+                               "reg(%u) out of range!\n", port_id);
+                       return -EINVAL;
+               }
+               mac_cb = devm_kzalloc(dsaf_dev->dev, sizeof(*mac_cb),
+                                     GFP_KERNEL);
+               if (!mac_cb)
+                       return -ENOMEM;
+               mac_cb->fw_port = child;
+               mac_cb->mac_id = (u8)port_id;
+               dsaf_dev->mac_cb[port_id] = mac_cb;
+               found = true;
+       }
 
-       for (i = 0; i < DSAF_MAX_PORT_NUM_PER_CHIP; i++) {
-               ret = hns_mac_get_cfg(dsaf_dev, i);
-               if (ret)
-                       goto free_mac_cb;
+       /* if we don't get any port subnode from the dsaf node,
+        * init all ports; this keeps compatibility with the old dts
+        */
+       if (!found) {
+               for (port_id = 0; port_id < max_port_num; port_id++) {
+                       mac_cb = devm_kzalloc(dsaf_dev->dev, sizeof(*mac_cb),
+                                             GFP_KERNEL);
+                       if (!mac_cb)
+                               return -ENOMEM;
+
+                       mac_cb->mac_id = port_id;
+                       dsaf_dev->mac_cb[port_id] = mac_cb;
+               }
+       }
+       /* init mac_cb for all ports */
+       for (port_id = 0; port_id < max_port_num; port_id++) {
+               mac_cb = dsaf_dev->mac_cb[port_id];
+               if (!mac_cb)
+                       continue;
 
-               mac_cb = &dsaf_dev->mac_cb[i];
+               ret = hns_mac_get_cfg(dsaf_dev, mac_cb);
+               if (ret)
+                       return ret;
                ret = hns_mac_init_ex(mac_cb);
                if (ret)
-                       goto free_mac_cb;
+                       return ret;
        }
 
        return 0;
-
-free_mac_cb:
-       dsaf_dev->mac_cb = NULL;
-
-       return ret;
 }
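
The rework above replaces the fixed mac_cb array with per-port allocation
driven by child firmware nodes, so the same probe path serves DT and ACPI.
A condensed sketch of the enumeration pattern it relies on (illustrative
only; dev and max_port_num stand in for the real variables in
hns_mac_init() above):

	struct fwnode_handle *child;
	u32 port_id;

	device_for_each_child_node(dev, child) {
		/* every port subnode must carry a valid "reg" index */
		if (fwnode_property_read_u32(child, "reg", &port_id) ||
		    port_id >= max_port_num)
			return -EINVAL;
		/* allocate and record the mac_cb for port_id here */
	}

Ports absent from the firmware description never get a mac_cb, which is
why the later loops skip NULL entries.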
 
 void hns_mac_uninit(struct dsaf_device *dsaf_dev)
 {
-       cpld_led_reset(dsaf_dev->mac_cb);
-       dsaf_dev->mac_cb = NULL;
+       int i;
+       int max_port_num = hns_mac_get_max_port_num(dsaf_dev);
+
+       for (i = 0; i < max_port_num; i++) {
+               cpld_led_reset(dsaf_dev->mac_cb[i]);
+               dsaf_dev->mac_cb[i] = NULL;
+       }
 }
 
 int hns_mac_config_mac_loopback(struct hns_mac_cb *mac_cb,
@@ -892,7 +982,7 @@ void hns_set_led_opt(struct hns_mac_cb *mac_cb)
 int hns_cpld_led_set_id(struct hns_mac_cb *mac_cb,
                        enum hnae_led_state status)
 {
-       if (!mac_cb || !mac_cb->cpld_vaddr)
+       if (!mac_cb || !mac_cb->cpld_ctrl)
                return 0;
 
        return cpld_set_led_id(mac_cb, status);
index 823b6e7..97ce9a7 100644
 #ifndef _HNS_DSAF_MAC_H
 #define _HNS_DSAF_MAC_H
 
-#include <linux/phy.h>
-#include <linux/kernel.h>
 #include <linux/if_vlan.h>
+#include <linux/kernel.h>
+#include <linux/phy.h>
+#include <linux/regmap.h>
 #include "hns_dsaf_main.h"
 
 struct dsaf_device;
@@ -310,10 +311,15 @@ struct hns_mac_cb {
        struct device *dev;
        struct dsaf_device *dsaf_dev;
        struct mac_priv priv;
+       struct fwnode_handle *fw_port;
        u8 __iomem *vaddr;
-       u8 __iomem *cpld_vaddr;
        u8 __iomem *sys_ctl_vaddr;
        u8 __iomem *serdes_vaddr;
+       struct regmap *serdes_ctrl;
+       struct regmap *cpld_ctrl;
+       u32 cpld_ctrl_reg;
+       u32 port_rst_off;
+       u32 port_mode_off;
        struct mac_entry_idx addr_entry_idx[DSAF_MAX_VM_NUM];
        u8 sfp_prsnt;
        u8 cpld_led_value;
index 8439f6d..1c2ddb2 100644
@@ -7,27 +7,29 @@
  * (at your option) any later version.
  */
 
-#include <linux/module.h>
-#include <linux/kernel.h>
+#include <linux/device.h>
 #include <linux/init.h>
 #include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
 #include <linux/netdevice.h>
-#include <linux/platform_device.h>
+#include <linux/mfd/syscon.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
-#include <linux/device.h>
+#include <linux/platform_device.h>
 #include <linux/vmalloc.h>
 
+#include "hns_dsaf_mac.h"
 #include "hns_dsaf_main.h"
-#include "hns_dsaf_rcb.h"
 #include "hns_dsaf_ppe.h"
-#include "hns_dsaf_mac.h"
+#include "hns_dsaf_rcb.h"
 
 const char *g_dsaf_mode_match[DSAF_MODE_MAX] = {
        [DSAF_MODE_DISABLE_2PORT_64VM] = "2port-64vf",
        [DSAF_MODE_DISABLE_6PORT_0VM] = "6port-16rss",
        [DSAF_MODE_DISABLE_6PORT_16VM] = "6port-16vf",
+       [DSAF_MODE_DISABLE_SP] = "single-port",
 };
 
 int hns_dsaf_get_cfg(struct dsaf_device *dsaf_dev)
@@ -35,8 +37,13 @@ int hns_dsaf_get_cfg(struct dsaf_device *dsaf_dev)
        int ret, i;
        u32 desc_num;
        u32 buf_size;
+       u32 reset_offset = 0;
+       u32 res_idx = 0;
        const char *mode_str;
+       struct regmap *syscon;
+       struct resource *res;
        struct device_node *np = dsaf_dev->dev->of_node;
+       struct platform_device *pdev = to_platform_device(dsaf_dev->dev);
 
        if (of_device_is_compatible(np, "hisilicon,hns-dsaf-v1"))
                dsaf_dev->dsaf_ver = AE_VERSION_1;
@@ -73,42 +80,68 @@ int hns_dsaf_get_cfg(struct dsaf_device *dsaf_dev)
        else
                dsaf_dev->dsaf_tc_mode = HRD_DSAF_4TC_MODE;
 
-       dsaf_dev->sc_base = of_iomap(np, 0);
-       if (!dsaf_dev->sc_base) {
-               dev_err(dsaf_dev->dev,
-                       "%s of_iomap 0 fail!\n", dsaf_dev->ae_dev.name);
-               ret = -ENOMEM;
-               goto unmap_base_addr;
-       }
+       syscon = syscon_node_to_regmap(
+                       of_parse_phandle(np, "subctrl-syscon", 0));
+       if (IS_ERR_OR_NULL(syscon)) {
+               res = platform_get_resource(pdev, IORESOURCE_MEM, res_idx++);
+               if (!res) {
+                       dev_err(dsaf_dev->dev, "subctrl info is needed!\n");
+                       return -ENOMEM;
+               }
+               dsaf_dev->sc_base = devm_ioremap_resource(&pdev->dev, res);
+               if (IS_ERR(dsaf_dev->sc_base)) {
+                       dev_err(dsaf_dev->dev, "subctrl can not map!\n");
+                       return PTR_ERR(dsaf_dev->sc_base);
+               }
 
-       dsaf_dev->sds_base = of_iomap(np, 1);
-       if (!dsaf_dev->sds_base) {
-               dev_err(dsaf_dev->dev,
-                       "%s of_iomap 1 fail!\n", dsaf_dev->ae_dev.name);
-               ret = -ENOMEM;
-               goto unmap_base_addr;
+               res = platform_get_resource(pdev, IORESOURCE_MEM, res_idx++);
+               if (!res) {
+                       dev_err(dsaf_dev->dev, "serdes-ctrl info is needed!\n");
+                       return -ENOMEM;
+               }
+               dsaf_dev->sds_base = devm_ioremap_resource(&pdev->dev, res);
+               if (IS_ERR(dsaf_dev->sds_base)) {
+                       dev_err(dsaf_dev->dev, "serdes-ctrl can not map!\n");
+                       return PTR_ERR(dsaf_dev->sds_base);
+               }
+       } else {
+               dsaf_dev->sub_ctrl = syscon;
        }
 
-       dsaf_dev->ppe_base = of_iomap(np, 2);
-       if (!dsaf_dev->ppe_base) {
-               dev_err(dsaf_dev->dev,
-                       "%s of_iomap 2 fail!\n", dsaf_dev->ae_dev.name);
-               ret = -ENOMEM;
-               goto unmap_base_addr;
+       res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ppe-base");
+       if (!res) {
+               res = platform_get_resource(pdev, IORESOURCE_MEM, res_idx++);
+               if (!res) {
+                       dev_err(dsaf_dev->dev, "ppe-base info is needed!\n");
+                       return -ENOMEM;
+               }
        }
-
-       dsaf_dev->io_base = of_iomap(np, 3);
-       if (!dsaf_dev->io_base) {
-               dev_err(dsaf_dev->dev,
-                       "%s of_iomap 3 fail!\n", dsaf_dev->ae_dev.name);
-               ret = -ENOMEM;
-               goto unmap_base_addr;
+       dsaf_dev->ppe_base = devm_ioremap_resource(&pdev->dev, res);
+       if (IS_ERR(dsaf_dev->ppe_base)) {
+               dev_err(dsaf_dev->dev, "ppe-base resource can not map!\n");
+               return PTR_ERR(dsaf_dev->ppe_base);
+       }
+       dsaf_dev->ppe_paddr = res->start;
+
+       if (!HNS_DSAF_IS_DEBUG(dsaf_dev)) {
+               res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+                                                  "dsaf-base");
+               if (!res) {
+                       res = platform_get_resource(pdev, IORESOURCE_MEM,
+                                                   res_idx);
+                       if (!res) {
+                               dev_err(dsaf_dev->dev,
+                                       "dsaf-base info is needed!\n");
+                               return -ENOMEM;
+                       }
+               }
+               dsaf_dev->io_base = devm_ioremap_resource(&pdev->dev, res);
+               if (IS_ERR(dsaf_dev->io_base)) {
+                       dev_err(dsaf_dev->dev, "dsaf-base resource can not map!\n");
+                       return PTR_ERR(dsaf_dev->io_base);
+               }
        }
 
-       dsaf_dev->cpld_base = of_iomap(np, 4);
-       if (!dsaf_dev->cpld_base)
-               dev_dbg(dsaf_dev->dev, "NO CPLD ADDR");
-
        ret = of_property_read_u32(np, "desc-num", &desc_num);
        if (ret < 0 || desc_num < HNS_DSAF_MIN_DESC_CNT ||
            desc_num > HNS_DSAF_MAX_DESC_CNT) {
@@ -118,6 +151,13 @@ int hns_dsaf_get_cfg(struct dsaf_device *dsaf_dev)
        }
        dsaf_dev->desc_num = desc_num;
 
+       ret = of_property_read_u32(np, "reset-field-offset", &reset_offset);
+       if (ret < 0) {
+               dev_dbg(dsaf_dev->dev,
+                       "get reset-field-offset fail, ret=%d!\r\n", ret);
+       }
+       dsaf_dev->reset_offset = reset_offset;
+
        ret = of_property_read_u32(np, "buf-size", &buf_size);
        if (ret < 0) {
                dev_err(dsaf_dev->dev,
@@ -149,8 +189,6 @@ unmap_base_addr:
                iounmap(dsaf_dev->sds_base);
        if (dsaf_dev->sc_base)
                iounmap(dsaf_dev->sc_base);
-       if (dsaf_dev->cpld_base)
-               iounmap(dsaf_dev->cpld_base);
        return ret;
 }
 
@@ -167,9 +205,6 @@ static void hns_dsaf_free_cfg(struct dsaf_device *dsaf_dev)
 
        if (dsaf_dev->sc_base)
                iounmap(dsaf_dev->sc_base);
-
-       if (dsaf_dev->cpld_base)
-               iounmap(dsaf_dev->cpld_base);
 }
 
 /**
@@ -217,9 +252,7 @@ static void hns_dsaf_mix_def_qid_cfg(struct dsaf_device *dsaf_dev)
        u32 q_id, q_num_per_port;
        u32 i;
 
-       hns_rcb_get_queue_mode(dsaf_dev->dsaf_mode,
-                              HNS_DSAF_COMM_SERVICE_NW_IDX,
-                              &max_vfn, &max_q_per_vf);
+       hns_rcb_get_queue_mode(dsaf_dev->dsaf_mode, &max_vfn, &max_q_per_vf);
        q_num_per_port = max_vfn * max_q_per_vf;
 
        for (i = 0, q_id = 0; i < DSAF_SERVICE_NW_NUM; i++) {
@@ -239,9 +272,7 @@ static void hns_dsaf_inner_qid_cfg(struct dsaf_device *dsaf_dev)
        if (AE_IS_VER1(dsaf_dev->dsaf_ver))
                return;
 
-       hns_rcb_get_queue_mode(dsaf_dev->dsaf_mode,
-                              HNS_DSAF_COMM_SERVICE_NW_IDX,
-                              &max_vfn, &max_q_per_vf);
+       hns_rcb_get_queue_mode(dsaf_dev->dsaf_mode, &max_vfn, &max_q_per_vf);
        q_num_per_port = max_vfn * max_q_per_vf;
 
        for (mac_id = 0, q_id = 0; mac_id < DSAF_SERVICE_NW_NUM; mac_id++) {
@@ -712,13 +743,15 @@ static void hns_dsaf_tbl_tcam_data_ucast_pul(
 
 void hns_dsaf_set_promisc_mode(struct dsaf_device *dsaf_dev, u32 en)
 {
-       dsaf_set_dev_bit(dsaf_dev, DSAF_CFG_0_REG, DSAF_CFG_MIX_MODE_S, !!en);
+       if (!HNS_DSAF_IS_DEBUG(dsaf_dev))
+               dsaf_set_dev_bit(dsaf_dev, DSAF_CFG_0_REG,
+                                DSAF_CFG_MIX_MODE_S, !!en);
 }
 
 void hns_dsaf_set_inner_lb(struct dsaf_device *dsaf_dev, u32 mac_id, u32 en)
 {
        if (AE_IS_VER1(dsaf_dev->dsaf_ver) ||
-           dsaf_dev->mac_cb[mac_id].mac_type == HNAE_PORT_DEBUG)
+           dsaf_dev->mac_cb[mac_id]->mac_type == HNAE_PORT_DEBUG)
                return;
 
        dsaf_set_dev_bit(dsaf_dev, DSAFV2_SERDES_LBK_0_REG + 4 * mac_id,
@@ -1307,6 +1340,9 @@ static int hns_dsaf_init(struct dsaf_device *dsaf_dev)
        u32 i;
        int ret;
 
+       if (HNS_DSAF_IS_DEBUG(dsaf_dev))
+               return 0;
+
        ret = hns_dsaf_init_hw(dsaf_dev);
        if (ret)
                return ret;
index e8eedc5..f0502ba 100644
@@ -41,6 +41,7 @@ struct hns_mac_cb;
 #define DSAF_STATIC_NUM 28
 
 #define DSAF_STATS_READ(p, offset) (*((u64 *)((u8 *)(p) + (offset))))
+#define HNS_DSAF_IS_DEBUG(dev) ((dev)->dsaf_mode == DSAF_MODE_DISABLE_SP)
 
 enum hal_dsaf_mode {
        HRD_DSAF_NO_DSAF_MODE   = 0x0,
@@ -117,6 +118,7 @@ enum dsaf_mode {
        DSAF_MODE_ENABLE_32VM,  /**< en DSAF-mode, support 32 VM */
        DSAF_MODE_ENABLE_128VM, /**< en DSAF-mode, support 128 VM */
        DSAF_MODE_ENABLE,               /**< before is enable DSAF mode*/
+       DSAF_MODE_DISABLE_SP,   /**< non-dsaf, single port mode */
        DSAF_MODE_DISABLE_FIX,  /**< non-dasf, fixed to queue*/
        DSAF_MODE_DISABLE_2PORT_8VM,    /**< non-dasf, 2port 8VM */
        DSAF_MODE_DISABLE_2PORT_16VM,   /**< non-dasf, 2port 16VM */
@@ -275,10 +277,12 @@ struct dsaf_device {
        u8 __iomem *sds_base;
        u8 __iomem *ppe_base;
        u8 __iomem *io_base;
-       u8 __iomem *cpld_base;
+       struct regmap *sub_ctrl;
+       phys_addr_t ppe_paddr;
 
        u32 desc_num; /*  desc num per queue*/
        u32 buf_size; /*  ring buffer size */
+       u32 reset_offset; /* reset field offset in sub sysctrl */
        int buf_size_type; /* ring buffer size-type */
        enum dsaf_mode dsaf_mode;        /* dsaf mode  */
        enum hal_dsaf_mode dsaf_en;
@@ -287,7 +291,7 @@ struct dsaf_device {
 
        struct ppe_common_cb *ppe_common[DSAF_COMM_DEV_NUM];
        struct rcb_common_cb *rcb_common[DSAF_COMM_DEV_NUM];
-       struct hns_mac_cb *mac_cb;
+       struct hns_mac_cb *mac_cb[DSAF_MAX_PORT_NUM];
 
        struct dsaf_hw_stats hw_stats[DSAF_NODE_NUM];
        struct dsaf_int_stat int_stat;
@@ -359,14 +363,6 @@ static inline void hns_dsaf_tbl_line_addr_cfg(struct dsaf_device *dsaf_dev,
                           tab_line_addr);
 }
 
-static inline int hns_dsaf_get_comm_idx_by_port(int port)
-{
-       if ((port < DSAF_COMM_CHN) || (port == DSAF_MAX_PORT_NUM_PER_CHIP))
-               return 0;
-       else
-               return (port - DSAF_COMM_CHN + 1);
-}
-
 static inline struct hnae_vf_cb *hns_ae_get_vf_cb(
        struct hnae_handle *handle)
 {
index e69b022..a837bb9 100644
@@ -7,10 +7,30 @@
  * (at your option) any later version.
  */
 
-#include "hns_dsaf_misc.h"
 #include "hns_dsaf_mac.h"
-#include "hns_dsaf_reg.h"
+#include "hns_dsaf_misc.h"
 #include "hns_dsaf_ppe.h"
+#include "hns_dsaf_reg.h"
+
+static void dsaf_write_sub(struct dsaf_device *dsaf_dev, u32 reg, u32 val)
+{
+       if (dsaf_dev->sub_ctrl)
+               dsaf_write_syscon(dsaf_dev->sub_ctrl, reg, val);
+       else
+               dsaf_write_reg(dsaf_dev->sc_base, reg, val);
+}
+
+static u32 dsaf_read_sub(struct dsaf_device *dsaf_dev, u32 reg)
+{
+       u32 ret;
+
+       if (dsaf_dev->sub_ctrl)
+               ret = dsaf_read_syscon(dsaf_dev->sub_ctrl, reg);
+       else
+               ret = dsaf_read_reg(dsaf_dev->sc_base, reg);
+
+       return ret;
+}
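
These two fallback helpers carry the rest of this file: when the new
subctrl-syscon phandle was found, dsaf_dev->sub_ctrl is set and accesses go
through the regmap; otherwise the legacy memory-mapped sc_base is used, so
old device trees keep working unchanged. A minimal usage sketch (register
names taken from the reset hunks below):

	u32 val;

	/* one write plus one read-back through whichever backend the
	 * platform provided (syscon regmap or plain MMIO)
	 */
	dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_NT_RESET_DREQ_REG,
		       RESET_REQ_OR_DREQ);
	val = dsaf_read_sub(dsaf_dev, DSAF_SUB_SC_NT_RESET_DREQ_REG);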
 
 void hns_cpld_set_led(struct hns_mac_cb *mac_cb, int link_status,
                      u16 speed, int data)
@@ -22,8 +42,8 @@ void hns_cpld_set_led(struct hns_mac_cb *mac_cb, int link_status,
                pr_err("sfp_led_opt mac_dev is null!\n");
                return;
        }
-       if (!mac_cb->cpld_vaddr) {
-               dev_err(mac_cb->dev, "mac_id=%d, cpld_vaddr is null !\n",
+       if (!mac_cb->cpld_ctrl) {
+               dev_err(mac_cb->dev, "mac_id=%d, cpld syscon is null !\n",
                        mac_cb->mac_id);
                return;
        }
@@ -40,21 +60,24 @@ void hns_cpld_set_led(struct hns_mac_cb *mac_cb, int link_status,
                dsaf_set_bit(value, DSAF_LED_DATA_B, data);
 
                if (value != mac_cb->cpld_led_value) {
-                       dsaf_write_b(mac_cb->cpld_vaddr, value);
+                       dsaf_write_syscon(mac_cb->cpld_ctrl,
+                                         mac_cb->cpld_ctrl_reg, value);
                        mac_cb->cpld_led_value = value;
                }
        } else {
-               dsaf_write_b(mac_cb->cpld_vaddr, CPLD_LED_DEFAULT_VALUE);
+               dsaf_write_syscon(mac_cb->cpld_ctrl, mac_cb->cpld_ctrl_reg,
+                                 CPLD_LED_DEFAULT_VALUE);
                mac_cb->cpld_led_value = CPLD_LED_DEFAULT_VALUE;
        }
 }
 
 void cpld_led_reset(struct hns_mac_cb *mac_cb)
 {
-       if (!mac_cb || !mac_cb->cpld_vaddr)
+       if (!mac_cb || !mac_cb->cpld_ctrl)
                return;
 
-       dsaf_write_b(mac_cb->cpld_vaddr, CPLD_LED_DEFAULT_VALUE);
+       dsaf_write_syscon(mac_cb->cpld_ctrl, mac_cb->cpld_ctrl_reg,
+                         CPLD_LED_DEFAULT_VALUE);
        mac_cb->cpld_led_value = CPLD_LED_DEFAULT_VALUE;
 }
 
@@ -63,15 +86,19 @@ int cpld_set_led_id(struct hns_mac_cb *mac_cb,
 {
        switch (status) {
        case HNAE_LED_ACTIVE:
-               mac_cb->cpld_led_value = dsaf_read_b(mac_cb->cpld_vaddr);
+               mac_cb->cpld_led_value =
+                       dsaf_read_syscon(mac_cb->cpld_ctrl,
+                                        mac_cb->cpld_ctrl_reg);
                dsaf_set_bit(mac_cb->cpld_led_value, DSAF_LED_ANCHOR_B,
                             CPLD_LED_ON_VALUE);
-               dsaf_write_b(mac_cb->cpld_vaddr, mac_cb->cpld_led_value);
+               dsaf_write_syscon(mac_cb->cpld_ctrl, mac_cb->cpld_ctrl_reg,
+                                 mac_cb->cpld_led_value);
                return 2;
        case HNAE_LED_INACTIVE:
                dsaf_set_bit(mac_cb->cpld_led_value, DSAF_LED_ANCHOR_B,
                             CPLD_LED_DEFAULT_VALUE);
-               dsaf_write_b(mac_cb->cpld_vaddr, mac_cb->cpld_led_value);
+               dsaf_write_syscon(mac_cb->cpld_ctrl, mac_cb->cpld_ctrl_reg,
+                                 mac_cb->cpld_led_value);
                break;
        default:
                break;
@@ -95,10 +122,8 @@ void hns_dsaf_rst(struct dsaf_device *dsaf_dev, u32 val)
                nt_reg_addr = DSAF_SUB_SC_NT_RESET_DREQ_REG;
        }
 
-       dsaf_write_reg(dsaf_dev->sc_base, xbar_reg_addr,
-                      RESET_REQ_OR_DREQ);
-       dsaf_write_reg(dsaf_dev->sc_base, nt_reg_addr,
-                      RESET_REQ_OR_DREQ);
+       dsaf_write_sub(dsaf_dev, xbar_reg_addr, RESET_REQ_OR_DREQ);
+       dsaf_write_sub(dsaf_dev, nt_reg_addr, RESET_REQ_OR_DREQ);
 }
 
 void hns_dsaf_xge_srst_by_port(struct dsaf_device *dsaf_dev, u32 port, u32 val)
@@ -110,14 +135,14 @@ void hns_dsaf_xge_srst_by_port(struct dsaf_device *dsaf_dev, u32 port, u32 val)
                return;
 
        reg_val |= RESET_REQ_OR_DREQ;
-       reg_val |= 0x2082082 << port;
+       reg_val |= 0x2082082 << dsaf_dev->mac_cb[port]->port_rst_off;
 
        if (val == 0)
                reg_addr = DSAF_SUB_SC_XGE_RESET_REQ_REG;
        else
                reg_addr = DSAF_SUB_SC_XGE_RESET_DREQ_REG;
 
-       dsaf_write_reg(dsaf_dev->sc_base, reg_addr, reg_val);
+       dsaf_write_sub(dsaf_dev, reg_addr, reg_val);
 }
 
 void hns_dsaf_xge_core_srst_by_port(struct dsaf_device *dsaf_dev,
@@ -129,68 +154,63 @@ void hns_dsaf_xge_core_srst_by_port(struct dsaf_device *dsaf_dev,
        if (port >= DSAF_XGE_NUM)
                return;
 
-       reg_val |= XGMAC_TRX_CORE_SRST_M << port;
+       reg_val |= XGMAC_TRX_CORE_SRST_M
+               << dsaf_dev->mac_cb[port]->port_rst_off;
 
        if (val == 0)
                reg_addr = DSAF_SUB_SC_XGE_RESET_REQ_REG;
        else
                reg_addr = DSAF_SUB_SC_XGE_RESET_DREQ_REG;
 
-       dsaf_write_reg(dsaf_dev->sc_base, reg_addr, reg_val);
+       dsaf_write_sub(dsaf_dev, reg_addr, reg_val);
 }
 
 void hns_dsaf_ge_srst_by_port(struct dsaf_device *dsaf_dev, u32 port, u32 val)
 {
        u32 reg_val_1;
        u32 reg_val_2;
+       u32 port_rst_off;
 
        if (port >= DSAF_GE_NUM)
                return;
 
-       if (port < DSAF_SERVICE_NW_NUM) {
+       if (!HNS_DSAF_IS_DEBUG(dsaf_dev)) {
                reg_val_1  = 0x1 << port;
+               port_rst_off = dsaf_dev->mac_cb[port]->port_rst_off;
                /* there is difference between V1 and V2 in register.*/
                if (AE_IS_VER1(dsaf_dev->dsaf_ver))
-                       reg_val_2  = 0x1041041 << port;
+                       reg_val_2  = 0x1041041 << port_rst_off;
                else
-                       reg_val_2  = 0x2082082 << port;
+                       reg_val_2  = 0x2082082 << port_rst_off;
 
                if (val == 0) {
-                       dsaf_write_reg(dsaf_dev->sc_base,
-                                      DSAF_SUB_SC_GE_RESET_REQ1_REG,
+                       dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_GE_RESET_REQ1_REG,
                                       reg_val_1);
 
-                       dsaf_write_reg(dsaf_dev->sc_base,
-                                      DSAF_SUB_SC_GE_RESET_REQ0_REG,
+                       dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_GE_RESET_REQ0_REG,
                                       reg_val_2);
                } else {
-                       dsaf_write_reg(dsaf_dev->sc_base,
-                                      DSAF_SUB_SC_GE_RESET_DREQ0_REG,
+                       dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_GE_RESET_DREQ0_REG,
                                       reg_val_2);
 
-                       dsaf_write_reg(dsaf_dev->sc_base,
-                                      DSAF_SUB_SC_GE_RESET_DREQ1_REG,
+                       dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_GE_RESET_DREQ1_REG,
                                       reg_val_1);
                }
        } else {
-               reg_val_1 = 0x15540 << (port - 6);
-               reg_val_2 = 0x100 << (port - 6);
+               reg_val_1 = 0x15540 << dsaf_dev->reset_offset;
+               reg_val_2 = 0x100 << dsaf_dev->reset_offset;
 
                if (val == 0) {
-                       dsaf_write_reg(dsaf_dev->sc_base,
-                                      DSAF_SUB_SC_GE_RESET_REQ1_REG,
+                       dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_GE_RESET_REQ1_REG,
                                       reg_val_1);
 
-                       dsaf_write_reg(dsaf_dev->sc_base,
-                                      DSAF_SUB_SC_PPE_RESET_REQ_REG,
+                       dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_PPE_RESET_REQ_REG,
                                       reg_val_2);
                } else {
-                       dsaf_write_reg(dsaf_dev->sc_base,
-                                      DSAF_SUB_SC_GE_RESET_DREQ1_REG,
+                       dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_GE_RESET_DREQ1_REG,
                                       reg_val_1);
 
-                       dsaf_write_reg(dsaf_dev->sc_base,
-                                      DSAF_SUB_SC_PPE_RESET_DREQ_REG,
+                       dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_PPE_RESET_DREQ_REG,
                                       reg_val_2);
                }
        }
@@ -201,24 +221,23 @@ void hns_ppe_srst_by_port(struct dsaf_device *dsaf_dev, u32 port, u32 val)
        u32 reg_val = 0;
        u32 reg_addr;
 
-       reg_val |= RESET_REQ_OR_DREQ << port;
+       reg_val |= RESET_REQ_OR_DREQ << dsaf_dev->mac_cb[port]->port_rst_off;
 
        if (val == 0)
                reg_addr = DSAF_SUB_SC_PPE_RESET_REQ_REG;
        else
                reg_addr = DSAF_SUB_SC_PPE_RESET_DREQ_REG;
 
-       dsaf_write_reg(dsaf_dev->sc_base, reg_addr, reg_val);
+       dsaf_write_sub(dsaf_dev, reg_addr, reg_val);
 }
 
 void hns_ppe_com_srst(struct ppe_common_cb *ppe_common, u32 val)
 {
-       int comm_index = ppe_common->comm_index;
        struct dsaf_device *dsaf_dev = ppe_common->dsaf_dev;
        u32 reg_val;
        u32 reg_addr;
 
-       if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) {
+       if (!HNS_DSAF_IS_DEBUG(dsaf_dev)) {
                reg_val = RESET_REQ_OR_DREQ;
                if (val == 0)
                        reg_addr = DSAF_SUB_SC_RCB_PPE_COM_RESET_REQ_REG;
@@ -226,7 +245,7 @@ void hns_ppe_com_srst(struct ppe_common_cb *ppe_common, u32 val)
                        reg_addr = DSAF_SUB_SC_RCB_PPE_COM_RESET_DREQ_REG;
 
        } else {
-               reg_val = 0x100 << (comm_index - 1);
+               reg_val = 0x100 << dsaf_dev->reset_offset;
 
                if (val == 0)
                        reg_addr = DSAF_SUB_SC_PPE_RESET_REQ_REG;
@@ -234,7 +253,7 @@ void hns_ppe_com_srst(struct ppe_common_cb *ppe_common, u32 val)
                        reg_addr = DSAF_SUB_SC_PPE_RESET_DREQ_REG;
        }
 
-       dsaf_write_reg(dsaf_dev->sc_base, reg_addr, reg_val);
+       dsaf_write_sub(dsaf_dev, reg_addr, reg_val);
 }
 
 /**
@@ -246,36 +265,45 @@ phy_interface_t hns_mac_get_phy_if(struct hns_mac_cb *mac_cb)
 {
        u32 mode;
        u32 reg;
-       u32 shift;
        bool is_ver1 = AE_IS_VER1(mac_cb->dsaf_dev->dsaf_ver);
-       void __iomem *sys_ctl_vaddr = mac_cb->sys_ctl_vaddr;
        int mac_id = mac_cb->mac_id;
-       phy_interface_t phy_if = PHY_INTERFACE_MODE_NA;
+       phy_interface_t phy_if;
 
-       if (is_ver1 && (mac_id >= 6 && mac_id <= 7)) {
-               phy_if = PHY_INTERFACE_MODE_SGMII;
-       } else if (mac_id >= 0 && mac_id <= 3) {
-               reg = is_ver1 ? HNS_MAC_HILINK4_REG : HNS_MAC_HILINK4V2_REG;
-               mode = dsaf_read_reg(sys_ctl_vaddr, reg);
-               /* mac_id 0, 1, 2, 3 ---> hilink4 lane 0, 1, 2, 3 */
-               shift = is_ver1 ? 0 : mac_id;
-               if (dsaf_get_bit(mode, shift))
-                       phy_if = PHY_INTERFACE_MODE_XGMII;
+       if (is_ver1) {
+               if (HNS_DSAF_IS_DEBUG(mac_cb->dsaf_dev))
+                       return PHY_INTERFACE_MODE_SGMII;
+
+               if (mac_id >= 0 && mac_id <= 3)
+                       reg = HNS_MAC_HILINK4_REG;
                else
-                       phy_if = PHY_INTERFACE_MODE_SGMII;
-       } else if (mac_id >= 4 && mac_id <= 7) {
-               reg = is_ver1 ? HNS_MAC_HILINK3_REG : HNS_MAC_HILINK3V2_REG;
-               mode = dsaf_read_reg(sys_ctl_vaddr, reg);
-               /* mac_id 4, 5, 6, 7 ---> hilink3 lane 2, 3, 0, 1 */
-               shift = is_ver1 ? 0 : mac_id <= 5 ? mac_id - 2 : mac_id - 6;
-               if (dsaf_get_bit(mode, shift))
-                       phy_if = PHY_INTERFACE_MODE_XGMII;
+                       reg = HNS_MAC_HILINK3_REG;
+       } else {
+               if (!HNS_DSAF_IS_DEBUG(mac_cb->dsaf_dev) && mac_id <= 3)
+                       reg = HNS_MAC_HILINK4V2_REG;
                else
-                       phy_if = PHY_INTERFACE_MODE_SGMII;
+                       reg = HNS_MAC_HILINK3V2_REG;
        }
+
+       mode = dsaf_read_sub(mac_cb->dsaf_dev, reg);
+       if (dsaf_get_bit(mode, mac_cb->port_mode_off))
+               phy_if = PHY_INTERFACE_MODE_XGMII;
+       else
+               phy_if = PHY_INTERFACE_MODE_SGMII;
+
        return phy_if;
 }
 
+int hns_mac_get_sfp_prsnt(struct hns_mac_cb *mac_cb, int *sfp_prsnt)
+{
+       if (!mac_cb->cpld_ctrl)
+               return -ENODEV;
+
+       *sfp_prsnt = !dsaf_read_syscon(mac_cb->cpld_ctrl, mac_cb->cpld_ctrl_reg
+                                       + MAC_SFP_PORT_OFFSET);
+
+       return 0;
+}
+
 /**
  * hns_mac_config_sds_loopback - set loop back for serdes
  * @mac_cb: mac control block
@@ -312,7 +340,14 @@ int hns_mac_config_sds_loopback(struct hns_mac_cb *mac_cb, u8 en)
                                pr_info("no sfp in this eth\n");
        }
 
-       dsaf_set_reg_field(base_addr, reg_offset, 1ull << 10, 10, !!en);
+       if (mac_cb->serdes_ctrl) {
+               u32 origin = dsaf_read_syscon(mac_cb->serdes_ctrl, reg_offset);
+
+               dsaf_set_field(origin, 1ull << 10, 10, !!en);
+               dsaf_write_syscon(mac_cb->serdes_ctrl, reg_offset, origin);
+       } else {
+               dsaf_set_reg_field(base_addr, reg_offset, 1ull << 10, 10, !!en);
+       }
 
        return 0;
 }
index ab27b3b..8cd151a 100644
@@ -61,22 +61,10 @@ void hns_ppe_set_indir_table(struct hns_ppe_cb *ppe_cb,
        }
 }
 
-static void __iomem *hns_ppe_common_get_ioaddr(
-       struct ppe_common_cb *ppe_common)
+static void __iomem *
+hns_ppe_common_get_ioaddr(struct ppe_common_cb *ppe_common)
 {
-       void __iomem *base_addr;
-
-       int idx = ppe_common->comm_index;
-
-       if (idx == HNS_DSAF_COMM_SERVICE_NW_IDX)
-               base_addr = ppe_common->dsaf_dev->ppe_base
-                       + PPE_COMMON_REG_OFFSET;
-       else
-               base_addr = ppe_common->dsaf_dev->sds_base
-                       + (idx - 1) * HNS_DSAF_DEBUG_NW_REG_OFFSET
-                       + PPE_COMMON_REG_OFFSET;
-
-       return base_addr;
+       return ppe_common->dsaf_dev->ppe_base + PPE_COMMON_REG_OFFSET;
 }
 
 /**
@@ -90,7 +78,7 @@ int hns_ppe_common_get_cfg(struct dsaf_device *dsaf_dev, int comm_index)
        struct ppe_common_cb *ppe_common;
        int ppe_num;
 
-       if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX)
+       if (!HNS_DSAF_IS_DEBUG(dsaf_dev))
                ppe_num = HNS_PPE_SERVICE_NW_ENGINE_NUM;
        else
                ppe_num = HNS_PPE_DEBUG_NW_ENGINE_NUM;
@@ -103,7 +91,7 @@ int hns_ppe_common_get_cfg(struct dsaf_device *dsaf_dev, int comm_index)
        ppe_common->ppe_num = ppe_num;
        ppe_common->dsaf_dev = dsaf_dev;
        ppe_common->comm_index = comm_index;
-       if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX)
+       if (!HNS_DSAF_IS_DEBUG(dsaf_dev))
                ppe_common->ppe_mode = PPE_COMMON_MODE_SERVICE;
        else
                ppe_common->ppe_mode = PPE_COMMON_MODE_DEBUG;
@@ -124,32 +112,8 @@ void hns_ppe_common_free_cfg(struct dsaf_device *dsaf_dev, u32 comm_index)
 static void __iomem *hns_ppe_get_iobase(struct ppe_common_cb *ppe_common,
                                        int ppe_idx)
 {
-       void __iomem *base_addr;
-       int common_idx = ppe_common->comm_index;
-
-       if (ppe_common->ppe_mode == PPE_COMMON_MODE_SERVICE) {
-               base_addr = ppe_common->dsaf_dev->ppe_base +
-                       ppe_idx * PPE_REG_OFFSET;
-
-       } else {
-               base_addr = ppe_common->dsaf_dev->sds_base +
-                       (common_idx - 1) * HNS_DSAF_DEBUG_NW_REG_OFFSET;
-       }
 
-       return base_addr;
-}
-
-static int hns_ppe_get_port(struct ppe_common_cb *ppe_common, int idx)
-{
-       int port;
-
-       if (ppe_common->ppe_mode == PPE_COMMON_MODE_SERVICE)
-               port = idx;
-       else
-               port = HNS_PPE_SERVICE_NW_ENGINE_NUM
-                       + ppe_common->comm_index - 1;
-
-       return port;
+       return ppe_common->dsaf_dev->ppe_base + ppe_idx * PPE_REG_OFFSET;
 }
 
 static void hns_ppe_get_cfg(struct ppe_common_cb *ppe_common)
@@ -164,7 +128,6 @@ static void hns_ppe_get_cfg(struct ppe_common_cb *ppe_common)
                ppe_cb->next = NULL;
                ppe_cb->ppe_common_cb = ppe_common;
                ppe_cb->index = i;
-               ppe_cb->port = hns_ppe_get_port(ppe_common, i);
                ppe_cb->io_base = hns_ppe_get_iobase(ppe_common, i);
                ppe_cb->virq = 0;
        }
@@ -318,7 +281,7 @@ static void hns_ppe_exc_irq_en(struct hns_ppe_cb *ppe_cb, int en)
 static void hns_ppe_init_hw(struct hns_ppe_cb *ppe_cb)
 {
        struct ppe_common_cb *ppe_common_cb = ppe_cb->ppe_common_cb;
-       u32 port = ppe_cb->port;
+       u32 port = ppe_cb->index;
        struct dsaf_device *dsaf_dev = ppe_common_cb->dsaf_dev;
        int i;
 
@@ -377,7 +340,8 @@ void hns_ppe_uninit_ex(struct ppe_common_cb *ppe_common)
        u32 i;
 
        for (i = 0; i < ppe_common->ppe_num; i++) {
-               hns_ppe_uninit_hw(&ppe_common->ppe_cb[i]);
+               if (ppe_common->dsaf_dev->mac_cb[i])
+                       hns_ppe_uninit_hw(&ppe_common->ppe_cb[i]);
                memset(&ppe_common->ppe_cb[i], 0, sizeof(struct hns_ppe_cb));
        }
 }
@@ -410,8 +374,11 @@ void hns_ppe_reset_common(struct dsaf_device *dsaf_dev, u8 ppe_common_index)
        if (ret)
                return;
 
-       for (i = 0; i < ppe_common->ppe_num; i++)
-               hns_ppe_init_hw(&ppe_common->ppe_cb[i]);
+       for (i = 0; i < ppe_common->ppe_num; i++) {
+               /* We only need to initialize the ppe when the port exists */
+               if (dsaf_dev->mac_cb[i])
+                       hns_ppe_init_hw(&ppe_common->ppe_cb[i]);
+       }
 
        ret = hns_rcb_common_init_hw(dsaf_dev->rcb_common[ppe_common_index]);
        if (ret)
index e9c0ec2..9d8e643 100644
@@ -80,7 +80,6 @@ struct hns_ppe_cb {
        struct hns_ppe_hw_stats hw_stats;
 
        u8 index;       /* index in a ppe common device */
-       u8 port;                         /* port id in dsaf  */
        void __iomem *io_base;
        int virq;
        u32 rss_indir_table[HNS_PPEV2_RSS_IND_TBL_SIZE]; /*shadow indir tab */
index 28ee26e..4ef6d23 100644
@@ -270,7 +270,7 @@ static void hns_rcb_set_port_timeout(
 
 static int hns_rcb_common_get_port_num(struct rcb_common_cb *rcb_common)
 {
-       if (rcb_common->comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX)
+       if (!HNS_DSAF_IS_DEBUG(rcb_common->dsaf_dev))
                return HNS_RCB_SERVICE_NW_ENGINE_NUM;
        else
                return HNS_RCB_DEBUG_NW_ENGINE_NUM;
@@ -430,36 +430,20 @@ static void hns_rcb_ring_pair_get_cfg(struct ring_pair_cb *ring_pair_cb)
 static int hns_rcb_get_port_in_comm(
        struct rcb_common_cb *rcb_common, int ring_idx)
 {
-       int comm_index = rcb_common->comm_index;
-       int port;
-       int q_num;
 
-       if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) {
-               q_num = (int)rcb_common->max_q_per_vf * rcb_common->max_vfn;
-               port = ring_idx / q_num;
-       } else {
-               port = 0; /* config debug-ports port_id_in_comm to 0*/
-       }
-
-       return port;
+       return ring_idx / (rcb_common->max_q_per_vf * rcb_common->max_vfn);
 }
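
Worked example for the simplified mapping: under
DSAF_MODE_DISABLE_6PORT_16VM the queue-mode helper (rewritten later in
this file) reports max_vfn = 16 and max_q_per_vf = 1, so each port owns
16 rings and ring 37, for instance, lands on port 2. As a sketch:

	u16 max_vfn, max_q_per_vf;
	int port;

	hns_rcb_get_queue_mode(DSAF_MODE_DISABLE_6PORT_16VM,
			       &max_vfn, &max_q_per_vf);
	/* q_num_per_port = 1 * 16 = 16, so ring index 37 -> port 37/16 = 2 */
	port = 37 / (max_q_per_vf * max_vfn);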
 
 #define SERVICE_RING_IRQ_IDX(v1) \
        ((v1) ? HNS_SERVICE_RING_IRQ_IDX : HNSV2_SERVICE_RING_IRQ_IDX)
-#define DEBUG_RING_IRQ_IDX(v1) \
-       ((v1) ? HNS_DEBUG_RING_IRQ_IDX : HNSV2_DEBUG_RING_IRQ_IDX)
-#define DEBUG_RING_IRQ_OFFSET(v1) \
-       ((v1) ? HNS_DEBUG_RING_IRQ_OFFSET : HNSV2_DEBUG_RING_IRQ_OFFSET)
 static int hns_rcb_get_base_irq_idx(struct rcb_common_cb *rcb_common)
 {
-       int comm_index = rcb_common->comm_index;
        bool is_ver1 = AE_IS_VER1(rcb_common->dsaf_dev->dsaf_ver);
 
-       if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX)
+       if (!HNS_DSAF_IS_DEBUG(rcb_common->dsaf_dev))
                return SERVICE_RING_IRQ_IDX(is_ver1);
        else
-               return  DEBUG_RING_IRQ_IDX(is_ver1) +
-                       (comm_index - 1) * DEBUG_RING_IRQ_OFFSET(is_ver1);
+               return HNS_DEBUG_RING_IRQ_IDX;
 }
 
 #define RCB_COMM_BASE_TO_RING_BASE(base, ringid)\
@@ -549,7 +533,7 @@ int hns_rcb_set_coalesce_usecs(
                return 0;
 
        if (AE_IS_VER1(rcb_common->dsaf_dev->dsaf_ver)) {
-               if (rcb_common->comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) {
+               if (!HNS_DSAF_IS_DEBUG(rcb_common->dsaf_dev)) {
                        dev_err(rcb_common->dsaf_dev->dev,
                                "error: not support coalesce_usecs setting!\n");
                        return -EINVAL;
@@ -601,113 +585,82 @@ int hns_rcb_set_coalesced_frames(
  *@max_vfn : max vfn number
  *@max_q_per_vf:max ring number per vm
  */
-void hns_rcb_get_queue_mode(enum dsaf_mode dsaf_mode, int comm_index,
-                           u16 *max_vfn, u16 *max_q_per_vf)
+void hns_rcb_get_queue_mode(enum dsaf_mode dsaf_mode, u16 *max_vfn,
+                           u16 *max_q_per_vf)
 {
-       if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) {
-               switch (dsaf_mode) {
-               case DSAF_MODE_DISABLE_6PORT_0VM:
-                       *max_vfn = 1;
-                       *max_q_per_vf = 16;
-                       break;
-               case DSAF_MODE_DISABLE_FIX:
-                       *max_vfn = 1;
-                       *max_q_per_vf = 1;
-                       break;
-               case DSAF_MODE_DISABLE_2PORT_64VM:
-                       *max_vfn = 64;
-                       *max_q_per_vf = 1;
-                       break;
-               case DSAF_MODE_DISABLE_6PORT_16VM:
-                       *max_vfn = 16;
-                       *max_q_per_vf = 1;
-                       break;
-               default:
-                       *max_vfn = 1;
-                       *max_q_per_vf = 16;
-                       break;
-               }
-       } else {
+       switch (dsaf_mode) {
+       case DSAF_MODE_DISABLE_6PORT_0VM:
+               *max_vfn = 1;
+               *max_q_per_vf = 16;
+               break;
+       case DSAF_MODE_DISABLE_FIX:
+       case DSAF_MODE_DISABLE_SP:
                *max_vfn = 1;
                *max_q_per_vf = 1;
+               break;
+       case DSAF_MODE_DISABLE_2PORT_64VM:
+               *max_vfn = 64;
+               *max_q_per_vf = 1;
+               break;
+       case DSAF_MODE_DISABLE_6PORT_16VM:
+               *max_vfn = 16;
+               *max_q_per_vf = 1;
+               break;
+       default:
+               *max_vfn = 1;
+               *max_q_per_vf = 16;
+               break;
        }
 }
 
-int hns_rcb_get_ring_num(struct dsaf_device *dsaf_dev, int comm_index)
+int hns_rcb_get_ring_num(struct dsaf_device *dsaf_dev)
 {
-       if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) {
-               switch (dsaf_dev->dsaf_mode) {
-               case DSAF_MODE_ENABLE_FIX:
-                       return 1;
-
-               case DSAF_MODE_DISABLE_FIX:
-                       return 6;
-
-               case DSAF_MODE_ENABLE_0VM:
-                       return 32;
-
-               case DSAF_MODE_DISABLE_6PORT_0VM:
-               case DSAF_MODE_ENABLE_16VM:
-               case DSAF_MODE_DISABLE_6PORT_2VM:
-               case DSAF_MODE_DISABLE_6PORT_16VM:
-               case DSAF_MODE_DISABLE_6PORT_4VM:
-               case DSAF_MODE_ENABLE_8VM:
-                       return 96;
-
-               case DSAF_MODE_DISABLE_2PORT_16VM:
-               case DSAF_MODE_DISABLE_2PORT_8VM:
-               case DSAF_MODE_ENABLE_32VM:
-               case DSAF_MODE_DISABLE_2PORT_64VM:
-               case DSAF_MODE_ENABLE_128VM:
-                       return 128;
-
-               default:
-                       dev_warn(dsaf_dev->dev,
-                                "get ring num fail,use default!dsaf_mode=%d\n",
-                                dsaf_dev->dsaf_mode);
-                       return 128;
-               }
-       } else {
+       switch (dsaf_dev->dsaf_mode) {
+       case DSAF_MODE_ENABLE_FIX:
+       case DSAF_MODE_DISABLE_SP:
                return 1;
+
+       case DSAF_MODE_DISABLE_FIX:
+               return 6;
+
+       case DSAF_MODE_ENABLE_0VM:
+               return 32;
+
+       case DSAF_MODE_DISABLE_6PORT_0VM:
+       case DSAF_MODE_ENABLE_16VM:
+       case DSAF_MODE_DISABLE_6PORT_2VM:
+       case DSAF_MODE_DISABLE_6PORT_16VM:
+       case DSAF_MODE_DISABLE_6PORT_4VM:
+       case DSAF_MODE_ENABLE_8VM:
+               return 96;
+
+       case DSAF_MODE_DISABLE_2PORT_16VM:
+       case DSAF_MODE_DISABLE_2PORT_8VM:
+       case DSAF_MODE_ENABLE_32VM:
+       case DSAF_MODE_DISABLE_2PORT_64VM:
+       case DSAF_MODE_ENABLE_128VM:
+               return 128;
+
+       default:
+               dev_warn(dsaf_dev->dev,
+                        "get ring num fail,use default!dsaf_mode=%d\n",
+                        dsaf_dev->dsaf_mode);
+               return 128;
        }
 }
 
-void __iomem *hns_rcb_common_get_vaddr(struct dsaf_device *dsaf_dev,
-                                      int comm_index)
+void __iomem *hns_rcb_common_get_vaddr(struct rcb_common_cb *rcb_common)
 {
-       void __iomem *base_addr;
-
-       if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX)
-               base_addr = dsaf_dev->ppe_base + RCB_COMMON_REG_OFFSET;
-       else
-               base_addr = dsaf_dev->sds_base
-                       + (comm_index - 1) * HNS_DSAF_DEBUG_NW_REG_OFFSET
-                       + RCB_COMMON_REG_OFFSET;
+       struct dsaf_device *dsaf_dev = rcb_common->dsaf_dev;
 
-       return base_addr;
+       return dsaf_dev->ppe_base + RCB_COMMON_REG_OFFSET;
 }
 
-static phys_addr_t hns_rcb_common_get_paddr(struct dsaf_device *dsaf_dev,
-                                           int comm_index)
+static phys_addr_t hns_rcb_common_get_paddr(struct rcb_common_cb *rcb_common)
 {
-       struct device_node *np = dsaf_dev->dev->of_node;
-       phys_addr_t phy_addr;
-       const __be32 *tmp_addr;
-       u64 addr_offset = 0;
-       u64 size = 0;
-       int index = 0;
-
-       if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) {
-               index    = 2;
-               addr_offset = RCB_COMMON_REG_OFFSET;
-       } else {
-               index    = 1;
-               addr_offset = (comm_index - 1) * HNS_DSAF_DEBUG_NW_REG_OFFSET +
-                               RCB_COMMON_REG_OFFSET;
-       }
-       tmp_addr  = of_get_address(np, index, &size, NULL);
-       phy_addr  = of_translate_address(np, tmp_addr);
-       return phy_addr + addr_offset;
+       struct dsaf_device *dsaf_dev = rcb_common->dsaf_dev;
+
+       return dsaf_dev->ppe_paddr + RCB_COMMON_REG_OFFSET;
 }
 
 int hns_rcb_common_get_cfg(struct dsaf_device *dsaf_dev,
@@ -717,7 +670,7 @@ int hns_rcb_common_get_cfg(struct dsaf_device *dsaf_dev,
        enum dsaf_mode dsaf_mode = dsaf_dev->dsaf_mode;
        u16 max_vfn;
        u16 max_q_per_vf;
-       int ring_num = hns_rcb_get_ring_num(dsaf_dev, comm_index);
+       int ring_num = hns_rcb_get_ring_num(dsaf_dev);
 
        rcb_common =
                devm_kzalloc(dsaf_dev->dev, sizeof(*rcb_common) +
@@ -732,12 +685,12 @@ int hns_rcb_common_get_cfg(struct dsaf_device *dsaf_dev,
 
        rcb_common->desc_num = dsaf_dev->desc_num;
 
-       hns_rcb_get_queue_mode(dsaf_mode, comm_index, &max_vfn, &max_q_per_vf);
+       hns_rcb_get_queue_mode(dsaf_mode, &max_vfn, &max_q_per_vf);
        rcb_common->max_vfn = max_vfn;
        rcb_common->max_q_per_vf = max_q_per_vf;
 
-       rcb_common->io_base = hns_rcb_common_get_vaddr(dsaf_dev, comm_index);
-       rcb_common->phy_base = hns_rcb_common_get_paddr(dsaf_dev, comm_index);
+       rcb_common->io_base = hns_rcb_common_get_vaddr(rcb_common);
+       rcb_common->phy_base = hns_rcb_common_get_paddr(rcb_common);
 
        dsaf_dev->rcb_common[comm_index] = rcb_common;
        return 0;
@@ -932,7 +885,7 @@ void hns_rcb_get_common_regs(struct rcb_common_cb *rcb_com, void *data)
 {
        u32 *regs = data;
        bool is_ver1 = AE_IS_VER1(rcb_com->dsaf_dev->dsaf_ver);
-       bool is_dbg = (rcb_com->comm_index != HNS_DSAF_COMM_SERVICE_NW_IDX);
+       bool is_dbg = HNS_DSAF_IS_DEBUG(rcb_com->dsaf_dev);
        u32 reg_tmp;
        u32 reg_num_tmp;
        u32 i = 0;
index eb61014..bd54dac 100644
@@ -111,7 +111,7 @@ void hns_rcb_common_free_cfg(struct dsaf_device *dsaf_dev, u32 comm_index);
 int hns_rcb_common_init_hw(struct rcb_common_cb *rcb_common);
 void hns_rcb_start(struct hnae_queue *q, u32 val);
 void hns_rcb_get_cfg(struct rcb_common_cb *rcb_common);
-void hns_rcb_get_queue_mode(enum dsaf_mode dsaf_mode, int comm_index,
+void hns_rcb_get_queue_mode(enum dsaf_mode dsaf_mode,
                            u16 *max_vfn, u16 *max_q_per_vf);
 
 void hns_rcb_common_init_commit_hw(struct rcb_common_cb *rcb_common);
index 7ff195e..7c3b510 100644
 #ifndef _DSAF_REG_H_
 #define _DSAF_REG_H_
 
-#define HNS_DEBUG_RING_IRQ_IDX 55
-#define HNS_SERVICE_RING_IRQ_IDX 59
-#define HNS_DEBUG_RING_IRQ_OFFSET 2
-#define HNSV2_DEBUG_RING_IRQ_IDX 409
-#define HNSV2_SERVICE_RING_IRQ_IDX 25
-#define HNSV2_DEBUG_RING_IRQ_OFFSET 9
-
-#define DSAF_MAX_PORT_NUM_PER_CHIP 8
-#define DSAF_SERVICE_PORT_NUM_PER_DSAF 6
-#define DSAF_MAX_VM_NUM 128
-
-#define DSAF_COMM_DEV_NUM 3
-#define DSAF_PPE_INODE_BASE 6
-#define HNS_DSAF_COMM_SERVICE_NW_IDX 0
+#include <linux/regmap.h>
+#define HNS_DEBUG_RING_IRQ_IDX         0
+#define HNS_SERVICE_RING_IRQ_IDX       59
+#define HNSV2_SERVICE_RING_IRQ_IDX     25
+
+#define DSAF_MAX_PORT_NUM      6
+#define DSAF_MAX_VM_NUM                128
+
+#define DSAF_COMM_DEV_NUM      1
+#define DSAF_PPE_INODE_BASE    6
 #define DSAF_DEBUG_NW_NUM      2
 #define DSAF_SERVICE_NW_NUM    6
 #define DSAF_COMM_CHN          DSAF_SERVICE_NW_NUM
 #define DSAF_GE_NUM            ((DSAF_SERVICE_NW_NUM) + (DSAF_DEBUG_NW_NUM))
-#define DSAF_PORT_NUM          ((DSAF_SERVICE_NW_NUM) + (DSAF_DEBUG_NW_NUM))
 #define DSAF_XGE_NUM           DSAF_SERVICE_NW_NUM
 #define DSAF_PORT_TYPE_NUM 3
 #define DSAF_NODE_NUM          18
@@ -994,6 +989,19 @@ static inline u32 dsaf_read_reg(u8 __iomem *base, u32 reg)
        return readl(reg_addr + reg);
 }
 
+static inline void dsaf_write_syscon(struct regmap *base, u32 reg, u32 value)
+{
+       regmap_write(base, reg, value);
+}
+
+static inline u32 dsaf_read_syscon(struct regmap *base, u32 reg)
+{
+       unsigned int val;
+
+       regmap_read(base, reg, &val);
+       return val;
+}
+
 #define dsaf_read_dev(a, reg) \
        dsaf_read_reg((a)->io_base, (reg))
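
The syscon accessors above are thin wrappers over the regmap API; since
regmap_read() returns its value through an out-parameter,
dsaf_read_syscon() exists mainly to restore the readl()-style calling
convention the rest of the driver uses. A hedged sketch of obtaining and
using such a regmap (phandle name as in the dsaf probe hunk earlier;
error handling condensed):

	#include <linux/mfd/syscon.h>
	#include <linux/regmap.h>

	struct regmap *map;

	map = syscon_node_to_regmap(of_parse_phandle(np, "subctrl-syscon", 0));
	if (IS_ERR_OR_NULL(map))
		return -EINVAL;

	dsaf_write_syscon(map, reg, val);	/* regmap_write() underneath */
	val = dsaf_read_syscon(map, reg);	/* regmap_read() underneath */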
 
index 687204b..e621636 100644
@@ -1275,7 +1275,7 @@ void hns_nic_net_reinit(struct net_device *netdev)
 {
        struct hns_nic_priv *priv = netdev_priv(netdev);
 
-       priv->netdev->trans_start = jiffies;
+       netif_trans_update(priv->netdev);
        while (test_and_set_bit(NIC_STATE_REINITING, &priv->state))
                usleep_range(1000, 2000);
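
This hunk and the driver hunks that follow (hns, hp100, i596, sun3_82586,
ibm emac, e1000e, fm10k) are one mechanical conversion: open-coded
dev->trans_start = jiffies writes become netif_trans_update(), and reads
go through dev_trans_start(), as in the e1000e hunk below. For reference,
the helper is roughly the following (sketch of the net-next definition at
the time, not an authoritative copy):

	static inline void netif_trans_update(struct net_device *dev)
	{
		struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

		if (txq->trans_start != jiffies)
			txq->trans_start = jiffies;
	}

i.e. the tx watchdog timestamp now lives in the per-queue structure
rather than in struct net_device itself.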
 
@@ -1376,7 +1376,7 @@ static netdev_tx_t hns_nic_net_xmit(struct sk_buff *skb,
        ret = hns_nic_net_xmit_hw(ndev, skb,
                                  &tx_ring_data(priv, skb->queue_mapping));
        if (ret == NETDEV_TX_OK) {
-               ndev->trans_start = jiffies;
+               netif_trans_update(ndev);
                ndev->stats.tx_bytes += skb->len;
                ndev->stats.tx_packets++;
        }
@@ -1648,7 +1648,7 @@ static void hns_nic_reset_subtask(struct hns_nic_priv *priv)
 
        rtnl_lock();
        /* put off any impending NetWatchDogTimeout */
-       priv->netdev->trans_start = jiffies;
+       netif_trans_update(priv->netdev);
 
        if (type == HNAE_PORT_DEBUG) {
                hns_nic_net_reinit(priv->netdev);
@@ -1873,6 +1873,7 @@ static int hns_nic_dev_probe(struct platform_device *pdev)
        struct net_device *ndev;
        struct hns_nic_priv *priv;
        struct device_node *node = dev->of_node;
+       u32 port_id;
        int ret;
 
        ndev = alloc_etherdev_mq(sizeof(struct hns_nic_priv), NIC_MAX_Q_PER_VF);
@@ -1896,10 +1897,18 @@ static int hns_nic_dev_probe(struct platform_device *pdev)
                dev_err(dev, "not find ae-handle\n");
                goto out_read_prop_fail;
        }
-
-       ret = of_property_read_u32(node, "port-id", &priv->port_id);
-       if (ret)
-               goto out_read_prop_fail;
+       /* try to find port-idx-in-ae first */
+       ret = of_property_read_u32(node, "port-idx-in-ae", &port_id);
+       if (ret) {
+               /* fall back to port-id for compatibility with old dts */
+               ret = of_property_read_u32(node, "port-id", &port_id);
+               if (ret)
+                       goto out_read_prop_fail;
+               /* for old dts, we need to calculate the port offset */
+               port_id = port_id < HNS_SRV_OFFSET ? port_id + HNS_DEBUG_OFFSET
+                       : port_id - HNS_SRV_OFFSET;
+       }
+       priv->port_id = port_id;
 
        hns_init_mac_addr(ndev);
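
With HNS_DEBUG_OFFSET = 6 and HNS_SRV_OFFSET = 2 (added to hns_enet.h in
the next hunk), the old-dts fallback above remaps the legacy numbering
(debug ports first) onto the new port-idx-in-ae scheme (service ports
first):

	/* old dts port-id 0,1  (debug)   -> 0,1 + HNS_DEBUG_OFFSET = 6,7 */
	/* old dts port-id 2..7 (service) -> 2..7 - HNS_SRV_OFFSET  = 0..5 */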
 
index c68ab3d..337efa5 100644
@@ -18,6 +18,9 @@
 
 #include "hnae.h"
 
+#define HNS_DEBUG_OFFSET       6
+#define HNS_SRV_OFFSET         2
+
 enum hns_nic_state {
        NIC_STATE_TESTING = 0,
        NIC_STATE_RESETTING,
index 3daf2d4..631dbc7 100644
@@ -1102,7 +1102,7 @@ static int hp100_open(struct net_device *dev)
                return -EAGAIN;
        }
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_start_queue(dev);
 
        lp->lan_type = hp100_sense_lan(dev);
index 7ce6379..befb4ac 100644
@@ -1042,7 +1042,7 @@ static void i596_tx_timeout (struct net_device *dev)
                lp->last_restart = dev->stats.tx_packets;
        }
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue (dev);
 }
 
index c984998..3dbc53c 100644
@@ -960,7 +960,7 @@ static void i596_tx_timeout (struct net_device *dev)
                lp->last_restart = dev->stats.tx_packets;
        }
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue (dev);
 }
 
index 353f57f..21c84cc 100644
@@ -983,7 +983,7 @@ static void sun3_82586_timeout(struct net_device *dev)
                p->scb->cmd_cuc = CUC_START;
                sun3_attn586();
                WAIT_4_SCB_CMD();
-               dev->trans_start = jiffies; /* prevent tx timeout */
+               netif_trans_update(dev); /* prevent tx timeout */
                return 0;
        }
 #endif
@@ -996,7 +996,7 @@ static void sun3_82586_timeout(struct net_device *dev)
                sun3_82586_close(dev);
                sun3_82586_open(dev);
        }
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
 }
 
 /******************************************************
index 5d7db6c..4c9771d 100644
@@ -301,7 +301,7 @@ static inline void emac_netif_stop(struct emac_instance *dev)
        dev->no_mcast = 1;
        netif_addr_unlock(dev->ndev);
        netif_tx_unlock_bh(dev->ndev);
-       dev->ndev->trans_start = jiffies;       /* prevent tx timeout */
+       netif_trans_update(dev->ndev);  /* prevent tx timeout */
        mal_poll_disable(dev->mal, &dev->commac);
        netif_tx_disable(dev->ndev);
 }
@@ -1377,7 +1377,7 @@ static inline int emac_xmit_finish(struct emac_instance *dev, int len)
                DBG2(dev, "stopped TX queue" NL);
        }
 
-       ndev->trans_start = jiffies;
+       netif_trans_update(ndev);
        ++dev->stats.tx_packets;
        dev->stats.tx_bytes += len;
 
index d3b9d10..5b88cc6 100644
@@ -470,12 +470,38 @@ static struct mii_phy_def m88e1112_phy_def = {
        .ops            = &m88e1112_phy_ops,
 };
 
+static int ar8035_init(struct mii_phy *phy)
+{
+       phy_write(phy, 0x1d, 0x5); /* Address debug register 5 */
+       phy_write(phy, 0x1e, 0x2d47); /* Value copied from u-boot */
+       phy_write(phy, 0x1d, 0xb);    /* Address hib ctrl */
+       phy_write(phy, 0x1e, 0xbc20); /* Value copied from u-boot */
+
+       return 0;
+}
+
+static struct mii_phy_ops ar8035_phy_ops = {
+       .init           = ar8035_init,
+       .setup_aneg     = genmii_setup_aneg,
+       .setup_forced   = genmii_setup_forced,
+       .poll_link      = genmii_poll_link,
+       .read_link      = genmii_read_link,
+};
+
+static struct mii_phy_def ar8035_phy_def = {
+       .phy_id         = 0x004dd070,
+       .phy_id_mask    = 0xfffffff0,
+       .name           = "Atheros 8035 Gigabit Ethernet",
+       .ops            = &ar8035_phy_ops,
+};
+
 static struct mii_phy_def *mii_phy_table[] = {
        &et1011c_phy_def,
        &cis8201_phy_def,
        &bcm5248_phy_def,
        &m88e1111_phy_def,
        &m88e1112_phy_def,
+       &ar8035_phy_def,
        &genmii_phy_def,
        NULL
 };
index a7f16c3..269087c 100644
@@ -242,7 +242,7 @@ static void e1000e_dump(struct e1000_adapter *adapter)
                dev_info(&adapter->pdev->dev, "Net device Info\n");
                pr_info("Device Name     state            trans_start      last_rx\n");
                pr_info("%-15s %016lX %016lX %016lX\n", netdev->name,
-                       netdev->state, netdev->trans_start, netdev->last_rx);
+                       netdev->state, dev_trans_start(netdev), netdev->last_rx);
        }
 
        /* Print Registers */
index 206a466..e05aca9 100644
@@ -145,7 +145,7 @@ static void fm10k_reinit(struct fm10k_intfc *interface)
        WARN_ON(in_interrupt());
 
        /* put off any impending NetWatchDogTimeout */
-       netdev->trans_start = jiffies;
+       netif_trans_update(netdev);
 
        while (test_and_set_bit(__FM10K_RESETTING, &interface->state))
                usleep_range(1000, 2000);
index d25b3be..2a6a5d3 100644
 #define I40E_PRIV_FLAGS_LINKPOLL_FLAG  BIT(1)
 #define I40E_PRIV_FLAGS_FD_ATR         BIT(2)
 #define I40E_PRIV_FLAGS_VEB_STATS      BIT(3)
-#define I40E_PRIV_FLAGS_PS             BIT(4)
 #define I40E_PRIV_FLAGS_HW_ATR_EVICT   BIT(5)
 
 #define I40E_NVM_VERSION_LO_SHIFT  0
 #define XSTRINGIFY(bar) STRINGIFY(bar)
 
 #define I40E_RX_DESC(R, i)                     \
-       ((ring_is_16byte_desc_enabled(R))       \
-               ? (union i40e_32byte_rx_desc *) \
-                       (&(((union i40e_16byte_rx_desc *)((R)->desc))[i])) \
-               : (&(((union i40e_32byte_rx_desc *)((R)->desc))[i])))
+       (&(((union i40e_32byte_rx_desc *)((R)->desc))[i]))
 #define I40E_TX_DESC(R, i)                     \
        (&(((struct i40e_tx_desc *)((R)->desc))[i]))
 #define I40E_TX_CTXTDESC(R, i)                 \
@@ -202,6 +198,7 @@ struct i40e_lump_tracking {
 
 #define I40E_HKEY_ARRAY_SIZE ((I40E_PFQF_HKEY_MAX_INDEX + 1) * 4)
 #define I40E_HLUT_ARRAY_SIZE ((I40E_PFQF_HLUT_MAX_INDEX + 1) * 4)
+#define I40E_VF_HLUT_ARRAY_SIZE ((I40E_VFQF_HLUT1_MAX_INDEX + 1) * 4)
 
 enum i40e_fd_stat_idx {
        I40E_FD_STAT_ATR,
@@ -319,8 +316,6 @@ struct i40e_pf {
 #define I40E_FLAG_RX_CSUM_ENABLED              BIT_ULL(1)
 #define I40E_FLAG_MSI_ENABLED                  BIT_ULL(2)
 #define I40E_FLAG_MSIX_ENABLED                 BIT_ULL(3)
-#define I40E_FLAG_RX_1BUF_ENABLED              BIT_ULL(4)
-#define I40E_FLAG_RX_PS_ENABLED                        BIT_ULL(5)
 #define I40E_FLAG_RSS_ENABLED                  BIT_ULL(6)
 #define I40E_FLAG_VMDQ_ENABLED                 BIT_ULL(7)
 #define I40E_FLAG_FDIR_REQUIRES_REINIT         BIT_ULL(8)
@@ -329,7 +324,6 @@ struct i40e_pf {
 #ifdef I40E_FCOE
 #define I40E_FLAG_FCOE_ENABLED                 BIT_ULL(11)
 #endif /* I40E_FCOE */
-#define I40E_FLAG_16BYTE_RX_DESC_ENABLED       BIT_ULL(13)
 #define I40E_FLAG_CLEAN_ADMINQ                 BIT_ULL(14)
 #define I40E_FLAG_FILTER_SYNC                  BIT_ULL(15)
 #define I40E_FLAG_SERVICE_CLIENT_REQUESTED     BIT_ULL(16)
@@ -533,9 +527,7 @@ struct i40e_vsi {
        u8  *rss_lut_user;  /* User configured lookup table entries */
 
        u16 max_frame;
-       u16 rx_hdr_len;
        u16 rx_buf_len;
-       u8  dtype;
 
        /* List of q_vectors allocated to this VSI */
        struct i40e_q_vector **q_vectors;
@@ -553,7 +545,7 @@ struct i40e_vsi {
        u16 num_queue_pairs; /* Used tx and rx pairs */
        u16 num_desc;
        enum i40e_vsi_type type;  /* VSI type, e.g., LAN, FCoE, etc */
-       u16 vf_id;              /* Virtual function ID for SRIOV VSIs */
+       s16 vf_id;              /* Virtual function ID for SRIOV VSIs */
 
        struct i40e_tc_configuration tc_config;
        struct i40e_aqc_vsi_properties_data info;
index 43bb413..738b42a 100644
@@ -617,10 +617,6 @@ i40e_status i40e_init_adminq(struct i40e_hw *hw)
        hw->nvm_release_on_done = false;
        hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
 
-       ret_code = i40e_aq_set_hmc_resource_profile(hw,
-                                                   I40E_HMC_PROFILE_DEFAULT,
-                                                   0,
-                                                   NULL);
        ret_code = 0;
 
        /* success! */
index 8d5c65a..eacbe74 100644
@@ -78,17 +78,17 @@ struct i40e_aq_desc {
 #define I40E_AQ_FLAG_EI_SHIFT  14
 #define I40E_AQ_FLAG_FE_SHIFT  15
 
-#define I40E_AQ_FLAG_DD                (1 << I40E_AQ_FLAG_DD_SHIFT)  /* 0x1    */
-#define I40E_AQ_FLAG_CMP       (1 << I40E_AQ_FLAG_CMP_SHIFT) /* 0x2    */
-#define I40E_AQ_FLAG_ERR       (1 << I40E_AQ_FLAG_ERR_SHIFT) /* 0x4    */
-#define I40E_AQ_FLAG_VFE       (1 << I40E_AQ_FLAG_VFE_SHIFT) /* 0x8    */
-#define I40E_AQ_FLAG_LB                (1 << I40E_AQ_FLAG_LB_SHIFT)  /* 0x200  */
-#define I40E_AQ_FLAG_RD                (1 << I40E_AQ_FLAG_RD_SHIFT)  /* 0x400  */
-#define I40E_AQ_FLAG_VFC       (1 << I40E_AQ_FLAG_VFC_SHIFT) /* 0x800  */
-#define I40E_AQ_FLAG_BUF       (1 << I40E_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
-#define I40E_AQ_FLAG_SI                (1 << I40E_AQ_FLAG_SI_SHIFT)  /* 0x2000 */
-#define I40E_AQ_FLAG_EI                (1 << I40E_AQ_FLAG_EI_SHIFT)  /* 0x4000 */
-#define I40E_AQ_FLAG_FE                (1 << I40E_AQ_FLAG_FE_SHIFT)  /* 0x8000 */
+#define I40E_AQ_FLAG_DD                BIT(I40E_AQ_FLAG_DD_SHIFT)  /* 0x1    */
+#define I40E_AQ_FLAG_CMP       BIT(I40E_AQ_FLAG_CMP_SHIFT) /* 0x2    */
+#define I40E_AQ_FLAG_ERR       BIT(I40E_AQ_FLAG_ERR_SHIFT) /* 0x4    */
+#define I40E_AQ_FLAG_VFE       BIT(I40E_AQ_FLAG_VFE_SHIFT) /* 0x8    */
+#define I40E_AQ_FLAG_LB                BIT(I40E_AQ_FLAG_LB_SHIFT)  /* 0x200  */
+#define I40E_AQ_FLAG_RD                BIT(I40E_AQ_FLAG_RD_SHIFT)  /* 0x400  */
+#define I40E_AQ_FLAG_VFC       BIT(I40E_AQ_FLAG_VFC_SHIFT) /* 0x800  */
+#define I40E_AQ_FLAG_BUF       BIT(I40E_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
+#define I40E_AQ_FLAG_SI                BIT(I40E_AQ_FLAG_SI_SHIFT)  /* 0x2000 */
+#define I40E_AQ_FLAG_EI                BIT(I40E_AQ_FLAG_EI_SHIFT)  /* 0x4000 */
+#define I40E_AQ_FLAG_FE                BIT(I40E_AQ_FLAG_FE_SHIFT)  /* 0x8000 */
 
 /* error codes */
 enum i40e_admin_queue_err {
@@ -205,10 +205,6 @@ enum i40e_admin_queue_opc {
        i40e_aqc_opc_resume_port_tx                             = 0x041C,
        i40e_aqc_opc_configure_partition_bw                     = 0x041D,
 
-       /* hmc */
-       i40e_aqc_opc_query_hmc_resource_profile = 0x0500,
-       i40e_aqc_opc_set_hmc_resource_profile   = 0x0501,
-
        /* phy commands*/
        i40e_aqc_opc_get_phy_abilities          = 0x0600,
        i40e_aqc_opc_set_phy_config             = 0x0601,
@@ -429,6 +425,7 @@ struct i40e_aqc_list_capabilities_element_resp {
 #define I40E_AQ_CAP_ID_SDP             0x0062
 #define I40E_AQ_CAP_ID_MDIO            0x0063
 #define I40E_AQ_CAP_ID_WSR_PROT                0x0064
+#define I40E_AQ_CAP_ID_NVM_MGMT                0x0080
 #define I40E_AQ_CAP_ID_FLEX10          0x00F1
 #define I40E_AQ_CAP_ID_CEM             0x00F2
 
@@ -1585,27 +1582,6 @@ struct i40e_aqc_configure_partition_bw_data {
 
 I40E_CHECK_STRUCT_LEN(0x22, i40e_aqc_configure_partition_bw_data);
 
-/* Get and set the active HMC resource profile and status.
- * (direct 0x0500) and (direct 0x0501)
- */
-struct i40e_aq_get_set_hmc_resource_profile {
-       u8      pm_profile;
-       u8      pe_vf_enabled;
-       u8      reserved[14];
-};
-
-I40E_CHECK_CMD_LENGTH(i40e_aq_get_set_hmc_resource_profile);
-
-enum i40e_aq_hmc_profile {
-       /* I40E_HMC_PROFILE_NO_CHANGE    = 0, reserved */
-       I40E_HMC_PROFILE_DEFAULT        = 1,
-       I40E_HMC_PROFILE_FAVOR_VF       = 2,
-       I40E_HMC_PROFILE_EQUAL          = 3,
-};
-
-#define I40E_AQ_GET_HMC_RESOURCE_PROFILE_PM_MASK       0xF
-#define I40E_AQ_GET_HMC_RESOURCE_PROFILE_COUNT_MASK    0x3F
-
 /* Get PHY Abilities (indirect 0x0600) uses the generic indirect struct */
 
 /* set in param0 for get phy abilities to report qualified modules */
@@ -1652,11 +1628,11 @@ enum i40e_aq_phy_type {
 
 enum i40e_aq_link_speed {
        I40E_LINK_SPEED_UNKNOWN = 0,
-       I40E_LINK_SPEED_100MB   = (1 << I40E_LINK_SPEED_100MB_SHIFT),
-       I40E_LINK_SPEED_1GB     = (1 << I40E_LINK_SPEED_1000MB_SHIFT),
-       I40E_LINK_SPEED_10GB    = (1 << I40E_LINK_SPEED_10GB_SHIFT),
-       I40E_LINK_SPEED_40GB    = (1 << I40E_LINK_SPEED_40GB_SHIFT),
-       I40E_LINK_SPEED_20GB    = (1 << I40E_LINK_SPEED_20GB_SHIFT)
+       I40E_LINK_SPEED_100MB   = BIT(I40E_LINK_SPEED_100MB_SHIFT),
+       I40E_LINK_SPEED_1GB     = BIT(I40E_LINK_SPEED_1000MB_SHIFT),
+       I40E_LINK_SPEED_10GB    = BIT(I40E_LINK_SPEED_10GB_SHIFT),
+       I40E_LINK_SPEED_40GB    = BIT(I40E_LINK_SPEED_40GB_SHIFT),
+       I40E_LINK_SPEED_20GB    = BIT(I40E_LINK_SPEED_20GB_SHIFT)
 };
 
 struct i40e_aqc_module_desc {
@@ -1927,9 +1903,9 @@ I40E_CHECK_CMD_LENGTH(i40e_aqc_nvm_config_write);
 /* Used for 0x0704 as well as for 0x0705 commands */
 #define I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT                1
 #define I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
-                               (1 << I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+                               BIT(I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
 #define I40E_AQ_ANVM_FEATURE           0
-#define I40E_AQ_ANVM_IMMEDIATE_FIELD   (1 << FEATURE_OR_IMMEDIATE_SHIFT)
+#define I40E_AQ_ANVM_IMMEDIATE_FIELD   BIT(FEATURE_OR_IMMEDIATE_SHIFT)
 struct i40e_aqc_nvm_config_data_feature {
        __le16 feature_id;
 #define I40E_AQ_ANVM_FEATURE_OPTION_OEM_ONLY           0x01
@@ -2226,13 +2202,11 @@ I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_get_cee_dcb_cfg_resp);
  */
 struct i40e_aqc_lldp_set_local_mib {
 #define SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT       0
-#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK        (1 << SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT)
-#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK        (1 << \
-                                       SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK        BIT(SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT)
 #define SET_LOCAL_MIB_AC_TYPE_LOCAL_MIB        0x0
 #define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT   (1)
-#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_MASK    (1 << \
-                               SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_MASK \
+                       BIT(SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT)
 #define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS         0x1
        u8      type;
        u8      reserved0;
@@ -2250,7 +2224,7 @@ I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_set_local_mib);
 struct i40e_aqc_lldp_stop_start_specific_agent {
 #define I40E_AQC_START_SPECIFIC_AGENT_SHIFT    0
 #define I40E_AQC_START_SPECIFIC_AGENT_MASK \
-                               (1 << I40E_AQC_START_SPECIFIC_AGENT_SHIFT)
+                               BIT(I40E_AQC_START_SPECIFIC_AGENT_SHIFT)
        u8      command;
        u8      reserved[15];
 };
@@ -2303,7 +2277,7 @@ struct i40e_aqc_del_udp_tunnel_completion {
 I40E_CHECK_CMD_LENGTH(i40e_aqc_del_udp_tunnel_completion);
 
 struct i40e_aqc_get_set_rss_key {
-#define I40E_AQC_SET_RSS_KEY_VSI_VALID         (0x1 << 15)
+#define I40E_AQC_SET_RSS_KEY_VSI_VALID         BIT(15)
 #define I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT      0
 #define I40E_AQC_SET_RSS_KEY_VSI_ID_MASK       (0x3FF << \
                                        I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT)
@@ -2323,14 +2297,13 @@ struct i40e_aqc_get_set_rss_key_data {
 I40E_CHECK_STRUCT_LEN(0x34, i40e_aqc_get_set_rss_key_data);
 
 struct  i40e_aqc_get_set_rss_lut {
-#define I40E_AQC_SET_RSS_LUT_VSI_VALID         (0x1 << 15)
+#define I40E_AQC_SET_RSS_LUT_VSI_VALID         BIT(15)
 #define I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT      0
 #define I40E_AQC_SET_RSS_LUT_VSI_ID_MASK       (0x3FF << \
                                        I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT)
        __le16  vsi_id;
 #define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT  0
-#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK   (0x1 << \
-                                       I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
+#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK   BIT(I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
 
 #define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_VSI    0
 #define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_PF     1
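
The conversions above are mechanical: every open-coded (1 << shift) becomes
BIT(shift), which also drops the duplicated SET_LOCAL_MIB_AC_TYPE_DCBX_MASK
definition. BIT() is the standard kernel macro from <linux/bitops.h>:

#define BIT(nr)		(1UL << (nr))

Besides uniformity, shifting an unsigned long avoids the undefined behaviour of
shifting a signed 1 into the sign bit, though none of the masks in this file
reach bit 31.
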
index bf6b453..a4601d9 100644
@@ -217,7 +217,7 @@ struct i40e_client {
 #define I40E_CLIENT_FLAGS_LAUNCH_ON_PROBE      BIT(0)
 #define I40E_TX_FLAGS_NOTIFY_OTHER_EVENTS      BIT(2)
        enum i40e_client_type type;
-       struct i40e_client_ops *ops;    /* client ops provided by the client */
+       const struct i40e_client_ops *ops; /* client ops provided by the client */
 };
 
 static inline bool i40e_client_is_registered(struct i40e_client *client)
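
Constifying the ops pointer lets client drivers declare their callback table
const so it can live in read-only memory. A sketch of the caller side, with
hypothetical names (the i40e client interface defines the actual callbacks):

/* hypothetical client; the const qualifier is the point here */
static const struct i40e_client_ops demo_client_ops = {
	/* .open = demo_open, .close = demo_close, ... */
};
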
index f3c1d88..4a934e1 100644
@@ -61,6 +61,7 @@ static i40e_status i40e_set_mac_type(struct i40e_hw *hw)
                case I40E_DEV_ID_1G_BASE_T_X722:
                case I40E_DEV_ID_10G_BASE_T_X722:
                case I40E_DEV_ID_SFP_I_X722:
+               case I40E_DEV_ID_QSFP_I_X722:
                        hw->mac.type = I40E_MAC_X722;
                        break;
                default:
@@ -2037,6 +2038,76 @@ i40e_status i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw,
        return status;
 }
 
+/**
+ * i40e_aq_set_vsi_mc_promisc_on_vlan
+ * @hw: pointer to the hw struct
+ * @seid: vsi number
+ * @enable: set MAC L2 layer multicast promiscuous enable/disable for a given VLAN
+ * @vid: The VLAN tag filter - capture any multicast packet with this VLAN tag
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
+                                                        u16 seid, bool enable,
+                                                        u16 vid,
+                               struct i40e_asq_cmd_details *cmd_details)
+{
+       struct i40e_aq_desc desc;
+       struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
+               (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
+       enum i40e_status_code status;
+       u16 flags = 0;
+
+       i40e_fill_default_direct_cmd_desc(&desc,
+                                         i40e_aqc_opc_set_vsi_promiscuous_modes);
+
+       if (enable)
+               flags |= I40E_AQC_SET_VSI_PROMISC_MULTICAST;
+
+       cmd->promiscuous_flags = cpu_to_le16(flags);
+       cmd->valid_flags = cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_MULTICAST);
+       cmd->seid = cpu_to_le16(seid);
+       cmd->vlan_tag = cpu_to_le16(vid | I40E_AQC_SET_VSI_VLAN_VALID);
+
+       status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+       return status;
+}
+
+/**
+ * i40e_aq_set_vsi_uc_promisc_on_vlan
+ * @hw: pointer to the hw struct
+ * @seid: vsi number
+ * @enable: set MAC L2 layer unicast promiscuous enable/disable for a given VLAN
+ * @vid: The VLAN tag filter - capture any unicast packet with this VLAN tag
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
+                                                        u16 seid, bool enable,
+                                                        u16 vid,
+                               struct i40e_asq_cmd_details *cmd_details)
+{
+       struct i40e_aq_desc desc;
+       struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
+               (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
+       enum i40e_status_code status;
+       u16 flags = 0;
+
+       i40e_fill_default_direct_cmd_desc(&desc,
+                                         i40e_aqc_opc_set_vsi_promiscuous_modes);
+
+       if (enable)
+               flags |= I40E_AQC_SET_VSI_PROMISC_UNICAST;
+
+       cmd->promiscuous_flags = cpu_to_le16(flags);
+       cmd->valid_flags = cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_UNICAST);
+       cmd->seid = cpu_to_le16(seid);
+       cmd->vlan_tag = cpu_to_le16(vid | I40E_AQC_SET_VSI_VLAN_VALID);
+
+       status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+       return status;
+}
+
 /**
  * i40e_aq_set_vsi_broadcast
  * @hw: pointer to the hw struct
@@ -2667,10 +2738,7 @@ i40e_status i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
                        u16 *rules_used, u16 *rules_free)
 {
        /* Rule ID has to be valid except rule_type: INGRESS VLAN mirroring */
-       if (rule_type != I40E_AQC_MIRROR_RULE_TYPE_VLAN) {
-               if (!rule_id)
-                       return I40E_ERR_PARAM;
-       } else {
+       if (rule_type == I40E_AQC_MIRROR_RULE_TYPE_VLAN) {
                /* count and mr_list shall be valid for rule_type INGRESS VLAN
                 * mirroring. For other rule_type, count and rule_type should
                 * not matter.
@@ -2786,36 +2854,6 @@ i40e_status i40e_aq_debug_write_register(struct i40e_hw *hw,
        return status;
 }
 
-/**
- * i40e_aq_set_hmc_resource_profile
- * @hw: pointer to the hw struct
- * @profile: type of profile the HMC is to be set as
- * @pe_vf_enabled_count: the number of PE enabled VFs the system has
- * @cmd_details: pointer to command details structure or NULL
- *
- * set the HMC profile of the device.
- **/
-i40e_status i40e_aq_set_hmc_resource_profile(struct i40e_hw *hw,
-                               enum i40e_aq_hmc_profile profile,
-                               u8 pe_vf_enabled_count,
-                               struct i40e_asq_cmd_details *cmd_details)
-{
-       struct i40e_aq_desc desc;
-       struct i40e_aq_get_set_hmc_resource_profile *cmd =
-               (struct i40e_aq_get_set_hmc_resource_profile *)&desc.params.raw;
-       i40e_status status;
-
-       i40e_fill_default_direct_cmd_desc(&desc,
-                                       i40e_aqc_opc_set_hmc_resource_profile);
-
-       cmd->pm_profile = (u8)profile;
-       cmd->pe_vf_enabled = pe_vf_enabled_count;
-
-       status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
-       return status;
-}
-
 /**
  * i40e_aq_request_resource
  * @hw: pointer to the hw struct
@@ -3138,6 +3176,12 @@ static void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff,
                        p->wr_csr_prot = (u64)number;
                        p->wr_csr_prot |= (u64)logical_id << 32;
                        break;
+               case I40E_AQ_CAP_ID_NVM_MGMT:
+                       if (number & I40E_NVM_MGMT_SEC_REV_DISABLED)
+                               p->sec_rev_disabled = true;
+                       if (number & I40E_NVM_MGMT_UPDATE_DISABLED)
+                               p->update_disabled = true;
+                       break;
                default:
                        break;
                }
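
The two new AQ wrappers are identical except for the promiscuous flag they
set; both mark the VLAN as valid in cmd->vlan_tag. A hedged usage sketch (the
VLAN id is illustrative):

enum i40e_status_code ret;

/* capture all unicast traffic tagged with VLAN 100 on this VSI */
ret = i40e_aq_set_vsi_uc_promisc_on_vlan(hw, vsi->seid, true, 100, NULL);
if (ret)
	dev_err(&pf->pdev->dev,
		"set uc promisc on vlan failed, err %d\n", ret);
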
index 83dccf1..e6af8c8 100644
@@ -268,13 +268,11 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
                         rx_ring->queue_index,
                         rx_ring->reg_idx);
                dev_info(&pf->pdev->dev,
-                        "    rx_rings[%i]: rx_hdr_len = %d, rx_buf_len = %d, dtype = %d\n",
-                        i, rx_ring->rx_hdr_len,
-                        rx_ring->rx_buf_len,
-                        rx_ring->dtype);
+                        "    rx_rings[%i]: rx_buf_len = %d\n",
+                        i, rx_ring->rx_buf_len);
                dev_info(&pf->pdev->dev,
-                        "    rx_rings[%i]: hsplit = %d, next_to_use = %d, next_to_clean = %d, ring_active = %i\n",
-                        i, ring_is_ps_enabled(rx_ring),
+                        "    rx_rings[%i]: next_to_use = %d, next_to_clean = %d, ring_active = %i\n",
+                        i,
                         rx_ring->next_to_use,
                         rx_ring->next_to_clean,
                         rx_ring->ring_active);
@@ -325,9 +323,6 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
                         i, tx_ring->state,
                         tx_ring->queue_index,
                         tx_ring->reg_idx);
-               dev_info(&pf->pdev->dev,
-                        "    tx_rings[%i]: dtype = %d\n",
-                        i, tx_ring->dtype);
                dev_info(&pf->pdev->dev,
                         "    tx_rings[%i]: next_to_use = %d, next_to_clean = %d, ring_active = %i\n",
                         i,
@@ -365,8 +360,8 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
                 "    work_limit = %d\n",
                 vsi->work_limit);
        dev_info(&pf->pdev->dev,
-                "    max_frame = %d, rx_hdr_len = %d, rx_buf_len = %d dtype = %d\n",
-                vsi->max_frame, vsi->rx_hdr_len, vsi->rx_buf_len, vsi->dtype);
+                "    max_frame = %d, rx_buf_len = %d dtype = %d\n",
+                vsi->max_frame, vsi->rx_buf_len, 0);
        dev_info(&pf->pdev->dev,
                 "    num_q_vectors = %i, base_vector = %i\n",
                 vsi->num_q_vectors, vsi->base_vector);
@@ -591,13 +586,6 @@ static void i40e_dbg_dump_desc(int cnt, int vsi_seid, int ring_id, int desc_n,
                                         "   d[%03x] = 0x%016llx 0x%016llx\n",
                                         i, txd->buffer_addr,
                                         txd->cmd_type_offset_bsz);
-                       } else if (sizeof(union i40e_rx_desc) ==
-                                  sizeof(union i40e_16byte_rx_desc)) {
-                               rxd = I40E_RX_DESC(ring, i);
-                               dev_info(&pf->pdev->dev,
-                                        "   d[%03x] = 0x%016llx 0x%016llx\n",
-                                        i, rxd->read.pkt_addr,
-                                        rxd->read.hdr_addr);
                        } else {
                                rxd = I40E_RX_DESC(ring, i);
                                dev_info(&pf->pdev->dev,
@@ -619,13 +607,6 @@ static void i40e_dbg_dump_desc(int cnt, int vsi_seid, int ring_id, int desc_n,
                                 "vsi = %02i tx ring = %02i d[%03x] = 0x%016llx 0x%016llx\n",
                                 vsi_seid, ring_id, desc_n,
                                 txd->buffer_addr, txd->cmd_type_offset_bsz);
-               } else if (sizeof(union i40e_rx_desc) ==
-                          sizeof(union i40e_16byte_rx_desc)) {
-                       rxd = I40E_RX_DESC(ring, desc_n);
-                       dev_info(&pf->pdev->dev,
-                                "vsi = %02i rx ring = %02i d[%03x] = 0x%016llx 0x%016llx\n",
-                                vsi_seid, ring_id, desc_n,
-                                rxd->read.pkt_addr, rxd->read.hdr_addr);
                } else {
                        rxd = I40E_RX_DESC(ring, desc_n);
                        dev_info(&pf->pdev->dev,
index dd4457d..d701861 100644
@@ -45,6 +45,7 @@
 #define I40E_DEV_ID_1G_BASE_T_X722     0x37D1
 #define I40E_DEV_ID_10G_BASE_T_X722    0x37D2
 #define I40E_DEV_ID_SFP_I_X722         0x37D3
+#define I40E_DEV_ID_QSFP_I_X722                0x37D4
 
 #define i40e_is_40G_device(d)          ((d) == I40E_DEV_ID_QSFP_A  || \
                                         (d) == I40E_DEV_ID_QSFP_B  || \
index 8a83d45..51a994d 100644
@@ -235,7 +235,6 @@ static const char i40e_priv_flags_strings[][ETH_GSTRING_LEN] = {
        "LinkPolling",
        "flow-director-atr",
        "veb-stats",
-       "packet-split",
        "hw-atr-eviction",
 };
 
@@ -1275,6 +1274,13 @@ static int i40e_set_ringparam(struct net_device *netdev,
                }
 
                for (i = 0; i < vsi->num_queue_pairs; i++) {
+                       /* this is to allow wr32 to have something to write to
+                        * during early allocation of Rx buffers
+                        */
+                       u32 __iomem faketail = 0;
+                       struct i40e_ring *ring;
+                       u16 unused;
+
                        /* clone ring and setup updated count */
                        rx_rings[i] = *vsi->rx_rings[i];
                        rx_rings[i].count = new_rx_count;
@@ -1283,12 +1289,22 @@ static int i40e_set_ringparam(struct net_device *netdev,
                         */
                        rx_rings[i].desc = NULL;
                        rx_rings[i].rx_bi = NULL;
+                       rx_rings[i].tail = (u8 __iomem *)&faketail;
                        err = i40e_setup_rx_descriptors(&rx_rings[i]);
+                       if (err)
+                               goto rx_unwind;
+
+                       /* now allocate the Rx buffers to make sure the OS
+                        * has enough memory, any failure here means abort
+                        */
+                       ring = &rx_rings[i];
+                       unused = I40E_DESC_UNUSED(ring);
+                       err = i40e_alloc_rx_buffers(ring, unused);
+rx_unwind:
                        if (err) {
-                               while (i) {
-                                       i--;
+                               do {
                                        i40e_free_rx_resources(&rx_rings[i]);
-                               }
+                               } while (i--);
                                kfree(rx_rings);
                                rx_rings = NULL;
 
@@ -1314,6 +1330,17 @@ static int i40e_set_ringparam(struct net_device *netdev,
        if (rx_rings) {
                for (i = 0; i < vsi->num_queue_pairs; i++) {
                        i40e_free_rx_resources(vsi->rx_rings[i]);
+                       /* get the real tail offset */
+                       rx_rings[i].tail = vsi->rx_rings[i]->tail;
+                       /* this is to fake out the allocation routine
+                        * into thinking it has to realloc everything
+                        * but the recycling logic will let us re-use
+                        * the buffers allocated above
+                        */
+                       rx_rings[i].next_to_use = 0;
+                       rx_rings[i].next_to_clean = 0;
+                       rx_rings[i].next_to_alloc = 0;
+                       /* do a struct copy */
                        *vsi->rx_rings[i] = rx_rings[i];
                }
                kfree(rx_rings);
@@ -2506,7 +2533,6 @@ static int i40e_add_fdir_ethtool(struct i40e_vsi *vsi,
 
        if (!vsi)
                return -EINVAL;
-
        pf = vsi->back;
 
        if (!(pf->flags & I40E_FLAG_FD_SB_ENABLED))
@@ -2564,15 +2590,18 @@ static int i40e_add_fdir_ethtool(struct i40e_vsi *vsi,
        input->src_ip[0] = fsp->h_u.tcp_ip4_spec.ip4dst;
 
        if (ntohl(fsp->m_ext.data[1])) {
-               if (ntohl(fsp->h_ext.data[1]) >= pf->num_alloc_vfs) {
-                       netif_info(pf, drv, vsi->netdev, "Invalid VF id\n");
+               vf_id = ntohl(fsp->h_ext.data[1]);
+               if (vf_id >= pf->num_alloc_vfs) {
+                       netif_info(pf, drv, vsi->netdev,
+                                  "Invalid VF id %d\n", vf_id);
                        goto free_input;
                }
-               vf_id = ntohl(fsp->h_ext.data[1]);
                /* Find vsi id from vf id and override dest vsi */
                input->dest_vsi = pf->vf[vf_id].lan_vsi_id;
                if (input->q_index >= pf->vf[vf_id].num_queue_pairs) {
-                       netif_info(pf, drv, vsi->netdev, "Invalid queue id\n");
+                       netif_info(pf, drv, vsi->netdev,
+                                  "Invalid queue id %d for VF %d\n",
+                                  input->q_index, vf_id);
                        goto free_input;
                }
        }
@@ -2827,8 +2856,6 @@ static u32 i40e_get_priv_flags(struct net_device *dev)
                I40E_PRIV_FLAGS_FD_ATR : 0;
        ret_flags |= pf->flags & I40E_FLAG_VEB_STATS_ENABLED ?
                I40E_PRIV_FLAGS_VEB_STATS : 0;
-       ret_flags |= pf->flags & I40E_FLAG_RX_PS_ENABLED ?
-               I40E_PRIV_FLAGS_PS : 0;
        ret_flags |= pf->auto_disable_flags & I40E_FLAG_HW_ATR_EVICT_CAPABLE ?
                0 : I40E_PRIV_FLAGS_HW_ATR_EVICT;
 
@@ -2849,23 +2876,6 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags)
 
        /* NOTE: MFP is not settable */
 
-       /* allow the user to control the method of receive
-        * buffer DMA, whether the packet is split at header
-        * boundaries into two separate buffers.  In some cases
-        * one routine or the other will perform better.
-        */
-       if ((flags & I40E_PRIV_FLAGS_PS) &&
-           !(pf->flags & I40E_FLAG_RX_PS_ENABLED)) {
-               pf->flags |= I40E_FLAG_RX_PS_ENABLED;
-               pf->flags &= ~I40E_FLAG_RX_1BUF_ENABLED;
-               reset_required = true;
-       } else if (!(flags & I40E_PRIV_FLAGS_PS) &&
-                  (pf->flags & I40E_FLAG_RX_PS_ENABLED)) {
-               pf->flags &= ~I40E_FLAG_RX_PS_ENABLED;
-               pf->flags |= I40E_FLAG_RX_1BUF_ENABLED;
-               reset_required = true;
-       }
-
        if (flags & I40E_PRIV_FLAGS_LINKPOLL_FLAG)
                pf->flags |= I40E_FLAG_LINK_POLLING_ENABLED;
        else
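
In the i40e_set_ringparam() hunk above, the error path changes from a while
loop to a do/while because ring i itself may now hold Rx buffers when the
failure is detected (buffers are allocated before the old rings are swapped
out), so the failing index must be freed too. A tiny standalone illustration
of the coverage difference:

#include <stdio.h>

static void demo_free(int idx) { printf("free ring %d\n", idx); }

int main(void)
{
	int i = 2;	/* index at which setup failed */

	do {		/* prints 2, 1, 0: includes the failing entry */
		demo_free(i);
	} while (i--);
	return 0;
}

The old while (i) { i--; ... } form would only have freed entries 1 and 0.
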
index 0b071ce..46a3a67 100644
@@ -46,7 +46,7 @@ static const char i40e_driver_string[] =
 
 #define DRV_VERSION_MAJOR 1
 #define DRV_VERSION_MINOR 5
-#define DRV_VERSION_BUILD 5
+#define DRV_VERSION_BUILD 10
 #define DRV_VERSION __stringify(DRV_VERSION_MAJOR) "." \
             __stringify(DRV_VERSION_MINOR) "." \
             __stringify(DRV_VERSION_BUILD)    DRV_KERN
@@ -91,6 +91,7 @@ static const struct pci_device_id i40e_pci_tbl[] = {
        {PCI_VDEVICE(INTEL, I40E_DEV_ID_1G_BASE_T_X722), 0},
        {PCI_VDEVICE(INTEL, I40E_DEV_ID_10G_BASE_T_X722), 0},
        {PCI_VDEVICE(INTEL, I40E_DEV_ID_SFP_I_X722), 0},
+       {PCI_VDEVICE(INTEL, I40E_DEV_ID_QSFP_I_X722), 0},
        {PCI_VDEVICE(INTEL, I40E_DEV_ID_20G_KR2), 0},
        {PCI_VDEVICE(INTEL, I40E_DEV_ID_20G_KR2_A), 0},
        /* required last entry */
@@ -327,7 +328,7 @@ static void i40e_tx_timeout(struct net_device *netdev)
                unsigned long trans_start;
 
                q = netdev_get_tx_queue(netdev, i);
-               trans_start = q->trans_start ? : netdev->trans_start;
+               trans_start = q->trans_start;
                if (netif_xmit_stopped(q) &&
                    time_after(jiffies,
                               (trans_start + netdev->watchdog_timeo))) {
@@ -396,24 +397,6 @@ static void i40e_tx_timeout(struct net_device *netdev)
        pf->tx_timeout_recovery_level++;
 }
 
-/**
- * i40e_release_rx_desc - Store the new tail and head values
- * @rx_ring: ring to bump
- * @val: new head index
- **/
-static inline void i40e_release_rx_desc(struct i40e_ring *rx_ring, u32 val)
-{
-       rx_ring->next_to_use = val;
-
-       /* Force memory writes to complete before letting h/w
-        * know there are new descriptors to fetch.  (Only
-        * applicable for weak-ordered memory model archs,
-        * such as IA-64).
-        */
-       wmb();
-       writel(val, rx_ring->tail);
-}
-
 /**
  * i40e_get_vsi_stats_struct - Get System Network Statistics
  * @vsi: the VSI we care about
@@ -2098,6 +2081,12 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
                }
        }
 
+       /* if the VF is not trusted do not do promisc */
+       if ((vsi->type == I40E_VSI_SRIOV) && !pf->vf[vsi->vf_id].trusted) {
+               clear_bit(__I40E_FILTER_OVERFLOW_PROMISC, &vsi->state);
+               goto out;
+       }
+
        /* check for changes in promiscuous modes */
        if (changed_flags & IFF_ALLMULTI) {
                bool cur_multipromisc;
@@ -2866,34 +2855,21 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
        memset(&rx_ctx, 0, sizeof(rx_ctx));
 
        ring->rx_buf_len = vsi->rx_buf_len;
-       ring->rx_hdr_len = vsi->rx_hdr_len;
 
        rx_ctx.dbuff = ring->rx_buf_len >> I40E_RXQ_CTX_DBUFF_SHIFT;
-       rx_ctx.hbuff = ring->rx_hdr_len >> I40E_RXQ_CTX_HBUFF_SHIFT;
 
        rx_ctx.base = (ring->dma / 128);
        rx_ctx.qlen = ring->count;
 
-       if (vsi->back->flags & I40E_FLAG_16BYTE_RX_DESC_ENABLED) {
-               set_ring_16byte_desc_enabled(ring);
-               rx_ctx.dsize = 0;
-       } else {
-               rx_ctx.dsize = 1;
-       }
+       /* use 32 byte descriptors */
+       rx_ctx.dsize = 1;
 
-       rx_ctx.dtype = vsi->dtype;
-       if (vsi->dtype) {
-               set_ring_ps_enabled(ring);
-               rx_ctx.hsplit_0 = I40E_RX_SPLIT_L2      |
-                                 I40E_RX_SPLIT_IP      |
-                                 I40E_RX_SPLIT_TCP_UDP |
-                                 I40E_RX_SPLIT_SCTP;
-       } else {
-               rx_ctx.hsplit_0 = 0;
-       }
+       /* descriptor type is always zero
+        * rx_ctx.dtype = 0;
+        */
+       rx_ctx.hsplit_0 = 0;
 
-       rx_ctx.rxmax = min_t(u16, vsi->max_frame,
-                                 (chain_len * ring->rx_buf_len));
+       rx_ctx.rxmax = min_t(u16, vsi->max_frame, chain_len * ring->rx_buf_len);
        if (hw->revision_id == 0)
                rx_ctx.lrxqthresh = 0;
        else
@@ -2930,12 +2906,7 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
        ring->tail = hw->hw_addr + I40E_QRX_TAIL(pf_q);
        writel(0, ring->tail);
 
-       if (ring_is_ps_enabled(ring)) {
-               i40e_alloc_rx_headers(ring);
-               i40e_alloc_rx_buffers_ps(ring, I40E_DESC_UNUSED(ring));
-       } else {
-               i40e_alloc_rx_buffers_1buf(ring, I40E_DESC_UNUSED(ring));
-       }
+       i40e_alloc_rx_buffers(ring, I40E_DESC_UNUSED(ring));
 
        return 0;
 }
@@ -2974,40 +2945,18 @@ static int i40e_vsi_configure_rx(struct i40e_vsi *vsi)
        else
                vsi->max_frame = I40E_RXBUFFER_2048;
 
-       /* figure out correct receive buffer length */
-       switch (vsi->back->flags & (I40E_FLAG_RX_1BUF_ENABLED |
-                                   I40E_FLAG_RX_PS_ENABLED)) {
-       case I40E_FLAG_RX_1BUF_ENABLED:
-               vsi->rx_hdr_len = 0;
-               vsi->rx_buf_len = vsi->max_frame;
-               vsi->dtype = I40E_RX_DTYPE_NO_SPLIT;
-               break;
-       case I40E_FLAG_RX_PS_ENABLED:
-               vsi->rx_hdr_len = I40E_RX_HDR_SIZE;
-               vsi->rx_buf_len = I40E_RXBUFFER_2048;
-               vsi->dtype = I40E_RX_DTYPE_HEADER_SPLIT;
-               break;
-       default:
-               vsi->rx_hdr_len = I40E_RX_HDR_SIZE;
-               vsi->rx_buf_len = I40E_RXBUFFER_2048;
-               vsi->dtype = I40E_RX_DTYPE_SPLIT_ALWAYS;
-               break;
-       }
+       vsi->rx_buf_len = I40E_RXBUFFER_2048;
 
 #ifdef I40E_FCOE
        /* setup rx buffer for FCoE */
        if ((vsi->type == I40E_VSI_FCOE) &&
            (vsi->back->flags & I40E_FLAG_FCOE_ENABLED)) {
-               vsi->rx_hdr_len = 0;
                vsi->rx_buf_len = I40E_RXBUFFER_3072;
                vsi->max_frame = I40E_RXBUFFER_3072;
-               vsi->dtype = I40E_RX_DTYPE_NO_SPLIT;
        }
 
 #endif /* I40E_FCOE */
        /* round up for the chip's needs */
-       vsi->rx_hdr_len = ALIGN(vsi->rx_hdr_len,
-                               BIT_ULL(I40E_RXQ_CTX_HBUFF_SHIFT));
        vsi->rx_buf_len = ALIGN(vsi->rx_buf_len,
                                BIT_ULL(I40E_RXQ_CTX_DBUFF_SHIFT));
 
@@ -7523,10 +7472,6 @@ static int i40e_alloc_rings(struct i40e_vsi *vsi)
                rx_ring->count = vsi->num_desc;
                rx_ring->size = 0;
                rx_ring->dcb_tc = 0;
-               if (pf->flags & I40E_FLAG_16BYTE_RX_DESC_ENABLED)
-                       set_ring_16byte_desc_enabled(rx_ring);
-               else
-                       clear_ring_16byte_desc_enabled(rx_ring);
                rx_ring->rx_itr_setting = pf->rx_itr_default;
                vsi->rx_rings[i] = rx_ring;
        }
@@ -8082,24 +8027,45 @@ static int i40e_config_rss_reg(struct i40e_vsi *vsi, const u8 *seed,
 {
        struct i40e_pf *pf = vsi->back;
        struct i40e_hw *hw = &pf->hw;
+       u16 vf_id = vsi->vf_id;
        u8 i;
 
        /* Fill out hash function seed */
        if (seed) {
                u32 *seed_dw = (u32 *)seed;
 
-               for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-                       i40e_write_rx_ctl(hw, I40E_PFQF_HKEY(i), seed_dw[i]);
+               if (vsi->type == I40E_VSI_MAIN) {
+                       for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
+                               i40e_write_rx_ctl(hw, I40E_PFQF_HKEY(i),
+                                                 seed_dw[i]);
+               } else if (vsi->type == I40E_VSI_SRIOV) {
+                       for (i = 0; i <= I40E_VFQF_HKEY1_MAX_INDEX; i++)
+                               i40e_write_rx_ctl(hw,
+                                                 I40E_VFQF_HKEY1(i, vf_id),
+                                                 seed_dw[i]);
+               } else {
+                       dev_err(&pf->pdev->dev, "Cannot set RSS seed - invalid VSI type\n");
+               }
        }
 
        if (lut) {
                u32 *lut_dw = (u32 *)lut;
 
-               if (lut_size != I40E_HLUT_ARRAY_SIZE)
-                       return -EINVAL;
-
-               for (i = 0; i <= I40E_PFQF_HLUT_MAX_INDEX; i++)
-                       wr32(hw, I40E_PFQF_HLUT(i), lut_dw[i]);
+               if (vsi->type == I40E_VSI_MAIN) {
+                       if (lut_size != I40E_HLUT_ARRAY_SIZE)
+                               return -EINVAL;
+                       for (i = 0; i <= I40E_PFQF_HLUT_MAX_INDEX; i++)
+                               wr32(hw, I40E_PFQF_HLUT(i), lut_dw[i]);
+               } else if (vsi->type == I40E_VSI_SRIOV) {
+                       if (lut_size != I40E_VF_HLUT_ARRAY_SIZE)
+                               return -EINVAL;
+                       for (i = 0; i <= I40E_VFQF_HLUT_MAX_INDEX; i++)
+                               i40e_write_rx_ctl(hw,
+                                                 I40E_VFQF_HLUT1(i, vf_id),
+                                                 lut_dw[i]);
+               } else {
+                       dev_err(&pf->pdev->dev, "Cannot set RSS LUT - invalid VSI type\n");
+               }
        }
        i40e_flush(hw);
 
@@ -8450,11 +8416,6 @@ static int i40e_sw_init(struct i40e_pf *pf)
                    I40E_FLAG_MSI_ENABLED     |
                    I40E_FLAG_MSIX_ENABLED;
 
-       if (iommu_present(&pci_bus_type))
-               pf->flags |= I40E_FLAG_RX_PS_ENABLED;
-       else
-               pf->flags |= I40E_FLAG_RX_1BUF_ENABLED;
-
        /* Set default ITR */
        pf->rx_itr_default = I40E_ITR_DYNAMIC | I40E_ITR_RX_DEF;
        pf->tx_itr_default = I40E_ITR_DYNAMIC | I40E_ITR_TX_DEF;
@@ -9111,40 +9072,44 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
        np = netdev_priv(netdev);
        np->vsi = vsi;
 
-       netdev->hw_enc_features |= NETIF_F_IP_CSUM             |
-                                  NETIF_F_IPV6_CSUM           |
-                                  NETIF_F_TSO                 |
-                                  NETIF_F_TSO6                |
-                                  NETIF_F_TSO_ECN             |
-                                  NETIF_F_GSO_GRE             |
-                                  NETIF_F_GSO_UDP_TUNNEL      |
-                                  NETIF_F_GSO_UDP_TUNNEL_CSUM |
+       netdev->hw_enc_features |= NETIF_F_SG                   |
+                                  NETIF_F_IP_CSUM              |
+                                  NETIF_F_IPV6_CSUM            |
+                                  NETIF_F_HIGHDMA              |
+                                  NETIF_F_SOFT_FEATURES        |
+                                  NETIF_F_TSO                  |
+                                  NETIF_F_TSO_ECN              |
+                                  NETIF_F_TSO6                 |
+                                  NETIF_F_GSO_GRE              |
+                                  NETIF_F_GSO_GRE_CSUM         |
+                                  NETIF_F_GSO_IPIP             |
+                                  NETIF_F_GSO_SIT              |
+                                  NETIF_F_GSO_UDP_TUNNEL       |
+                                  NETIF_F_GSO_UDP_TUNNEL_CSUM  |
+                                  NETIF_F_GSO_PARTIAL          |
+                                  NETIF_F_SCTP_CRC             |
+                                  NETIF_F_RXHASH               |
+                                  NETIF_F_RXCSUM               |
                                   0;
 
-       netdev->features = NETIF_F_SG                  |
-                          NETIF_F_IP_CSUM             |
-                          NETIF_F_SCTP_CRC            |
-                          NETIF_F_HIGHDMA             |
-                          NETIF_F_GSO_UDP_TUNNEL      |
-                          NETIF_F_GSO_GRE             |
-                          NETIF_F_HW_VLAN_CTAG_TX     |
-                          NETIF_F_HW_VLAN_CTAG_RX     |
-                          NETIF_F_HW_VLAN_CTAG_FILTER |
-                          NETIF_F_IPV6_CSUM           |
-                          NETIF_F_TSO                 |
-                          NETIF_F_TSO_ECN             |
-                          NETIF_F_TSO6                |
-                          NETIF_F_RXCSUM              |
-                          NETIF_F_RXHASH              |
-                          0;
+       if (!(pf->flags & I40E_FLAG_OUTER_UDP_CSUM_CAPABLE))
+               netdev->gso_partial_features |= NETIF_F_GSO_UDP_TUNNEL_CSUM;
+
+       netdev->gso_partial_features |= NETIF_F_GSO_GRE_CSUM;
+
+       /* record features VLANs can make use of */
+       netdev->vlan_features |= netdev->hw_enc_features |
+                                NETIF_F_TSO_MANGLEID;
 
        if (!(pf->flags & I40E_FLAG_MFP_ENABLED))
-               netdev->features |= NETIF_F_NTUPLE;
-       if (pf->flags & I40E_FLAG_OUTER_UDP_CSUM_CAPABLE)
-               netdev->features |= NETIF_F_GSO_UDP_TUNNEL_CSUM;
+               netdev->hw_features |= NETIF_F_NTUPLE;
+
+       netdev->hw_features |= netdev->hw_enc_features  |
+                              NETIF_F_HW_VLAN_CTAG_TX  |
+                              NETIF_F_HW_VLAN_CTAG_RX;
 
-       /* copy netdev features into list of user selectable features */
-       netdev->hw_features |= netdev->features;
+       netdev->features |= netdev->hw_features | NETIF_F_HW_VLAN_CTAG_FILTER;
+       netdev->hw_enc_features |= NETIF_F_TSO_MANGLEID;
 
        if (vsi->type == I40E_VSI_MAIN) {
                SET_NETDEV_DEV(netdev, &pf->pdev->dev);
@@ -9183,12 +9148,7 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
 
        ether_addr_copy(netdev->dev_addr, mac_addr);
        ether_addr_copy(netdev->perm_addr, mac_addr);
-       /* vlan gets same features (except vlan offload)
-        * after any tweaks for specific VSI types
-        */
-       netdev->vlan_features = netdev->features & ~(NETIF_F_HW_VLAN_CTAG_TX |
-                                                    NETIF_F_HW_VLAN_CTAG_RX |
-                                                  NETIF_F_HW_VLAN_CTAG_FILTER);
+
        netdev->priv_flags |= IFF_UNICAST_FLT;
        netdev->priv_flags |= IFF_SUPP_NOFCS;
        /* Setup netdev TC information */
@@ -10687,11 +10647,9 @@ static void i40e_print_features(struct i40e_pf *pf)
 #ifdef CONFIG_PCI_IOV
        i += snprintf(&buf[i], REMAIN(i), " VFs: %d", pf->num_req_vfs);
 #endif
-       i += snprintf(&buf[i], REMAIN(i), " VSIs: %d QP: %d RX: %s",
+       i += snprintf(&buf[i], REMAIN(i), " VSIs: %d QP: %d",
                      pf->hw.func_caps.num_vsis,
-                     pf->vsi[pf->lan_vsi]->num_queue_pairs,
-                     pf->flags & I40E_FLAG_RX_PS_ENABLED ? "PS" : "1BUF");
-
+                     pf->vsi[pf->lan_vsi]->num_queue_pairs);
        if (pf->flags & I40E_FLAG_RSS_ENABLED)
                i += snprintf(&buf[i], REMAIN(i), " RSS");
        if (pf->flags & I40E_FLAG_FD_ATR_ENABLED)
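
The RSS rework in i40e_config_rss_reg() above selects the register family by
VSI type: PF VSIs program I40E_PFQF_HKEY/HLUT while SR-IOV VSIs program the
per-VF I40E_VFQF_HKEY1/HLUT1 registers, with correspondingly different LUT
length checks. A caller-side sketch of picking the matching LUT size (names
taken from the hunks above):

u16 lut_size = (vsi->type == I40E_VSI_MAIN) ?
	       I40E_HLUT_ARRAY_SIZE : I40E_VF_HLUT_ARRAY_SIZE;
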
index f2cea3d..954efe3 100644
@@ -693,10 +693,10 @@ i40e_status i40e_nvmupd_command(struct i40e_hw *hw,
        /* early check for status command and debug msgs */
        upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno);
 
-       i40e_debug(hw, I40E_DEBUG_NVM, "%s state %d nvm_release_on_hold %d cmd 0x%08x config 0x%08x offset 0x%08x data_size 0x%08x\n",
+       i40e_debug(hw, I40E_DEBUG_NVM, "%s state %d nvm_release_on_hold %d opc 0x%04x cmd 0x%08x config 0x%08x offset 0x%08x data_size 0x%08x\n",
                   i40e_nvm_update_state_str[upd_cmd],
                   hw->nvmupd_state,
-                  hw->nvm_release_on_done,
+                  hw->nvm_release_on_done, hw->nvm_wait_opcode,
                   cmd->command, cmd->config, cmd->offset, cmd->data_size);
 
        if (upd_cmd == I40E_NVMUPD_INVALID) {
@@ -710,7 +710,18 @@ i40e_status i40e_nvmupd_command(struct i40e_hw *hw,
         * going into the state machine
         */
        if (upd_cmd == I40E_NVMUPD_STATUS) {
+               if (!cmd->data_size) {
+                       *perrno = -EFAULT;
+                       return I40E_ERR_BUF_TOO_SHORT;
+               }
+
                bytes[0] = hw->nvmupd_state;
+
+               if (cmd->data_size >= 4) {
+                       bytes[1] = 0;
+                       *((u16 *)&bytes[2]) = hw->nvm_wait_opcode;
+               }
+
                return 0;
        }
 
@@ -729,6 +740,14 @@ i40e_status i40e_nvmupd_command(struct i40e_hw *hw,
 
        case I40E_NVMUPD_STATE_INIT_WAIT:
        case I40E_NVMUPD_STATE_WRITE_WAIT:
+               /* if we need to stop waiting for an event, clear
+                * the wait info and return before doing anything else
+                */
+               if (cmd->offset == 0xffff) {
+                       i40e_nvmupd_check_wait_event(hw, hw->nvm_wait_opcode);
+                       return 0;
+               }
+
                status = I40E_ERR_NOT_READY;
                *perrno = -EBUSY;
                break;
@@ -800,6 +819,7 @@ static i40e_status i40e_nvmupd_state_init(struct i40e_hw *hw,
                                i40e_release_nvm(hw);
                        } else {
                                hw->nvm_release_on_done = true;
+                               hw->nvm_wait_opcode = i40e_aqc_opc_nvm_erase;
                                hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
                        }
                }
@@ -816,6 +836,7 @@ static i40e_status i40e_nvmupd_state_init(struct i40e_hw *hw,
                                i40e_release_nvm(hw);
                        } else {
                                hw->nvm_release_on_done = true;
+                               hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
                                hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
                        }
                }
@@ -828,10 +849,12 @@ static i40e_status i40e_nvmupd_state_init(struct i40e_hw *hw,
                                                     hw->aq.asq_last_status);
                } else {
                        status = i40e_nvmupd_nvm_write(hw, cmd, bytes, perrno);
-                       if (status)
+                       if (status) {
                                i40e_release_nvm(hw);
-                       else
+                       } else {
+                               hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
                                hw->nvmupd_state = I40E_NVMUPD_STATE_WRITE_WAIT;
+                       }
                }
                break;
 
@@ -850,6 +873,7 @@ static i40e_status i40e_nvmupd_state_init(struct i40e_hw *hw,
                                i40e_release_nvm(hw);
                        } else {
                                hw->nvm_release_on_done = true;
+                               hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
                                hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
                        }
                }
@@ -940,8 +964,10 @@ retry:
        switch (upd_cmd) {
        case I40E_NVMUPD_WRITE_CON:
                status = i40e_nvmupd_nvm_write(hw, cmd, bytes, perrno);
-               if (!status)
+               if (!status) {
+                       hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
                        hw->nvmupd_state = I40E_NVMUPD_STATE_WRITE_WAIT;
+               }
                break;
 
        case I40E_NVMUPD_WRITE_LCB:
@@ -954,6 +980,7 @@ retry:
                        hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
                } else {
                        hw->nvm_release_on_done = true;
+                       hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
                        hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
                }
                break;
@@ -967,6 +994,7 @@ retry:
                                   -EIO;
                        hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
                } else {
+                       hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
                        hw->nvmupd_state = I40E_NVMUPD_STATE_WRITE_WAIT;
                }
                break;
@@ -981,6 +1009,7 @@ retry:
                        hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
                } else {
                        hw->nvm_release_on_done = true;
+                       hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
                        hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
                }
                break;
@@ -1036,14 +1065,14 @@ retry:
  **/
 void i40e_nvmupd_check_wait_event(struct i40e_hw *hw, u16 opcode)
 {
-       if (opcode == i40e_aqc_opc_nvm_erase ||
-           opcode == i40e_aqc_opc_nvm_update) {
+       if (opcode == hw->nvm_wait_opcode) {
                i40e_debug(hw, I40E_DEBUG_NVM,
                           "NVMUPD: clearing wait on opcode 0x%04x\n", opcode);
                if (hw->nvm_release_on_done) {
                        i40e_release_nvm(hw);
                        hw->nvm_release_on_done = false;
                }
+               hw->nvm_wait_opcode = 0;
 
                switch (hw->nvmupd_state) {
                case I40E_NVMUPD_STATE_INIT_WAIT:
@@ -1220,6 +1249,12 @@ static i40e_status i40e_nvmupd_exec_aq(struct i40e_hw *hw,
                *perrno = i40e_aq_rc_to_posix(status, hw->aq.asq_last_status);
        }
 
+       /* should we wait for a followup event? */
+       if (cmd->offset) {
+               hw->nvm_wait_opcode = cmd->offset;
+               hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
+       }
+
        return status;
 }
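
The NVM update state machine now records the specific admin-queue opcode it is
blocked on in hw->nvm_wait_opcode; i40e_nvmupd_check_wait_event() only
releases the wait when that opcode completes, and a command with
offset == 0xffff cancels the wait from userspace. A sketch of the event side,
with the plumbing simplified (the driver's admin-queue service task makes an
equivalent call):

u16 opcode = le16_to_cpu(event.desc.opcode);

/* let the NVM update logic see every AQ completion */
i40e_nvmupd_check_wait_event(&pf->hw, opcode);
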
 
index 134035f..4c8977c 100644
@@ -133,6 +133,14 @@ i40e_status i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw,
                u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details);
 i40e_status i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw,
                u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details);
+enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
+                                                        u16 seid, bool enable,
+                                                        u16 vid,
+                               struct i40e_asq_cmd_details *cmd_details);
+enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
+                                                        u16 seid, bool enable,
+                                                        u16 vid,
+                               struct i40e_asq_cmd_details *cmd_details);
 i40e_status i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw,
                                u16 seid, bool enable,
                                struct i40e_asq_cmd_details *cmd_details);
@@ -228,10 +236,6 @@ i40e_status i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw,
                                struct i40e_asq_cmd_details *cmd_details);
 i40e_status i40e_aq_dcb_updated(struct i40e_hw *hw,
                                struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_set_hmc_resource_profile(struct i40e_hw *hw,
-                               enum i40e_aq_hmc_profile profile,
-                               u8 pe_vf_enabled_count,
-                               struct i40e_asq_cmd_details *cmd_details);
 i40e_status i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw,
                                u16 seid, u16 credit, u8 max_bw,
                                struct i40e_asq_cmd_details *cmd_details);
index 565ca7c..a1b878a 100644
@@ -158,9 +158,10 @@ static int i40e_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
 static int i40e_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
 {
        struct i40e_pf *pf = container_of(ptp, struct i40e_pf, ptp_caps);
-       struct timespec64 now, then = ns_to_timespec64(delta);
+       struct timespec64 now, then;
        unsigned long flags;
 
+       then = ns_to_timespec64(delta);
        spin_lock_irqsave(&pf->tmreg_lock, flags);
 
        i40e_ptp_read(pf, &now);
index 39efba0..b0edffe 100644
@@ -1024,7 +1024,6 @@ err:
 void i40e_clean_rx_ring(struct i40e_ring *rx_ring)
 {
        struct device *dev = rx_ring->dev;
-       struct i40e_rx_buffer *rx_bi;
        unsigned long bi_size;
        u16 i;
 
@@ -1032,48 +1031,22 @@ void i40e_clean_rx_ring(struct i40e_ring *rx_ring)
        if (!rx_ring->rx_bi)
                return;
 
-       if (ring_is_ps_enabled(rx_ring)) {
-               int bufsz = ALIGN(rx_ring->rx_hdr_len, 256) * rx_ring->count;
-
-               rx_bi = &rx_ring->rx_bi[0];
-               if (rx_bi->hdr_buf) {
-                       dma_free_coherent(dev,
-                                         bufsz,
-                                         rx_bi->hdr_buf,
-                                         rx_bi->dma);
-                       for (i = 0; i < rx_ring->count; i++) {
-                               rx_bi = &rx_ring->rx_bi[i];
-                               rx_bi->dma = 0;
-                               rx_bi->hdr_buf = NULL;
-                       }
-               }
-       }
        /* Free all the Rx ring sk_buffs */
        for (i = 0; i < rx_ring->count; i++) {
-               rx_bi = &rx_ring->rx_bi[i];
-               if (rx_bi->dma) {
-                       dma_unmap_single(dev,
-                                        rx_bi->dma,
-                                        rx_ring->rx_buf_len,
-                                        DMA_FROM_DEVICE);
-                       rx_bi->dma = 0;
-               }
+               struct i40e_rx_buffer *rx_bi = &rx_ring->rx_bi[i];
+
                if (rx_bi->skb) {
                        dev_kfree_skb(rx_bi->skb);
                        rx_bi->skb = NULL;
                }
-               if (rx_bi->page) {
-                       if (rx_bi->page_dma) {
-                               dma_unmap_page(dev,
-                                              rx_bi->page_dma,
-                                              PAGE_SIZE,
-                                              DMA_FROM_DEVICE);
-                               rx_bi->page_dma = 0;
-                       }
-                       __free_page(rx_bi->page);
-                       rx_bi->page = NULL;
-                       rx_bi->page_offset = 0;
-               }
+               if (!rx_bi->page)
+                       continue;
+
+               dma_unmap_page(dev, rx_bi->dma, PAGE_SIZE, DMA_FROM_DEVICE);
+               __free_pages(rx_bi->page, 0);
+
+               rx_bi->page = NULL;
+               rx_bi->page_offset = 0;
        }
 
        bi_size = sizeof(struct i40e_rx_buffer) * rx_ring->count;
@@ -1082,6 +1055,7 @@ void i40e_clean_rx_ring(struct i40e_ring *rx_ring)
        /* Zero out the descriptor ring */
        memset(rx_ring->desc, 0, rx_ring->size);
 
+       rx_ring->next_to_alloc = 0;
        rx_ring->next_to_clean = 0;
        rx_ring->next_to_use = 0;
 }
@@ -1105,37 +1079,6 @@ void i40e_free_rx_resources(struct i40e_ring *rx_ring)
        }
 }
 
-/**
- * i40e_alloc_rx_headers - allocate rx header buffers
- * @rx_ring: ring to alloc buffers
- *
- * Allocate rx header buffers for the entire ring. As these are static,
- * this is only called when setting up a new ring.
- **/
-void i40e_alloc_rx_headers(struct i40e_ring *rx_ring)
-{
-       struct device *dev = rx_ring->dev;
-       struct i40e_rx_buffer *rx_bi;
-       dma_addr_t dma;
-       void *buffer;
-       int buf_size;
-       int i;
-
-       if (rx_ring->rx_bi[0].hdr_buf)
-               return;
-       /* Make sure the buffers don't cross cache line boundaries. */
-       buf_size = ALIGN(rx_ring->rx_hdr_len, 256);
-       buffer = dma_alloc_coherent(dev, buf_size * rx_ring->count,
-                                   &dma, GFP_KERNEL);
-       if (!buffer)
-               return;
-       for (i = 0; i < rx_ring->count; i++) {
-               rx_bi = &rx_ring->rx_bi[i];
-               rx_bi->dma = dma + (i * buf_size);
-               rx_bi->hdr_buf = buffer + (i * buf_size);
-       }
-}
-
 /**
  * i40e_setup_rx_descriptors - Allocate Rx descriptors
  * @rx_ring: Rx descriptor ring (for a specific queue) to setup
@@ -1157,9 +1100,7 @@ int i40e_setup_rx_descriptors(struct i40e_ring *rx_ring)
        u64_stats_init(&rx_ring->syncp);
 
        /* Round up to nearest 4K */
-       rx_ring->size = ring_is_16byte_desc_enabled(rx_ring)
-               ? rx_ring->count * sizeof(union i40e_16byte_rx_desc)
-               : rx_ring->count * sizeof(union i40e_32byte_rx_desc);
+       rx_ring->size = rx_ring->count * sizeof(union i40e_32byte_rx_desc);
        rx_ring->size = ALIGN(rx_ring->size, 4096);
        rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
                                           &rx_ring->dma, GFP_KERNEL);
@@ -1170,6 +1111,7 @@ int i40e_setup_rx_descriptors(struct i40e_ring *rx_ring)
                goto err;
        }
 
+       rx_ring->next_to_alloc = 0;
        rx_ring->next_to_clean = 0;
        rx_ring->next_to_use = 0;
 
@@ -1188,6 +1130,10 @@ err:
 static inline void i40e_release_rx_desc(struct i40e_ring *rx_ring, u32 val)
 {
        rx_ring->next_to_use = val;
+
+       /* update next to alloc since we have filled the ring */
+       rx_ring->next_to_alloc = val;
+
        /* Force memory writes to complete before letting h/w
         * know there are new descriptors to fetch.  (Only
         * applicable for weak-ordered memory model archs,
@@ -1198,160 +1144,122 @@ static inline void i40e_release_rx_desc(struct i40e_ring *rx_ring, u32 val)
 }
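
next_to_alloc is the cursor for the new page-recycling scheme: it trails
next_to_use and marks the slot that will receive the next recycled page. A
sketch of the reuse path based on the analogous igb/ixgbe pattern
(i40e_reuse_rx_page() belongs to this same rework but is not shown in the
hunks here):

static void i40e_reuse_rx_page(struct i40e_ring *rx_ring,
			       struct i40e_rx_buffer *old_buff)
{
	struct i40e_rx_buffer *new_buff;
	u16 nta = rx_ring->next_to_alloc;

	new_buff = &rx_ring->rx_bi[nta];

	/* advance, and wrap, the next-to-alloc cursor */
	nta++;
	rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;

	/* hand the still-mapped page straight to the next free slot */
	*new_buff = *old_buff;
}
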
 
 /**
- * i40e_alloc_rx_buffers_ps - Replace used receive buffers; packet split
- * @rx_ring: ring to place buffers on
- * @cleaned_count: number of buffers to replace
+ * i40e_alloc_mapped_page - recycle or make a new page
+ * @rx_ring: ring to use
+ * @bi: rx_buffer struct to modify
  *
- * Returns true if any errors on allocation
+ * Returns true if the page was successfully allocated or
+ * reused.
  **/
-bool i40e_alloc_rx_buffers_ps(struct i40e_ring *rx_ring, u16 cleaned_count)
+static bool i40e_alloc_mapped_page(struct i40e_ring *rx_ring,
+                                  struct i40e_rx_buffer *bi)
 {
-       u16 i = rx_ring->next_to_use;
-       union i40e_rx_desc *rx_desc;
-       struct i40e_rx_buffer *bi;
-       const int current_node = numa_node_id();
+       struct page *page = bi->page;
+       dma_addr_t dma;
 
-       /* do nothing if no valid netdev defined */
-       if (!rx_ring->netdev || !cleaned_count)
-               return false;
+       /* since we are recycling buffers we should seldom need to alloc */
+       if (likely(page)) {
+               rx_ring->rx_stats.page_reuse_count++;
+               return true;
+       }
 
-       while (cleaned_count--) {
-               rx_desc = I40E_RX_DESC(rx_ring, i);
-               bi = &rx_ring->rx_bi[i];
+       /* alloc new page for storage */
+       page = dev_alloc_page();
+       if (unlikely(!page)) {
+               rx_ring->rx_stats.alloc_page_failed++;
+               return false;
+       }
 
-               if (bi->skb) /* desc is in use */
-                       goto no_buffers;
+       /* map page for use */
+       dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
 
-       /* If we've been moved to a different NUMA node, release the
-        * page so we can get a new one on the current node.
+       /* if mapping failed free memory back to system since
+        * there isn't much point in holding memory we can't use
         */
-               if (bi->page &&  page_to_nid(bi->page) != current_node) {
-                       dma_unmap_page(rx_ring->dev,
-                                      bi->page_dma,
-                                      PAGE_SIZE,
-                                      DMA_FROM_DEVICE);
-                       __free_page(bi->page);
-                       bi->page = NULL;
-                       bi->page_dma = 0;
-                       rx_ring->rx_stats.realloc_count++;
-               } else if (bi->page) {
-                       rx_ring->rx_stats.page_reuse_count++;
-               }
-
-               if (!bi->page) {
-                       bi->page = alloc_page(GFP_ATOMIC);
-                       if (!bi->page) {
-                               rx_ring->rx_stats.alloc_page_failed++;
-                               goto no_buffers;
-                       }
-                       bi->page_dma = dma_map_page(rx_ring->dev,
-                                                   bi->page,
-                                                   0,
-                                                   PAGE_SIZE,
-                                                   DMA_FROM_DEVICE);
-                       if (dma_mapping_error(rx_ring->dev, bi->page_dma)) {
-                               rx_ring->rx_stats.alloc_page_failed++;
-                               __free_page(bi->page);
-                               bi->page = NULL;
-                               bi->page_dma = 0;
-                               bi->page_offset = 0;
-                               goto no_buffers;
-                       }
-                       bi->page_offset = 0;
-               }
-
-               /* Refresh the desc even if buffer_addrs didn't change
-                * because each write-back erases this info.
-                */
-               rx_desc->read.pkt_addr =
-                               cpu_to_le64(bi->page_dma + bi->page_offset);
-               rx_desc->read.hdr_addr = cpu_to_le64(bi->dma);
-               i++;
-               if (i == rx_ring->count)
-                       i = 0;
+       if (dma_mapping_error(rx_ring->dev, dma)) {
+               __free_pages(page, 0);
+               rx_ring->rx_stats.alloc_page_failed++;
+               return false;
        }
 
-       if (rx_ring->next_to_use != i)
-               i40e_release_rx_desc(rx_ring, i);
+       bi->dma = dma;
+       bi->page = page;
+       bi->page_offset = 0;
 
-       return false;
+       return true;
+}
 
-no_buffers:
-       if (rx_ring->next_to_use != i)
-               i40e_release_rx_desc(rx_ring, i);
+/**
+ * i40e_receive_skb - Send a completed packet up the stack
+ * @rx_ring:  rx ring in play
+ * @skb: packet to send up
+ * @vlan_tag: vlan tag for packet
+ **/
+static void i40e_receive_skb(struct i40e_ring *rx_ring,
+                            struct sk_buff *skb, u16 vlan_tag)
+{
+       struct i40e_q_vector *q_vector = rx_ring->q_vector;
 
-       /* make sure to come back via polling to try again after
-        * allocation failure
-        */
-       return true;
+       if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
+           (vlan_tag & VLAN_VID_MASK))
+               __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
+
+       napi_gro_receive(&q_vector->napi, skb);
 }
 
 /**
- * i40e_alloc_rx_buffers_1buf - Replace used receive buffers; single buffer
+ * i40e_alloc_rx_buffers - Replace used receive buffers
  * @rx_ring: ring to place buffers on
  * @cleaned_count: number of buffers to replace
  *
- * Returns true if any errors on allocation
+ * Returns false if all allocations were successful, true if any fail
  **/
-bool i40e_alloc_rx_buffers_1buf(struct i40e_ring *rx_ring, u16 cleaned_count)
+bool i40e_alloc_rx_buffers(struct i40e_ring *rx_ring, u16 cleaned_count)
 {
-       u16 i = rx_ring->next_to_use;
+       u16 ntu = rx_ring->next_to_use;
        union i40e_rx_desc *rx_desc;
        struct i40e_rx_buffer *bi;
-       struct sk_buff *skb;
 
        /* do nothing if no valid netdev defined */
        if (!rx_ring->netdev || !cleaned_count)
                return false;
 
-       while (cleaned_count--) {
-               rx_desc = I40E_RX_DESC(rx_ring, i);
-               bi = &rx_ring->rx_bi[i];
-               skb = bi->skb;
-
-               if (!skb) {
-                       skb = __netdev_alloc_skb_ip_align(rx_ring->netdev,
-                                                         rx_ring->rx_buf_len,
-                                                         GFP_ATOMIC |
-                                                         __GFP_NOWARN);
-                       if (!skb) {
-                               rx_ring->rx_stats.alloc_buff_failed++;
-                               goto no_buffers;
-                       }
-                       /* initialize queue mapping */
-                       skb_record_rx_queue(skb, rx_ring->queue_index);
-                       bi->skb = skb;
-               }
+       rx_desc = I40E_RX_DESC(rx_ring, ntu);
+       bi = &rx_ring->rx_bi[ntu];
 
-               if (!bi->dma) {
-                       bi->dma = dma_map_single(rx_ring->dev,
-                                                skb->data,
-                                                rx_ring->rx_buf_len,
-                                                DMA_FROM_DEVICE);
-                       if (dma_mapping_error(rx_ring->dev, bi->dma)) {
-                               rx_ring->rx_stats.alloc_buff_failed++;
-                               bi->dma = 0;
-                               dev_kfree_skb(bi->skb);
-                               bi->skb = NULL;
-                               goto no_buffers;
-                       }
-               }
+       do {
+               if (!i40e_alloc_mapped_page(rx_ring, bi))
+                       goto no_buffers;
 
-               rx_desc->read.pkt_addr = cpu_to_le64(bi->dma);
+               /* Refresh the desc even if buffer_addrs didn't change
+                * because each write-back erases this info.
+                */
+               rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset);
                rx_desc->read.hdr_addr = 0;
-               i++;
-               if (i == rx_ring->count)
-                       i = 0;
-       }
 
-       if (rx_ring->next_to_use != i)
-               i40e_release_rx_desc(rx_ring, i);
+               rx_desc++;
+               bi++;
+               ntu++;
+               if (unlikely(ntu == rx_ring->count)) {
+                       rx_desc = I40E_RX_DESC(rx_ring, 0);
+                       bi = rx_ring->rx_bi;
+                       ntu = 0;
+               }
+
+               /* clear the status bits for the next_to_use descriptor */
+               rx_desc->wb.qword1.status_error_len = 0;
+
+               cleaned_count--;
+       } while (cleaned_count);
+
+       if (rx_ring->next_to_use != ntu)
+               i40e_release_rx_desc(rx_ring, ntu);
 
        return false;
 
 no_buffers:
-       if (rx_ring->next_to_use != i)
-               i40e_release_rx_desc(rx_ring, i);
+       if (rx_ring->next_to_use != ntu)
+               i40e_release_rx_desc(rx_ring, ntu);
 
        /* make sure to come back via polling to try again after
         * allocation failure
@@ -1359,42 +1267,36 @@ no_buffers:
        return true;
 }
 
-/**
- * i40e_receive_skb - Send a completed packet up the stack
- * @rx_ring:  rx ring in play
- * @skb: packet to send up
- * @vlan_tag: vlan tag for packet
- **/
-static void i40e_receive_skb(struct i40e_ring *rx_ring,
-                            struct sk_buff *skb, u16 vlan_tag)
-{
-       struct i40e_q_vector *q_vector = rx_ring->q_vector;
-
-       if (vlan_tag & VLAN_VID_MASK)
-               __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
-
-       napi_gro_receive(&q_vector->napi, skb);
-}
-
 /**
  * i40e_rx_checksum - Indicate in skb if hw indicated a good cksum
  * @vsi: the VSI we care about
  * @skb: skb currently being received and modified
- * @rx_status: status value of last descriptor in packet
- * @rx_error: error value of last descriptor in packet
- * @rx_ptype: ptype value of last descriptor in packet
+ * @rx_desc: the receive descriptor
+ *
+ * skb->protocol must be set before this function is called
  **/
 static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
                                    struct sk_buff *skb,
-                                   u32 rx_status,
-                                   u32 rx_error,
-                                   u16 rx_ptype)
+                                   union i40e_rx_desc *rx_desc)
 {
-       struct i40e_rx_ptype_decoded decoded = decode_rx_desc_ptype(rx_ptype);
-       bool ipv4, ipv6, ipv4_tunnel, ipv6_tunnel;
+       struct i40e_rx_ptype_decoded decoded;
+       bool ipv4, ipv6, tunnel = false;
+       u32 rx_error, rx_status;
+       u8 ptype;
+       u64 qword;
+
+       qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+       ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT;
+       rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >>
+                  I40E_RXD_QW1_ERROR_SHIFT;
+       rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
+                   I40E_RXD_QW1_STATUS_SHIFT;
+       decoded = decode_rx_desc_ptype(ptype);
 
        skb->ip_summed = CHECKSUM_NONE;
 
+       skb_checksum_none_assert(skb);
+
        /* Rx csum enabled and ip headers found? */
        if (!(vsi->netdev->features & NETIF_F_RXCSUM))
                return;
@@ -1440,14 +1342,13 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
         * doesn't make it a hard requirement so if we have validated the
         * inner checksum report CHECKSUM_UNNECESSARY.
         */
-
-       ipv4_tunnel = (rx_ptype >= I40E_RX_PTYPE_GRENAT4_MAC_PAY3) &&
-                    (rx_ptype <= I40E_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4);
-       ipv6_tunnel = (rx_ptype >= I40E_RX_PTYPE_GRENAT6_MAC_PAY3) &&
-                    (rx_ptype <= I40E_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4);
+       if (decoded.inner_prot & (I40E_RX_PTYPE_INNER_PROT_TCP |
+                                 I40E_RX_PTYPE_INNER_PROT_UDP |
+                                 I40E_RX_PTYPE_INNER_PROT_SCTP))
+               tunnel = true;
 
        skb->ip_summed = CHECKSUM_UNNECESSARY;
-       skb->csum_level = ipv4_tunnel || ipv6_tunnel;
+       skb->csum_level = tunnel ? 1 : 0;
 
        return;
 
@@ -1461,7 +1362,7 @@ checksum_fail:
  *
  * Returns a hash type to be used by skb_set_hash
  **/
-static inline enum pkt_hash_types i40e_ptype_to_htype(u8 ptype)
+static inline int i40e_ptype_to_htype(u8 ptype)
 {
        struct i40e_rx_ptype_decoded decoded = decode_rx_desc_ptype(ptype);
 
@@ -1489,7 +1390,7 @@ static inline void i40e_rx_hash(struct i40e_ring *ring,
                                u8 rx_ptype)
 {
        u32 hash;
-       const __le64 rss_mask  =
+       const __le64 rss_mask =
                cpu_to_le64((u64)I40E_RX_DESC_FLTSTAT_RSS_HASH <<
                            I40E_RX_DESC_STATUS_FLTSTAT_SHIFT);
 
@@ -1503,338 +1404,419 @@ static inline void i40e_rx_hash(struct i40e_ring *ring,
 }
 
 /**
- * i40e_clean_rx_irq_ps - Reclaim resources after receive; packet split
- * @rx_ring:  rx ring to clean
- * @budget:   how many cleans we're allowed
+ * i40e_process_skb_fields - Populate skb header fields from Rx descriptor
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being populated
+ * @rx_ptype: the packet type decoded by hardware
  *
- * Returns true if there's any budget left (e.g. the clean is finished)
+ * This function checks the ring, descriptor, and packet information in
+ * order to populate the hash, checksum, VLAN, protocol, and
+ * other fields within the skb.
  **/
-static int i40e_clean_rx_irq_ps(struct i40e_ring *rx_ring, const int budget)
+static inline
+void i40e_process_skb_fields(struct i40e_ring *rx_ring,
+                            union i40e_rx_desc *rx_desc, struct sk_buff *skb,
+                            u8 rx_ptype)
 {
-       unsigned int total_rx_bytes = 0, total_rx_packets = 0;
-       u16 rx_packet_len, rx_header_len, rx_sph, rx_hbo;
-       u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
-       struct i40e_vsi *vsi = rx_ring->vsi;
-       u16 i = rx_ring->next_to_clean;
-       union i40e_rx_desc *rx_desc;
-       u32 rx_error, rx_status;
-       bool failure = false;
-       u8 rx_ptype;
-       u64 qword;
-       u32 copysize;
+       u64 qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+       u32 rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
+                       I40E_RXD_QW1_STATUS_SHIFT;
+       u32 rsyn = (rx_status & I40E_RXD_QW1_STATUS_TSYNINDX_MASK) >>
+                  I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT;
 
-       if (budget <= 0)
-               return 0;
+       if (unlikely(rsyn)) {
+               i40e_ptp_rx_hwtstamp(rx_ring->vsi->back, skb, rsyn);
+               rx_ring->last_rx_timestamp = jiffies;
+       }
 
-       do {
-               struct i40e_rx_buffer *rx_bi;
-               struct sk_buff *skb;
-               u16 vlan_tag;
-               /* return some buffers to hardware, one at a time is too slow */
-               if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
-                       failure = failure ||
-                                 i40e_alloc_rx_buffers_ps(rx_ring,
-                                                          cleaned_count);
-                       cleaned_count = 0;
-               }
+       i40e_rx_hash(rx_ring, rx_desc, skb, rx_ptype);
 
-               i = rx_ring->next_to_clean;
-               rx_desc = I40E_RX_DESC(rx_ring, i);
-               qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
-               rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
-                       I40E_RXD_QW1_STATUS_SHIFT;
+       /* modifies the skb - consumes the enet header */
+       skb->protocol = eth_type_trans(skb, rx_ring->netdev);
 
-               if (!(rx_status & BIT(I40E_RX_DESC_STATUS_DD_SHIFT)))
-                       break;
+       i40e_rx_checksum(rx_ring->vsi, skb, rx_desc);
 
-               /* This memory barrier is needed to keep us from reading
-                * any other fields out of the rx_desc until we know the
-                * DD bit is set.
-                */
-               dma_rmb();
-               /* sync header buffer for reading */
-               dma_sync_single_range_for_cpu(rx_ring->dev,
-                                             rx_ring->rx_bi[0].dma,
-                                             i * rx_ring->rx_hdr_len,
-                                             rx_ring->rx_hdr_len,
-                                             DMA_FROM_DEVICE);
-               if (i40e_rx_is_programming_status(qword)) {
-                       i40e_clean_programming_status(rx_ring, rx_desc);
-                       I40E_RX_INCREMENT(rx_ring, i);
-                       continue;
-               }
-               rx_bi = &rx_ring->rx_bi[i];
-               skb = rx_bi->skb;
-               if (likely(!skb)) {
-                       skb = __netdev_alloc_skb_ip_align(rx_ring->netdev,
-                                                         rx_ring->rx_hdr_len,
-                                                         GFP_ATOMIC |
-                                                         __GFP_NOWARN);
-                       if (!skb) {
-                               rx_ring->rx_stats.alloc_buff_failed++;
-                               failure = true;
-                               break;
-                       }
+       skb_record_rx_queue(skb, rx_ring->queue_index);
+}
 
-                       /* initialize queue mapping */
-                       skb_record_rx_queue(skb, rx_ring->queue_index);
-                       /* we are reusing so sync this buffer for CPU use */
-                       dma_sync_single_range_for_cpu(rx_ring->dev,
-                                                     rx_ring->rx_bi[0].dma,
-                                                     i * rx_ring->rx_hdr_len,
-                                                     rx_ring->rx_hdr_len,
-                                                     DMA_FROM_DEVICE);
-               }
-               rx_packet_len = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
-                               I40E_RXD_QW1_LENGTH_PBUF_SHIFT;
-               rx_header_len = (qword & I40E_RXD_QW1_LENGTH_HBUF_MASK) >>
-                               I40E_RXD_QW1_LENGTH_HBUF_SHIFT;
-               rx_sph = (qword & I40E_RXD_QW1_LENGTH_SPH_MASK) >>
-                        I40E_RXD_QW1_LENGTH_SPH_SHIFT;
-
-               rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >>
-                          I40E_RXD_QW1_ERROR_SHIFT;
-               rx_hbo = rx_error & BIT(I40E_RX_DESC_ERROR_HBO_SHIFT);
-               rx_error &= ~BIT(I40E_RX_DESC_ERROR_HBO_SHIFT);
+/**
+ * i40e_pull_tail - i40e specific version of skb_pull_tail
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being adjusted
+ *
+ * This function is an i40e specific version of __pskb_pull_tail.  The
+ * main difference between this version and the original function is that
+ * this function can make several assumptions about the state of things
+ * that allow for significant optimizations versus the standard function.
+ * As a result we can do things like drop a frag and maintain an accurate
+ * truesize for the skb.
+ */
+static void i40e_pull_tail(struct i40e_ring *rx_ring, struct sk_buff *skb)
+{
+       struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];
+       unsigned char *va;
+       unsigned int pull_len;
 
-               rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
-                          I40E_RXD_QW1_PTYPE_SHIFT;
-               /* sync half-page for reading */
-               dma_sync_single_range_for_cpu(rx_ring->dev,
-                                             rx_bi->page_dma,
-                                             rx_bi->page_offset,
-                                             PAGE_SIZE / 2,
-                                             DMA_FROM_DEVICE);
-               prefetch(page_address(rx_bi->page) + rx_bi->page_offset);
-               rx_bi->skb = NULL;
-               cleaned_count++;
-               copysize = 0;
-               if (rx_hbo || rx_sph) {
-                       int len;
+       /* it is valid to use page_address instead of kmap since we are
+        * working with pages allocated out of the lowmem pool via
+        * dev_alloc_page()
+        */
+       va = skb_frag_address(frag);
 
-                       if (rx_hbo)
-                               len = I40E_RX_HDR_SIZE;
-                       else
-                               len = rx_header_len;
-                       memcpy(__skb_put(skb, len), rx_bi->hdr_buf, len);
-               } else if (skb->len == 0) {
-                       int len;
-                       unsigned char *va = page_address(rx_bi->page) +
-                                           rx_bi->page_offset;
-
-                       len = min(rx_packet_len, rx_ring->rx_hdr_len);
-                       memcpy(__skb_put(skb, len), va, len);
-                       copysize = len;
-                       rx_packet_len -= len;
-               }
-               /* Get the rest of the data if this was a header split */
-               if (rx_packet_len) {
-                       skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
-                                       rx_bi->page,
-                                       rx_bi->page_offset + copysize,
-                                       rx_packet_len, I40E_RXBUFFER_2048);
-
-                       /* If the page count is more than 2, then both halves
-                        * of the page are used and we need to free it. Do it
-                        * here instead of in the alloc code. Otherwise one
-                        * of the half-pages might be released between now and
-                        * then, and we wouldn't know which one to use.
-                        * Don't call get_page and free_page since those are
-                        * both expensive atomic operations that just change
-                        * the refcount in opposite directions. Just give the
-                        * page to the stack; he can have our refcount.
-                        */
-                       if (page_count(rx_bi->page) > 2) {
-                               dma_unmap_page(rx_ring->dev,
-                                              rx_bi->page_dma,
-                                              PAGE_SIZE,
-                                              DMA_FROM_DEVICE);
-                               rx_bi->page = NULL;
-                               rx_bi->page_dma = 0;
-                               rx_ring->rx_stats.realloc_count++;
-                       } else {
-                               get_page(rx_bi->page);
-                               /* switch to the other half-page here; the
-                                * allocation code programs the right addr
-                                * into HW. If we haven't used this half-page,
-                                * the address won't be changed, and HW can
-                                * just use it next time through.
-                                */
-                               rx_bi->page_offset ^= PAGE_SIZE / 2;
-                       }
+       /* pull the packet headers into the linear area; eth_get_headlen()
+        * caps this at I40E_RX_HDR_SIZE, and eth_skb_pad() later pads
+        * runts up to the 60-byte minimum frame length
+        */
+       pull_len = eth_get_headlen(va, I40E_RX_HDR_SIZE);
 
-               }
-               I40E_RX_INCREMENT(rx_ring, i);
+       /* align pull length to size of long to optimize memcpy performance */
+       skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
 
-               if (unlikely(
-                   !(rx_status & BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)))) {
-                       struct i40e_rx_buffer *next_buffer;
+       /* update all of the pointers */
+       skb_frag_size_sub(frag, pull_len);
+       frag->page_offset += pull_len;
+       skb->data_len -= pull_len;
+       skb->tail += pull_len;
+}
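
As a worked example of the pull above (illustration only, not driver code):
for an untagged TCP/IPv4 frame, eth_get_headlen() typically stops after the
transport header, well under the I40E_RX_HDR_SIZE cap:

       /* 14 (ethernet) + 20 (IPv4) + 20 (TCP, no options) = 54 bytes */
       unsigned int pull_len = 54;

       /* the memcpy is rounded up to ALIGN(54, sizeof(long)) == 56 bytes
        * for speed, but the skb/frag accounting moves exactly pull_len,
        * so the payload stays in the page frag with an accurate truesize
        */
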
 
-                       next_buffer = &rx_ring->rx_bi[i];
-                       next_buffer->skb = skb;
-                       rx_ring->rx_stats.non_eop_descs++;
-                       continue;
-               }
+/**
+ * i40e_cleanup_headers - Correct empty headers
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being fixed
+ *
+ * Also address the case where we are pulling data in on pages only
+ * and as such no data is present in the skb header.
+ *
+ * In addition if skb is not at least 60 bytes we need to pad it so that
+ * it is large enough to qualify as a valid Ethernet frame.
+ *
+ * Returns true if an error was encountered and skb was freed.
+ **/
+static bool i40e_cleanup_headers(struct i40e_ring *rx_ring, struct sk_buff *skb)
+{
+       /* place header in linear portion of buffer */
+       if (skb_is_nonlinear(skb))
+               i40e_pull_tail(rx_ring, skb);
 
-               /* ERR_MASK will only have valid bits if EOP set */
-               if (unlikely(rx_error & BIT(I40E_RX_DESC_ERROR_RXE_SHIFT))) {
-                       dev_kfree_skb_any(skb);
-                       continue;
-               }
+       /* if eth_skb_pad returns an error the skb was freed */
+       if (eth_skb_pad(skb))
+               return true;
 
-               i40e_rx_hash(rx_ring, rx_desc, skb, rx_ptype);
+       return false;
+}
 
-               if (unlikely(rx_status & I40E_RXD_QW1_STATUS_TSYNVALID_MASK)) {
-                       i40e_ptp_rx_hwtstamp(vsi->back, skb, (rx_status &
-                                          I40E_RXD_QW1_STATUS_TSYNINDX_MASK) >>
-                                          I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT);
-                       rx_ring->last_rx_timestamp = jiffies;
-               }
+/**
+ * i40e_reuse_rx_page - page flip buffer and store it back on the ring
+ * @rx_ring: rx descriptor ring to store buffers on
+ * @old_buff: donor buffer to have page reused
+ *
+ * Synchronizes page for reuse by the adapter
+ **/
+static void i40e_reuse_rx_page(struct i40e_ring *rx_ring,
+                              struct i40e_rx_buffer *old_buff)
+{
+       struct i40e_rx_buffer *new_buff;
+       u16 nta = rx_ring->next_to_alloc;
 
-               /* probably a little skewed due to removing CRC */
-               total_rx_bytes += skb->len;
-               total_rx_packets++;
+       new_buff = &rx_ring->rx_bi[nta];
 
-               skb->protocol = eth_type_trans(skb, rx_ring->netdev);
+       /* update, and store next to alloc */
+       nta++;
+       rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
 
-               i40e_rx_checksum(vsi, skb, rx_status, rx_error, rx_ptype);
+       /* transfer page from old buffer to new buffer */
+       *new_buff = *old_buff;
+}
 
-               vlan_tag = rx_status & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)
-                        ? le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1)
-                        : 0;
-#ifdef I40E_FCOE
-               if (unlikely(
-                   i40e_rx_is_fcoe(rx_ptype) &&
-                   !i40e_fcoe_handle_offload(rx_ring, rx_desc, skb))) {
-                       dev_kfree_skb_any(skb);
-                       continue;
-               }
+/**
+ * i40e_page_is_reserved - check whether a page must NOT be recycled
+ * @page: page struct to check
+ *
+ * Returns true if the page cannot be reused: it either sits on a remote
+ * NUMA node, or it came from the pfmemalloc emergency reserve, which
+ * dev_alloc_page() can dip into under memory pressure.
+ */
+static inline bool i40e_page_is_reserved(struct page *page)
+{
+       return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
+}
+
+/**
+ * i40e_add_rx_frag - Add contents of Rx buffer to sk_buff
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @rx_buffer: buffer containing page to add
+ * @rx_desc: descriptor containing length of buffer written by hardware
+ * @skb: sk_buff to place the data into
+ *
+ * This function will add the data contained in rx_buffer->page to the skb.
+ * This is done either through a direct copy if the data in the buffer is
+ * less than the skb header size, otherwise it will just attach the page as
+ * a frag to the skb.
+ *
+ * The function will then update the page offset if necessary and return
+ * true if the buffer can be reused by the adapter.
+ **/
+static bool i40e_add_rx_frag(struct i40e_ring *rx_ring,
+                            struct i40e_rx_buffer *rx_buffer,
+                            union i40e_rx_desc *rx_desc,
+                            struct sk_buff *skb)
+{
+       struct page *page = rx_buffer->page;
+       u64 qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+       unsigned int size = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
+                           I40E_RXD_QW1_LENGTH_PBUF_SHIFT;
+#if (PAGE_SIZE < 8192)
+       unsigned int truesize = I40E_RXBUFFER_2048;
+#else
+       unsigned int truesize = ALIGN(size, L1_CACHE_BYTES);
+       unsigned int last_offset = PAGE_SIZE - I40E_RXBUFFER_2048;
 #endif
-               i40e_receive_skb(rx_ring, skb, vlan_tag);
 
-               rx_desc->wb.qword1.status_error_len = 0;
+       /* will the data fit in the skb we allocated? if so, just
+        * copy it as it is pretty small anyway
+        */
+       if ((size <= I40E_RX_HDR_SIZE) && !skb_is_nonlinear(skb)) {
+               unsigned char *va = page_address(page) + rx_buffer->page_offset;
 
-       } while (likely(total_rx_packets < budget));
+               memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
 
-       u64_stats_update_begin(&rx_ring->syncp);
-       rx_ring->stats.packets += total_rx_packets;
-       rx_ring->stats.bytes += total_rx_bytes;
-       u64_stats_update_end(&rx_ring->syncp);
-       rx_ring->q_vector->rx.total_packets += total_rx_packets;
-       rx_ring->q_vector->rx.total_bytes += total_rx_bytes;
+               /* page is not reserved, we can reuse buffer as-is */
+               if (likely(!i40e_page_is_reserved(page)))
+                       return true;
 
-       return failure ? budget : total_rx_packets;
+               /* this page cannot be reused so discard it */
+               __free_pages(page, 0);
+               return false;
+       }
+
+       skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+                       rx_buffer->page_offset, size, truesize);
+
+       /* avoid re-using remote pages */
+       if (unlikely(i40e_page_is_reserved(page)))
+               return false;
+
+#if (PAGE_SIZE < 8192)
+       /* if we are only owner of page we can reuse it */
+       if (unlikely(page_count(page) != 1))
+               return false;
+
+       /* flip page offset to other buffer */
+       rx_buffer->page_offset ^= truesize;
+#else
+       /* move offset up to the next cache line */
+       rx_buffer->page_offset += truesize;
+
+       if (rx_buffer->page_offset > last_offset)
+               return false;
+#endif
+
+       /* Even if we own the page, we are not allowed to use atomic_set()
+        * This would break get_page_unless_zero() users.
+        */
+       get_page(rx_buffer->page);
+
+       return true;
+}
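
On 4K-page systems the reuse path above amounts to flipping between the two
2KB halves of a single page; a toy illustration (hypothetical variable, not
driver code):

       unsigned int page_offset = 0;      /* hardware fills bytes 0..2047 */
       page_offset ^= I40E_RXBUFFER_2048; /* next fill uses 2048..4095 */
       page_offset ^= I40E_RXBUFFER_2048; /* and the one after flips back */
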
+
+/**
+ * i40e_fetch_rx_buffer - Allocate skb and populate it
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @rx_desc: descriptor containing info written by hardware
+ *
+ * This function allocates an skb on the fly, and populates it with the page
+ * data from the current receive descriptor, taking care to set up the skb
+ * correctly, as well as handling calling the page recycle function if
+ * necessary.
+ */
+static inline
+struct sk_buff *i40e_fetch_rx_buffer(struct i40e_ring *rx_ring,
+                                    union i40e_rx_desc *rx_desc)
+{
+       struct i40e_rx_buffer *rx_buffer;
+       struct sk_buff *skb;
+       struct page *page;
+
+       rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean];
+       page = rx_buffer->page;
+       prefetchw(page);
+
+       skb = rx_buffer->skb;
+
+       if (likely(!skb)) {
+               void *page_addr = page_address(page) + rx_buffer->page_offset;
+
+               /* prefetch first cache line of first page */
+               prefetch(page_addr);
+#if L1_CACHE_BYTES < 128
+               prefetch(page_addr + L1_CACHE_BYTES);
+#endif
+
+               /* allocate a skb to store the frags */
+               skb = __napi_alloc_skb(&rx_ring->q_vector->napi,
+                                      I40E_RX_HDR_SIZE,
+                                      GFP_ATOMIC | __GFP_NOWARN);
+               if (unlikely(!skb)) {
+                       rx_ring->rx_stats.alloc_buff_failed++;
+                       return NULL;
+               }
+
+               /* we will be copying header into skb->data in
+                * pskb_may_pull so it is in our interest to prefetch
+                * it now to avoid a possible cache miss
+                */
+               prefetchw(skb->data);
+       } else {
+               rx_buffer->skb = NULL;
+       }
+
+       /* we are reusing so sync this buffer for CPU use */
+       dma_sync_single_range_for_cpu(rx_ring->dev,
+                                     rx_buffer->dma,
+                                     rx_buffer->page_offset,
+                                     I40E_RXBUFFER_2048,
+                                     DMA_FROM_DEVICE);
+
+       /* pull page into skb */
+       if (i40e_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) {
+               /* hand second half of page back to the ring */
+               i40e_reuse_rx_page(rx_ring, rx_buffer);
+               rx_ring->rx_stats.page_reuse_count++;
+       } else {
+               /* we are not reusing the buffer so unmap it */
+               dma_unmap_page(rx_ring->dev, rx_buffer->dma, PAGE_SIZE,
+                              DMA_FROM_DEVICE);
+       }
+
+       /* clear contents of buffer_info */
+       rx_buffer->page = NULL;
+
+       return skb;
+}
+
+/**
+ * i40e_is_non_eop - process handling of non-EOP buffers
+ * @rx_ring: Rx ring being processed
+ * @rx_desc: Rx descriptor for current buffer
+ * @skb: Current socket buffer containing buffer in progress
+ *
+ * This function updates next to clean.  If the buffer is an EOP buffer
+ * this function exits returning false, otherwise it will place the
+ * sk_buff in the next buffer to be chained and return true indicating
+ * that this is in fact a non-EOP buffer.
+ **/
+static bool i40e_is_non_eop(struct i40e_ring *rx_ring,
+                           union i40e_rx_desc *rx_desc,
+                           struct sk_buff *skb)
+{
+       u32 ntc = rx_ring->next_to_clean + 1;
+
+       /* fetch, update, and store next to clean */
+       ntc = (ntc < rx_ring->count) ? ntc : 0;
+       rx_ring->next_to_clean = ntc;
+
+       prefetch(I40E_RX_DESC(rx_ring, ntc));
+
+#define staterrlen rx_desc->wb.qword1.status_error_len
+       if (unlikely(i40e_rx_is_programming_status(le64_to_cpu(staterrlen)))) {
+               i40e_clean_programming_status(rx_ring, rx_desc);
+               rx_ring->rx_bi[ntc].skb = skb;
+               return true;
+       }
+       /* if we are the last buffer then there is nothing else to do */
+#define I40E_RXD_EOF BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)
+       if (likely(i40e_test_staterr(rx_desc, I40E_RXD_EOF)))
+               return false;
+
+       /* place skb in next buffer to be received */
+       rx_ring->rx_bi[ntc].skb = skb;
+       rx_ring->rx_stats.non_eop_descs++;
+
+       return true;
 }
 
 /**
- * i40e_clean_rx_irq_1buf - Reclaim resources after receive; single buffer
- * @rx_ring:  rx ring to clean
- * @budget:   how many cleans we're allowed
+ * i40e_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @budget: Total limit on number of packets to process
+ *
+ * This function provides a "bounce buffer" approach to Rx interrupt
+ * processing.  The advantage to this is that on systems that have
+ * expensive overhead for IOMMU access this provides a means of avoiding
+ * it by maintaining the mapping of the page to the system.
  *
- * Returns number of packets cleaned
+ * Returns amount of work completed
  **/
-static int i40e_clean_rx_irq_1buf(struct i40e_ring *rx_ring, int budget)
+static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 {
        unsigned int total_rx_bytes = 0, total_rx_packets = 0;
        u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
-       struct i40e_vsi *vsi = rx_ring->vsi;
-       union i40e_rx_desc *rx_desc;
-       u32 rx_error, rx_status;
-       u16 rx_packet_len;
        bool failure = false;
-       u8 rx_ptype;
-       u64 qword;
-       u16 i;
 
-       do {
-               struct i40e_rx_buffer *rx_bi;
+       while (likely(total_rx_packets < budget)) {
+               union i40e_rx_desc *rx_desc;
                struct sk_buff *skb;
+               u32 rx_status;
                u16 vlan_tag;
+               u8 rx_ptype;
+               u64 qword;
+
                /* return some buffers to hardware, one at a time is too slow */
                if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
                        failure = failure ||
-                                 i40e_alloc_rx_buffers_1buf(rx_ring,
-                                                            cleaned_count);
+                                 i40e_alloc_rx_buffers(rx_ring, cleaned_count);
                        cleaned_count = 0;
                }
 
-               i = rx_ring->next_to_clean;
-               rx_desc = I40E_RX_DESC(rx_ring, i);
+               rx_desc = I40E_RX_DESC(rx_ring, rx_ring->next_to_clean);
+
                qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+               rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
+                          I40E_RXD_QW1_PTYPE_SHIFT;
                rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
-                       I40E_RXD_QW1_STATUS_SHIFT;
+                           I40E_RXD_QW1_STATUS_SHIFT;
 
                if (!(rx_status & BIT(I40E_RX_DESC_STATUS_DD_SHIFT)))
                        break;
 
+               /* status_error_len will always be zero for unused descriptors
+                * because it's cleared in cleanup, and overlaps with hdr_addr
+                * which is always zero because packet split isn't used, if the
+                * hardware wrote DD then it will be non-zero
+                */
+               if (!rx_desc->wb.qword1.status_error_len)
+                       break;
+
                /* This memory barrier is needed to keep us from reading
                 * any other fields out of the rx_desc until we know the
                 * DD bit is set.
                 */
                dma_rmb();
 
-               if (i40e_rx_is_programming_status(qword)) {
-                       i40e_clean_programming_status(rx_ring, rx_desc);
-                       I40E_RX_INCREMENT(rx_ring, i);
-                       continue;
-               }
-               rx_bi = &rx_ring->rx_bi[i];
-               skb = rx_bi->skb;
-               prefetch(skb->data);
-
-               rx_packet_len = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
-                               I40E_RXD_QW1_LENGTH_PBUF_SHIFT;
-
-               rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >>
-                          I40E_RXD_QW1_ERROR_SHIFT;
-               rx_error &= ~BIT(I40E_RX_DESC_ERROR_HBO_SHIFT);
+               skb = i40e_fetch_rx_buffer(rx_ring, rx_desc);
+               if (!skb)
+                       break;
 
-               rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
-                          I40E_RXD_QW1_PTYPE_SHIFT;
-               rx_bi->skb = NULL;
                cleaned_count++;
 
-               /* Get the header and possibly the whole packet
-                * If this is an skb from previous receive dma will be 0
-                */
-               skb_put(skb, rx_packet_len);
-               dma_unmap_single(rx_ring->dev, rx_bi->dma, rx_ring->rx_buf_len,
-                                DMA_FROM_DEVICE);
-               rx_bi->dma = 0;
-
-               I40E_RX_INCREMENT(rx_ring, i);
-
-               if (unlikely(
-                   !(rx_status & BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)))) {
-                       rx_ring->rx_stats.non_eop_descs++;
+               if (i40e_is_non_eop(rx_ring, rx_desc, skb))
                        continue;
-               }
 
-               /* ERR_MASK will only have valid bits if EOP set */
-               if (unlikely(rx_error & BIT(I40E_RX_DESC_ERROR_RXE_SHIFT))) {
+               /* ERR_MASK will only have valid bits if EOP set, and
+                * what we are doing here is actually checking
+                * I40E_RX_DESC_ERROR_RXE_SHIFT, since it is the zeroth bit in
+                * the error field
+                */
+               if (unlikely(i40e_test_staterr(rx_desc, BIT(I40E_RXD_QW1_ERROR_SHIFT)))) {
                        dev_kfree_skb_any(skb);
                        continue;
                }
 
-               i40e_rx_hash(rx_ring, rx_desc, skb, rx_ptype);
-               if (unlikely(rx_status & I40E_RXD_QW1_STATUS_TSYNVALID_MASK)) {
-                       i40e_ptp_rx_hwtstamp(vsi->back, skb, (rx_status &
-                                          I40E_RXD_QW1_STATUS_TSYNINDX_MASK) >>
-                                          I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT);
-                       rx_ring->last_rx_timestamp = jiffies;
-               }
+               if (i40e_cleanup_headers(rx_ring, skb))
+                       continue;
 
                /* probably a little skewed due to removing CRC */
                total_rx_bytes += skb->len;
-               total_rx_packets++;
-
-               skb->protocol = eth_type_trans(skb, rx_ring->netdev);
 
-               i40e_rx_checksum(vsi, skb, rx_status, rx_error, rx_ptype);
+               /* populate checksum, VLAN, and protocol */
+               i40e_process_skb_fields(rx_ring, rx_desc, skb, rx_ptype);
 
-               vlan_tag = rx_status & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)
-                        ? le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1)
-                        : 0;
 #ifdef I40E_FCOE
                if (unlikely(
                    i40e_rx_is_fcoe(rx_ptype) &&
@@ -1843,10 +1825,15 @@ static int i40e_clean_rx_irq_1buf(struct i40e_ring *rx_ring, int budget)
                        continue;
                }
 #endif
+
+               vlan_tag = (qword & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
+                          le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1) : 0;
+
                i40e_receive_skb(rx_ring, skb, vlan_tag);
 
-               rx_desc->wb.qword1.status_error_len = 0;
-       } while (likely(total_rx_packets < budget));
+               /* update budget accounting */
+               total_rx_packets++;
+       }
 
        u64_stats_update_begin(&rx_ring->syncp);
        rx_ring->stats.packets += total_rx_packets;
@@ -1855,6 +1842,7 @@ static int i40e_clean_rx_irq_1buf(struct i40e_ring *rx_ring, int budget)
        rx_ring->q_vector->rx.total_packets += total_rx_packets;
        rx_ring->q_vector->rx.total_bytes += total_rx_bytes;
 
+       /* guarantee a trip back through this routine if there was a failure */
        return failure ? budget : total_rx_packets;
 }
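
Taken together, the hunks above collapse the old packet-split and
single-buffer paths into one page-recycling routine; in outline (a sketch of
the code above, not a verbatim excerpt):

       /* while (total_rx_packets < budget) {
        *      refill once I40E_RX_BUFFER_WRITE buffers are consumed;
        *      stop when the DD bit is not yet set;
        *      skb = i40e_fetch_rx_buffer();    copy small frames, else
        *                                       attach the half page as a frag
        *      if (i40e_is_non_eop()) continue; chain multi-descriptor frames
        *      drop the frame on RXE errors;
        *      i40e_cleanup_headers();          pull headers, pad runts
        *      i40e_process_skb_fields();       hash, csum, timestamp, protocol
        *      i40e_receive_skb();              VLAN tag + GRO up the stack
        * }
        */
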
 
@@ -1999,12 +1987,7 @@ int i40e_napi_poll(struct napi_struct *napi, int budget)
        budget_per_ring = max(budget/q_vector->num_ringpairs, 1);
 
        i40e_for_each_ring(ring, q_vector->rx) {
-               int cleaned;
-
-               if (ring_is_ps_enabled(ring))
-                       cleaned = i40e_clean_rx_irq_ps(ring, budget_per_ring);
-               else
-                       cleaned = i40e_clean_rx_irq_1buf(ring, budget_per_ring);
+               int cleaned = i40e_clean_rx_irq(ring, budget_per_ring);
 
                work_done += cleaned;
                /* if we clean as many as budgeted, we must not be done */
@@ -2299,9 +2282,16 @@ static int i40e_tso(struct sk_buff *skb, u8 *hdr_len, u64 *cd_type_cmd_tso_mss)
                ip.v6->payload_len = 0;
        }
 
-       if (skb_shinfo(skb)->gso_type & (SKB_GSO_UDP_TUNNEL | SKB_GSO_GRE |
+       if (skb_shinfo(skb)->gso_type & (SKB_GSO_GRE |
+                                        SKB_GSO_GRE_CSUM |
+                                        SKB_GSO_IPIP |
+                                        SKB_GSO_SIT |
+                                        SKB_GSO_UDP_TUNNEL |
                                         SKB_GSO_UDP_TUNNEL_CSUM)) {
-               if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM) {
+               if (!(skb_shinfo(skb)->gso_type & SKB_GSO_PARTIAL) &&
+                   (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM)) {
+                       l4.udp->len = 0;
+
                        /* determine offset of outer transport header */
                        l4_offset = l4.hdr - skb->data;
 
@@ -2442,13 +2432,6 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
                                                 &l4_proto, &frag_off);
                }
 
-               /* compute outer L3 header size */
-               tunnel |= ((l4.hdr - ip.hdr) / 4) <<
-                         I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT;
-
-               /* switch IP header pointer from outer to inner header */
-               ip.hdr = skb_inner_network_header(skb);
-
                /* define outer transport */
                switch (l4_proto) {
                case IPPROTO_UDP:
@@ -2459,6 +2442,11 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
                        tunnel |= I40E_TXD_CTX_GRE_TUNNELING;
                        *tx_flags |= I40E_TX_FLAGS_UDP_TUNNEL;
                        break;
+               case IPPROTO_IPIP:
+               case IPPROTO_IPV6:
+                       *tx_flags |= I40E_TX_FLAGS_UDP_TUNNEL;
+                       l4.hdr = skb_inner_network_header(skb);
+                       break;
                default:
                        if (*tx_flags & I40E_TX_FLAGS_TSO)
                                return -1;
@@ -2467,12 +2455,20 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
                        return 0;
                }
 
+               /* compute outer L3 header size */
+               tunnel |= ((l4.hdr - ip.hdr) / 4) <<
+                         I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT;
+
+               /* switch IP header pointer from outer to inner header */
+               ip.hdr = skb_inner_network_header(skb);
+
                /* compute tunnel header size */
                tunnel |= ((ip.hdr - l4.hdr) / 2) <<
                          I40E_TXD_CTX_QW0_NATLEN_SHIFT;
 
                /* indicate if we need to offload outer UDP header */
                if ((*tx_flags & I40E_TX_FLAGS_TSO) &&
+                   !(skb_shinfo(skb)->gso_type & SKB_GSO_PARTIAL) &&
                    (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM))
                        tunnel |= I40E_TXD_CTX_QW0_L4T_CS_MASK;
 
index 6b2b191..b78c810 100644
@@ -102,8 +102,8 @@ enum i40e_dyn_idx_t {
        (((pf)->flags & I40E_FLAG_MULTIPLE_TCP_UDP_RSS_PCTYPE) ? \
          I40E_DEFAULT_RSS_HENA_EXPANDED : I40E_DEFAULT_RSS_HENA)
 
-/* Supported Rx Buffer Sizes */
-#define I40E_RXBUFFER_512   512    /* Used for packet split */
+/* Supported Rx Buffer Sizes (a multiple of 128) */
+#define I40E_RXBUFFER_256   256
 #define I40E_RXBUFFER_2048  2048
 #define I40E_RXBUFFER_3072  3072   /* For FCoE MTU of 2158 */
 #define I40E_RXBUFFER_4096  4096
@@ -114,9 +114,28 @@ enum i40e_dyn_idx_t {
  * reserve 2 more, and skb_shared_info adds an additional 384 bytes more,
  * this adds up to 512 bytes of extra data meaning the smallest allocation
  * we could have is 1K.
- * i.e. RXBUFFER_512 --> size-1024 slab
+ * i.e. RXBUFFER_256 --> 960 byte skb (size-1024 slab)
+ * i.e. RXBUFFER_512 --> 1216 byte skb (size-2048 slab)
  */
-#define I40E_RX_HDR_SIZE  I40E_RXBUFFER_512
+#define I40E_RX_HDR_SIZE I40E_RXBUFFER_256
+#define i40e_rx_desc i40e_32byte_rx_desc
+
+/**
+ * i40e_test_staterr - tests bits in Rx descriptor status and error fields
+ * @rx_desc: pointer to receive descriptor (in le64 format)
+ * @stat_err_bits: value to mask
+ *
+ * This function does some fast chicanery in order to return the
+ * value of the mask which is really only used for boolean tests.
+ * The status_error_len doesn't need to be shifted because it begins
+ * at offset zero.
+ */
+static inline bool i40e_test_staterr(union i40e_rx_desc *rx_desc,
+                                    const u64 stat_err_bits)
+{
+       return !!(rx_desc->wb.qword1.status_error_len &
+                 cpu_to_le64(stat_err_bits));
+}
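
Usage mirrors the Rx clean path earlier in this patch: because the status
field begins at bit 0 of qword1, callers pass an unshifted bit mask, e.g.:

       /* EOF test from i40e_is_non_eop(), no shifting required */
       if (i40e_test_staterr(rx_desc, BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)))
               return false;   /* last descriptor of the frame */
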
 
 /* How many Rx Buffers do we bundle into one write to the hardware ? */
 #define I40E_RX_BUFFER_WRITE   16      /* Must be power of 2 */
@@ -142,8 +161,6 @@ enum i40e_dyn_idx_t {
                prefetch((n));                          \
        } while (0)
 
-#define i40e_rx_desc i40e_32byte_rx_desc
-
 #define I40E_MAX_BUFFER_TXD    8
 #define I40E_MIN_TX_LEN                17
 
@@ -213,10 +230,8 @@ struct i40e_tx_buffer {
 
 struct i40e_rx_buffer {
        struct sk_buff *skb;
-       void *hdr_buf;
        dma_addr_t dma;
        struct page *page;
-       dma_addr_t page_dma;
        unsigned int page_offset;
 };
 
@@ -245,22 +260,18 @@ struct i40e_rx_queue_stats {
 enum i40e_ring_state_t {
        __I40E_TX_FDIR_INIT_DONE,
        __I40E_TX_XPS_INIT_DONE,
-       __I40E_RX_PS_ENABLED,
-       __I40E_RX_16BYTE_DESC_ENABLED,
 };
 
-#define ring_is_ps_enabled(ring) \
-       test_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
-#define set_ring_ps_enabled(ring) \
-       set_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
-#define clear_ring_ps_enabled(ring) \
-       clear_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
-#define ring_is_16byte_desc_enabled(ring) \
-       test_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
-#define set_ring_16byte_desc_enabled(ring) \
-       set_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
-#define clear_ring_16byte_desc_enabled(ring) \
-       clear_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
+/* some useful defines for virtchannel interface, which
+ * is the only remaining user of header split
+ */
+#define I40E_RX_DTYPE_NO_SPLIT      0
+#define I40E_RX_DTYPE_HEADER_SPLIT  1
+#define I40E_RX_DTYPE_SPLIT_ALWAYS  2
+#define I40E_RX_SPLIT_L2      0x1
+#define I40E_RX_SPLIT_IP      0x2
+#define I40E_RX_SPLIT_TCP_UDP 0x4
+#define I40E_RX_SPLIT_SCTP    0x8
 
 /* struct that defines a descriptor ring, associated with a VSI */
 struct i40e_ring {
@@ -287,16 +298,7 @@ struct i40e_ring {
 
        u16 count;                      /* Number of descriptors */
        u16 reg_idx;                    /* HW register index of the ring */
-       u16 rx_hdr_len;
        u16 rx_buf_len;
-       u8  dtype;
-#define I40E_RX_DTYPE_NO_SPLIT      0
-#define I40E_RX_DTYPE_HEADER_SPLIT  1
-#define I40E_RX_DTYPE_SPLIT_ALWAYS  2
-#define I40E_RX_SPLIT_L2      0x1
-#define I40E_RX_SPLIT_IP      0x2
-#define I40E_RX_SPLIT_TCP_UDP 0x4
-#define I40E_RX_SPLIT_SCTP    0x8
 
        /* used in interrupt processing */
        u16 next_to_use;
@@ -330,6 +332,7 @@ struct i40e_ring {
        struct i40e_q_vector *q_vector; /* Backreference to associated vector */
 
        struct rcu_head rcu;            /* to avoid race on free */
+       u16 next_to_alloc;
 } ____cacheline_internodealigned_in_smp;
 
 enum i40e_latency_range {
@@ -353,9 +356,7 @@ struct i40e_ring_container {
 #define i40e_for_each_ring(pos, head) \
        for (pos = (head).ring; pos != NULL; pos = pos->next)
 
-bool i40e_alloc_rx_buffers_ps(struct i40e_ring *rxr, u16 cleaned_count);
-bool i40e_alloc_rx_buffers_1buf(struct i40e_ring *rxr, u16 cleaned_count);
-void i40e_alloc_rx_headers(struct i40e_ring *rxr);
+bool i40e_alloc_rx_buffers(struct i40e_ring *rxr, u16 cleaned_count);
 netdev_tx_t i40e_lan_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
 void i40e_clean_tx_ring(struct i40e_ring *tx_ring);
 void i40e_clean_rx_ring(struct i40e_ring *rx_ring);
index 793036b..bd5f13b 100644
@@ -36,7 +36,7 @@
 #include "i40e_devids.h"
 
 /* I40E_MASK is a macro used on 32 bit registers */
-#define I40E_MASK(mask, shift) (mask << shift)
+#define I40E_MASK(mask, shift) ((u32)(mask) << (shift))
 
 #define I40E_MAX_VSI_QP                        16
 #define I40E_MAX_VF_VSI                        3
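
The added cast and parentheses above are functional, not cosmetic; an
illustration of the two bugs the old expansion allowed (the OLD_/NEW_ names
are hypothetical, for comparison only):

       #define OLD_I40E_MASK(mask, shift) (mask << shift)
       #define NEW_I40E_MASK(mask, shift) ((u32)(mask) << (shift))

       /* precedence: '<<' binds tighter than '|', so a compound mask breaks:
        *   OLD_I40E_MASK(0x1 | 0x2, 4) == 0x1 | (0x2 << 4) == 0x21 (wrong)
        *   NEW_I40E_MASK(0x1 | 0x2, 4) == 0x30                 (intended)
        * signedness: 1 << 31 overflows a signed int (undefined behavior);
        * the u32 cast makes NEW_I40E_MASK(0x1, 31) well defined.
        */
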
@@ -275,6 +275,11 @@ struct i40e_hw_capabilities {
 #define I40E_FLEX10_STATUS_DCC_ERROR   0x1
 #define I40E_FLEX10_STATUS_VC_MODE     0x2
 
+       bool sec_rev_disabled;
+       bool update_disabled;
+#define I40E_NVM_MGMT_SEC_REV_DISABLED 0x1
+#define I40E_NVM_MGMT_UPDATE_DISABLED  0x2
+
        bool mgmt_cem;
        bool ieee_1588;
        bool iwarp;
@@ -550,6 +555,7 @@ struct i40e_hw {
        struct i40e_aq_desc nvm_wb_desc;
        struct i40e_virt_mem nvm_buff;
        bool nvm_release_on_done;
+       u16 nvm_wait_opcode;
 
        /* HMC info */
        struct i40e_hmc_info hmc; /* HMC info struct */
index 30f8cbe..a9b04e7 100644
@@ -48,7 +48,7 @@ static void i40e_vc_vf_broadcast(struct i40e_pf *pf,
        int i;
 
        for (i = 0; i < pf->num_alloc_vfs; i++, vf++) {
-               int abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
+               int abs_vf_id = vf->vf_id + (int)hw->func_caps.vf_base_id;
                /* Not all vfs are enabled so skip the ones that are not */
                if (!test_bit(I40E_VF_STAT_INIT, &vf->vf_states) &&
                    !test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states))
@@ -74,7 +74,7 @@ static void i40e_vc_notify_vf_link_state(struct i40e_vf *vf)
        struct i40e_pf *pf = vf->pf;
        struct i40e_hw *hw = &pf->hw;
        struct i40e_link_status *ls = &pf->hw.phy.link_info;
-       int abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
+       int abs_vf_id = vf->vf_id + (int)hw->func_caps.vf_base_id;
 
        pfe.event = I40E_VIRTCHNL_EVENT_LINK_CHANGE;
        pfe.severity = I40E_PF_EVENT_SEVERITY_INFO;
@@ -141,7 +141,7 @@ void i40e_vc_notify_vf_reset(struct i40e_vf *vf)
            !test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states))
                return;
 
-       abs_vf_id = vf->vf_id + vf->pf->hw.func_caps.vf_base_id;
+       abs_vf_id = vf->vf_id + (int)vf->pf->hw.func_caps.vf_base_id;
 
        pfe.event = I40E_VIRTCHNL_EVENT_RESET_IMPENDING;
        pfe.severity = I40E_PF_EVENT_SEVERITY_CERTAIN_DOOM;
@@ -590,7 +590,7 @@ static int i40e_config_vsi_rx_queue(struct i40e_vf *vf, u16 vsi_id,
                }
                rx_ctx.hbuff = info->hdr_size >> I40E_RXQ_CTX_HBUFF_SHIFT;
 
-               /* set splitalways mode 10b */
+               /* set split mode 10b */
                rx_ctx.dtype = I40E_RX_DTYPE_HEADER_SPLIT;
        }
 
@@ -860,7 +860,11 @@ static int i40e_alloc_vf_res(struct i40e_vf *vf)
        if (ret)
                goto error_alloc;
        total_queue_pairs += pf->vsi[vf->lan_vsi_idx]->alloc_queue_pairs;
-       set_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps);
+
+       if (vf->trusted)
+               set_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps);
+       else
+               clear_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps);
 
        /* store the total qps number for the runtime
         * VF req validation
@@ -1348,12 +1352,16 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
                set_bit(I40E_VF_STAT_IWARPENA, &vf->vf_states);
        }
 
-       if (pf->flags & I40E_FLAG_RSS_AQ_CAPABLE) {
-               if (vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RSS_AQ)
-                       vfres->vf_offload_flags |=
-                               I40E_VIRTCHNL_VF_OFFLOAD_RSS_AQ;
+       if (vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+               vfres->vf_offload_flags |= I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF;
        } else {
-               vfres->vf_offload_flags |= I40E_VIRTCHNL_VF_OFFLOAD_RSS_REG;
+               if ((pf->flags & I40E_FLAG_RSS_AQ_CAPABLE) &&
+                   (vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RSS_AQ))
+                       vfres->vf_offload_flags |=
+                                       I40E_VIRTCHNL_VF_OFFLOAD_RSS_AQ;
+               else
+                       vfres->vf_offload_flags |=
+                                       I40E_VIRTCHNL_VF_OFFLOAD_RSS_REG;
        }
 
        if (pf->flags & I40E_FLAG_MULTIPLE_TCP_UDP_RSS_PCTYPE) {
@@ -1382,6 +1390,9 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
        vfres->num_vsis = num_vsis;
        vfres->num_queue_pairs = vf->num_queue_pairs;
        vfres->max_vectors = pf->hw.func_caps.num_msix_vectors_vf;
+       vfres->rss_key_size = I40E_HKEY_ARRAY_SIZE;
+       vfres->rss_lut_size = I40E_VF_HLUT_ARRAY_SIZE;
+
        if (vf->lan_vsi_idx) {
                vfres->vsi_res[0].vsi_id = vf->lan_vsi_id;
                vfres->vsi_res[0].vsi_type = I40E_VSI_SRIOV;
@@ -1419,6 +1430,25 @@ static void i40e_vc_reset_vf_msg(struct i40e_vf *vf)
                i40e_reset_vf(vf, false);
 }
 
+/**
+ * i40e_getnum_vf_vsi_vlan_filters
+ * @vsi: pointer to the vsi
+ *
+ * called to get the number of VLANs offloaded on this VF
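+ * (walks vsi->mac_filter_list; callers are assumed to serialize against
+ * concurrent filter-list changes)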
+ **/
+static inline int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
+{
+       struct i40e_mac_filter *f;
+       int num_vlans = 0;
+
+       list_for_each_entry(f, &vsi->mac_filter_list, list) {
+               if (f->vlan >= 0 && f->vlan <= I40E_MAX_VLANID)
+                       num_vlans++;
+       }
+
+       return num_vlans;
+}
+
 /**
  * i40e_vc_config_promiscuous_mode_msg
  * @vf: pointer to the VF info
@@ -1435,22 +1465,123 @@ static int i40e_vc_config_promiscuous_mode_msg(struct i40e_vf *vf,
            (struct i40e_virtchnl_promisc_info *)msg;
        struct i40e_pf *pf = vf->pf;
        struct i40e_hw *hw = &pf->hw;
-       struct i40e_vsi *vsi;
+       struct i40e_mac_filter *f;
+       i40e_status aq_ret = 0;
        bool allmulti = false;
-       i40e_status aq_ret;
+       struct i40e_vsi *vsi;
+       bool alluni = false;
+       int aq_err = 0;
 
        vsi = i40e_find_vsi_from_id(pf, info->vsi_id);
        if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states) ||
            !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) ||
-           !i40e_vc_isvalid_vsi_id(vf, info->vsi_id) ||
-           (vsi->type != I40E_VSI_FCOE)) {
+           !i40e_vc_isvalid_vsi_id(vf, info->vsi_id)) {
+               dev_err(&pf->pdev->dev,
+                       "VF %d doesn't meet requirements to enter promiscuous mode\n",
+                       vf->vf_id);
                aq_ret = I40E_ERR_PARAM;
                goto error_param;
        }
+       /* Multicast promiscuous handling */
        if (info->flags & I40E_FLAG_VF_MULTICAST_PROMISC)
                allmulti = true;
-       aq_ret = i40e_aq_set_vsi_multicast_promiscuous(hw, vsi->seid,
-                                                      allmulti, NULL);
+
+       if (vf->port_vlan_id) {
+               aq_ret = i40e_aq_set_vsi_mc_promisc_on_vlan(hw, vsi->seid,
+                                                           allmulti,
+                                                           vf->port_vlan_id,
+                                                           NULL);
+       } else if (i40e_getnum_vf_vsi_vlan_filters(vsi)) {
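+               /* apply the setting on each VLAN the VF has a filter for */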
+               list_for_each_entry(f, &vsi->mac_filter_list, list) {
+                       if (f->vlan < 0 || f->vlan > I40E_MAX_VLANID)
+                               continue;
+                       aq_ret = i40e_aq_set_vsi_mc_promisc_on_vlan(hw,
+                                                                   vsi->seid,
+                                                                   allmulti,
+                                                                   f->vlan,
+                                                                   NULL);
+                       aq_err = pf->hw.aq.asq_last_status;
+                       if (aq_ret) {
+                               dev_err(&pf->pdev->dev,
+                                       "Could not add VLAN %d to multicast promiscuous domain err %s aq_err %s\n",
+                                       f->vlan,
+                                       i40e_stat_str(&pf->hw, aq_ret),
+                                       i40e_aq_str(&pf->hw, aq_err));
+                               break;
+                       }
+               }
+       } else {
+               aq_ret = i40e_aq_set_vsi_multicast_promiscuous(hw, vsi->seid,
+                                                              allmulti, NULL);
+               aq_err = pf->hw.aq.asq_last_status;
+               if (aq_ret) {
+                       dev_err(&pf->pdev->dev,
+                               "VF %d failed to set multicast promiscuous mode err %s aq_err %s\n",
+                               vf->vf_id,
+                               i40e_stat_str(&pf->hw, aq_ret),
+                               i40e_aq_str(&pf->hw, aq_err));
+                       goto error_param_int;
+               }
+       }
+
+       if (!aq_ret) {
+               dev_info(&pf->pdev->dev,
+                        "VF %d successfully set multicast promiscuous mode\n",
+                        vf->vf_id);
+               if (allmulti)
+                       set_bit(I40E_VF_STAT_MC_PROMISC, &vf->vf_states);
+               else
+                       clear_bit(I40E_VF_STAT_MC_PROMISC, &vf->vf_states);
+       }
+
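+       /* Unicast promiscuous handling */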
+       if (info->flags & I40E_FLAG_VF_UNICAST_PROMISC)
+               alluni = true;
+       if (vf->port_vlan_id) {
+               aq_ret = i40e_aq_set_vsi_uc_promisc_on_vlan(hw, vsi->seid,
+                                                           alluni,
+                                                           vf->port_vlan_id,
+                                                           NULL);
+       } else if (i40e_getnum_vf_vsi_vlan_filters(vsi)) {
+               list_for_each_entry(f, &vsi->mac_filter_list, list) {
+                       aq_ret = 0;
+                       if (f->vlan >= 0 && f->vlan <= I40E_MAX_VLANID) {
+                               aq_ret =
+                               i40e_aq_set_vsi_uc_promisc_on_vlan(hw,
+                                                                  vsi->seid,
+                                                                  alluni,
+                                                                  f->vlan,
+                                                                  NULL);
+                               aq_err = pf->hw.aq.asq_last_status;
+                       }
+                       if (aq_ret)
+                               dev_err(&pf->pdev->dev,
+                                       "Could not add VLAN %d to Unicast promiscuous domain err %s aq_err %s\n",
+                                       f->vlan,
+                                       i40e_stat_str(&pf->hw, aq_ret),
+                                       i40e_aq_str(&pf->hw, aq_err));
+               }
+       } else {
+               aq_ret = i40e_aq_set_vsi_unicast_promiscuous(hw, vsi->seid,
+                                                            alluni, NULL);
+               aq_err = pf->hw.aq.asq_last_status;
+               if (aq_ret)
+                       dev_err(&pf->pdev->dev,
+                               "VF %d failed to set unicast promiscuous mode %8.8x err %s aq_err %s\n",
+                               vf->vf_id, info->flags,
+                               i40e_stat_str(&pf->hw, aq_ret),
+                               i40e_aq_str(&pf->hw, aq_err));
+       }
+
+error_param_int:
+       if (!aq_ret) {
+               dev_info(&pf->pdev->dev,
+                        "VF %d successfully set unicast promiscuous mode\n",
+                        vf->vf_id);
+               if (alluni)
+                       set_bit(I40E_VF_STAT_UC_PROMISC, &vf->vf_states);
+               else
+                       clear_bit(I40E_VF_STAT_UC_PROMISC, &vf->vf_states);
+       }
 
 error_param:
        /* send the response to the VF */
@@ -1701,6 +1832,10 @@ error_param:
                                      (u8 *)&stats, sizeof(stats));
 }
 
+/* If the VF is not trusted, restrict the number of MAC/VLAN filters it can program */
+#define I40E_VC_MAX_MAC_ADDR_PER_VF 8
+#define I40E_VC_MAX_VLAN_PER_VF 8
+
 /**
  * i40e_check_vf_permission
  * @vf: pointer to the VF info
@@ -1721,15 +1856,22 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf, u8 *macaddr)
                dev_err(&pf->pdev->dev, "invalid VF MAC addr %pM\n", macaddr);
                ret = I40E_ERR_INVALID_MAC_ADDR;
        } else if (vf->pf_set_mac && !is_multicast_ether_addr(macaddr) &&
+                  !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) &&
                   !ether_addr_equal(macaddr, vf->default_lan_addr.addr)) {
                /* If the host VMM administrator has set the VF MAC address
                 * administratively via the ndo_set_vf_mac command then deny
                 * permission to the VF to add or delete unicast MAC addresses.
+                * Unless the VF is privileged, in which case it may change it.
                 * The VF may request to set the MAC address filter already
                 * assigned to it so do not return an error in that case.
                 */
                dev_err(&pf->pdev->dev,
-                       "VF attempting to override administratively set MAC address\nPlease reload the VF driver to resume normal operation\n");
+                       "VF attempting to override administratively set MAC address, reload the VF driver to resume normal operation\n");
+               ret = -EPERM;
+       } else if ((vf->num_mac >= I40E_VC_MAX_MAC_ADDR_PER_VF) &&
+                  !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps)) {
+               dev_err(&pf->pdev->dev,
+                       "VF is not trusted, switch the VF to trusted to add more MAC addresses\n");
                ret = -EPERM;
        }
        return ret;
@@ -1754,7 +1896,6 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
        int i;
 
        if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states) ||
-           !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) ||
            !i40e_vc_isvalid_vsi_id(vf, vsi_id)) {
                ret = I40E_ERR_PARAM;
                goto error_param;
@@ -1793,6 +1934,8 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
                        ret = I40E_ERR_PARAM;
                        spin_unlock_bh(&vsi->mac_filter_list_lock);
                        goto error_param;
+               } else {
+                       vf->num_mac++;
                }
        }
        spin_unlock_bh(&vsi->mac_filter_list_lock);
@@ -1828,7 +1971,6 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
        int i;
 
        if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states) ||
-           !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) ||
            !i40e_vc_isvalid_vsi_id(vf, vsi_id)) {
                ret = I40E_ERR_PARAM;
                goto error_param;
@@ -1852,6 +1994,8 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
                        ret = I40E_ERR_INVALID_MAC_ADDR;
                        spin_unlock_bh(&vsi->mac_filter_list_lock);
                        goto error_param;
+               } else {
+                       vf->num_mac--;
                }
 
        spin_unlock_bh(&vsi->mac_filter_list_lock);
@@ -1886,8 +2030,13 @@ static int i40e_vc_add_vlan_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
        i40e_status aq_ret = 0;
        int i;
 
+       if ((vf->num_vlan >= I40E_VC_MAX_VLAN_PER_VF) &&
+           !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps)) {
+               dev_err(&pf->pdev->dev,
+                       "VF is not trusted, switch the VF to trusted to add more VLANs\n");
+               aq_ret = I40E_ERR_PARAM;
+               goto error_param;
+       }
        if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states) ||
-           !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) ||
            !i40e_vc_isvalid_vsi_id(vf, vsi_id)) {
                aq_ret = I40E_ERR_PARAM;
                goto error_param;
@@ -1911,6 +2060,19 @@ static int i40e_vc_add_vlan_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
        for (i = 0; i < vfl->num_elements; i++) {
                /* add new VLAN filter */
                int ret = i40e_vsi_add_vlan(vsi, vfl->vlan_id[i]);
+               if (!ret)
+                       vf->num_vlan++;
+
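+               /* mirror the VF's promiscuous state onto the new VLAN */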
+               if (test_bit(I40E_VF_STAT_UC_PROMISC, &vf->vf_states))
+                       i40e_aq_set_vsi_uc_promisc_on_vlan(&pf->hw, vsi->seid,
+                                                          true,
+                                                          vfl->vlan_id[i],
+                                                          NULL);
+               if (test_bit(I40E_VF_STAT_MC_PROMISC, &vf->vf_states))
+                       i40e_aq_set_vsi_mc_promisc_on_vlan(&pf->hw, vsi->seid,
+                                                          true,
+                                                          vfl->vlan_id[i],
+                                                          NULL);
 
                if (ret)
                        dev_err(&pf->pdev->dev,
@@ -1942,7 +2104,6 @@ static int i40e_vc_remove_vlan_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
        int i;
 
        if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states) ||
-           !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) ||
            !i40e_vc_isvalid_vsi_id(vf, vsi_id)) {
                aq_ret = I40E_ERR_PARAM;
                goto error_param;
@@ -1963,6 +2124,19 @@ static int i40e_vc_remove_vlan_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
 
        for (i = 0; i < vfl->num_elements; i++) {
                int ret = i40e_vsi_kill_vlan(vsi, vfl->vlan_id[i]);
+               if (!ret)
+                       vf->num_vlan--;
+
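+               /* remove the VF's promiscuous state from the deleted VLAN */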
+               if (test_bit(I40E_VF_STAT_UC_PROMISC, &vf->vf_states))
+                       i40e_aq_set_vsi_uc_promisc_on_vlan(&pf->hw, vsi->seid,
+                                                          false,
+                                                          vfl->vlan_id[i],
+                                                          NULL);
+               if (test_bit(I40E_VF_STAT_MC_PROMISC, &vf->vf_states))
+                       i40e_aq_set_vsi_mc_promisc_on_vlan(&pf->hw, vsi->seid,
+                                                          false,
+                                                          vfl->vlan_id[i],
+                                                          NULL);
 
                if (ret)
                        dev_err(&pf->pdev->dev,
@@ -2041,6 +2215,135 @@ error_param:
                               aq_ret);
 }
 
+/**
+ * i40e_vc_config_rss_key
+ * @vf: pointer to the VF info
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * Configure the VF's RSS key
+ **/
+static int i40e_vc_config_rss_key(struct i40e_vf *vf, u8 *msg, u16 msglen)
+{
+       struct i40e_virtchnl_rss_key *vrk =
+               (struct i40e_virtchnl_rss_key *)msg;
+       struct i40e_pf *pf = vf->pf;
+       struct i40e_vsi *vsi = NULL;
+       u16 vsi_id = vrk->vsi_id;
+       i40e_status aq_ret = 0;
+
+       if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states) ||
+           !i40e_vc_isvalid_vsi_id(vf, vsi_id) ||
+           (vrk->key_len != I40E_HKEY_ARRAY_SIZE)) {
+               aq_ret = I40E_ERR_PARAM;
+               goto err;
+       }
+
+       vsi = pf->vsi[vf->lan_vsi_idx];
+       aq_ret = i40e_config_rss(vsi, vrk->key, NULL, 0);
+err:
+       /* send the response to the VF */
+       return i40e_vc_send_resp_to_vf(vf, I40E_VIRTCHNL_OP_CONFIG_RSS_KEY,
+                                      aq_ret);
+}
+
+/**
+ * i40e_vc_config_rss_lut
+ * @vf: pointer to the VF info
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * Configure the VF's RSS LUT
+ **/
+static int i40e_vc_config_rss_lut(struct i40e_vf *vf, u8 *msg, u16 msglen)
+{
+       struct i40e_virtchnl_rss_lut *vrl =
+               (struct i40e_virtchnl_rss_lut *)msg;
+       struct i40e_pf *pf = vf->pf;
+       struct i40e_vsi *vsi = NULL;
+       u16 vsi_id = vrl->vsi_id;
+       i40e_status aq_ret = 0;
+
+       if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states) ||
+           !i40e_vc_isvalid_vsi_id(vf, vsi_id) ||
+           (vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE)) {
+               aq_ret = I40E_ERR_PARAM;
+               goto err;
+       }
+
+       vsi = pf->vsi[vf->lan_vsi_idx];
+       aq_ret = i40e_config_rss(vsi, NULL, vrl->lut, I40E_VF_HLUT_ARRAY_SIZE);
+err:
+       /* send the response to the VF */
+       return i40e_vc_send_resp_to_vf(vf, I40E_VIRTCHNL_OP_CONFIG_RSS_LUT,
+                                      aq_ret);
+}
+
+/**
+ * i40e_vc_get_rss_hena
+ * @vf: pointer to the VF info
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * Return the RSS HENA bits allowed by the hardware
+ **/
+static int i40e_vc_get_rss_hena(struct i40e_vf *vf, u8 *msg, u16 msglen)
+{
+       struct i40e_virtchnl_rss_hena *vrh = NULL;
+       struct i40e_pf *pf = vf->pf;
+       i40e_status aq_ret = 0;
+       int len = 0;
+
+       if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states)) {
+               aq_ret = I40E_ERR_PARAM;
+               goto err;
+       }
+       len = sizeof(struct i40e_virtchnl_rss_hena);
+
+       vrh = kzalloc(len, GFP_KERNEL);
+       if (!vrh) {
+               aq_ret = I40E_ERR_NO_MEMORY;
+               len = 0;
+               goto err;
+       }
+       vrh->hena = i40e_pf_get_default_rss_hena(pf);
+err:
+       /* send the response back to the VF */
+       aq_ret = i40e_vc_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS,
+                                       aq_ret, (u8 *)vrh, len);
+       kfree(vrh);
+       return aq_ret;
+}
+
+/**
+ * i40e_vc_set_rss_hena
+ * @vf: pointer to the VF info
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * Set the RSS HENA bits for the VF
+ **/
+static int i40e_vc_set_rss_hena(struct i40e_vf *vf, u8 *msg, u16 msglen)
+{
+       struct i40e_virtchnl_rss_hena *vrh =
+               (struct i40e_virtchnl_rss_hena *)msg;
+       struct i40e_pf *pf = vf->pf;
+       struct i40e_hw *hw = &pf->hw;
+       i40e_status aq_ret = 0;
+
+       if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states)) {
+               aq_ret = I40E_ERR_PARAM;
+               goto err;
+       }
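+       /* hena is 64 bits wide; program it across the two HENA registers */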
+       i40e_write_rx_ctl(hw, I40E_VFQF_HENA1(0, vf->vf_id), (u32)vrh->hena);
+       i40e_write_rx_ctl(hw, I40E_VFQF_HENA1(1, vf->vf_id),
+                         (u32)(vrh->hena >> 32));
+
+       /* send the response to the VF */
+err:
+       return i40e_vc_send_resp_to_vf(vf, I40E_VIRTCHNL_OP_SET_RSS_HENA,
+                                      aq_ret);
+}
+
 /**
  * i40e_vc_validate_vf_msg
  * @vf: pointer to the VF info
@@ -2054,7 +2357,7 @@ static int i40e_vc_validate_vf_msg(struct i40e_vf *vf, u32 v_opcode,
                                   u32 v_retval, u8 *msg, u16 msglen)
 {
        bool err_msg_format = false;
-       int valid_len;
+       int valid_len = 0;
 
        /* Check if VF is disabled. */
        if (test_bit(I40E_VF_STAT_DISABLED, &vf->vf_states))
@@ -2066,13 +2369,10 @@ static int i40e_vc_validate_vf_msg(struct i40e_vf *vf, u32 v_opcode,
                valid_len = sizeof(struct i40e_virtchnl_version_info);
                break;
        case I40E_VIRTCHNL_OP_RESET_VF:
-               valid_len = 0;
                break;
        case I40E_VIRTCHNL_OP_GET_VF_RESOURCES:
                if (VF_IS_V11(vf))
                        valid_len = sizeof(u32);
-               else
-                       valid_len = 0;
                break;
        case I40E_VIRTCHNL_OP_CONFIG_TX_QUEUE:
                valid_len = sizeof(struct i40e_virtchnl_txq_info);
@@ -2162,6 +2462,35 @@ static int i40e_vc_validate_vf_msg(struct i40e_vf *vf, u32 v_opcode,
                                sizeof(struct i40e_virtchnl_iwarp_qv_info));
                }
                break;
+       case I40E_VIRTCHNL_OP_CONFIG_RSS_KEY:
+               valid_len = sizeof(struct i40e_virtchnl_rss_key);
+               if (msglen >= valid_len) {
+                       struct i40e_virtchnl_rss_key *vrk =
+                               (struct i40e_virtchnl_rss_key *)msg;
+                       if (vrk->key_len != I40E_HKEY_ARRAY_SIZE) {
+                               err_msg_format = true;
+                               break;
+                       }
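+                       /* the struct size already counts one byte of key[] */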
+                       valid_len += vrk->key_len - 1;
+               }
+               break;
+       case I40E_VIRTCHNL_OP_CONFIG_RSS_LUT:
+               valid_len = sizeof(struct i40e_virtchnl_rss_lut);
+               if (msglen >= valid_len) {
+                       struct i40e_virtchnl_rss_lut *vrl =
+                               (struct i40e_virtchnl_rss_lut *)msg;
+                       if (vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE) {
+                               err_msg_format = true;
+                               break;
+                       }
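+                       /* the struct size already counts one LUT entry */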
+                       valid_len += vrl->lut_entries - 1;
+               }
+               break;
+       case I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+               break;
+       case I40E_VIRTCHNL_OP_SET_RSS_HENA:
+               valid_len = sizeof(struct i40e_virtchnl_rss_hena);
+               break;
        /* These are always errors coming from the VF. */
        case I40E_VIRTCHNL_OP_EVENT:
        case I40E_VIRTCHNL_OP_UNKNOWN:
@@ -2188,11 +2517,11 @@ static int i40e_vc_validate_vf_msg(struct i40e_vf *vf, u32 v_opcode,
  * called from the common aeq/arq handler to
  * process request from VF
  **/
-int i40e_vc_process_vf_msg(struct i40e_pf *pf, u16 vf_id, u32 v_opcode,
+int i40e_vc_process_vf_msg(struct i40e_pf *pf, s16 vf_id, u32 v_opcode,
                           u32 v_retval, u8 *msg, u16 msglen)
 {
        struct i40e_hw *hw = &pf->hw;
-       unsigned int local_vf_id = vf_id - hw->func_caps.vf_base_id;
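+       /* keep the math signed so a vf_id below vf_base_id goes negative */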
+       int local_vf_id = vf_id - (s16)hw->func_caps.vf_base_id;
        struct i40e_vf *vf;
        int ret;
 
@@ -2260,6 +2589,19 @@ int i40e_vc_process_vf_msg(struct i40e_pf *pf, u16 vf_id, u32 v_opcode,
        case I40E_VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP:
                ret = i40e_vc_iwarp_qvmap_msg(vf, msg, msglen, false);
                break;
+       case I40E_VIRTCHNL_OP_CONFIG_RSS_KEY:
+               ret = i40e_vc_config_rss_key(vf, msg, msglen);
+               break;
+       case I40E_VIRTCHNL_OP_CONFIG_RSS_LUT:
+               ret = i40e_vc_config_rss_lut(vf, msg, msglen);
+               break;
+       case I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+               ret = i40e_vc_get_rss_hena(vf, msg, msglen);
+               break;
+       case I40E_VIRTCHNL_OP_SET_RSS_HENA:
+               ret = i40e_vc_set_rss_hena(vf, msg, msglen);
+               break;
+
        case I40E_VIRTCHNL_OP_UNKNOWN:
        default:
                dev_err(&pf->pdev->dev, "Unsupported opcode %d from VF %d\n",
@@ -2281,9 +2623,10 @@ int i40e_vc_process_vf_msg(struct i40e_pf *pf, u16 vf_id, u32 v_opcode,
  **/
 int i40e_vc_process_vflr_event(struct i40e_pf *pf)
 {
-       u32 reg, reg_idx, bit_idx, vf_id;
        struct i40e_hw *hw = &pf->hw;
+       u32 reg, reg_idx, bit_idx;
        struct i40e_vf *vf;
+       int vf_id;
 
        if (!test_bit(__I40E_VFLR_EVENT_PENDING, &pf->state))
                return 0;
index 838cbd2..8751741 100644
@@ -61,6 +61,8 @@ enum i40e_vf_states {
        I40E_VF_STAT_IWARPENA,
        I40E_VF_STAT_FCOEENA,
        I40E_VF_STAT_DISABLED,
+       I40E_VF_STAT_MC_PROMISC,
+       I40E_VF_STAT_UC_PROMISC,
 };
 
 /* VF capabilities */
@@ -75,7 +77,7 @@ struct i40e_vf {
        struct i40e_pf *pf;
 
        /* VF id in the PF space */
-       u16 vf_id;
+       s16 vf_id;
        /* all VF vsis connect to the same parent */
        enum i40e_switch_element_types parent_type;
        struct i40e_virtchnl_version_info vf_ver;
@@ -109,6 +111,9 @@ struct i40e_vf {
        bool link_forced;
        bool link_up;           /* only valid if VF link is forced */
        bool spoofchk;
+       u16 num_mac;
+       u16 num_vlan;
+
        /* RDMA Client */
        struct i40e_virtchnl_iwarp_qvlist_info *qvlist_info;
 };
@@ -116,7 +121,7 @@ struct i40e_vf {
 void i40e_free_vfs(struct i40e_pf *pf);
 int i40e_pci_sriov_configure(struct pci_dev *dev, int num_vfs);
 int i40e_alloc_vfs(struct i40e_pf *pf, u16 num_alloc_vfs);
-int i40e_vc_process_vf_msg(struct i40e_pf *pf, u16 vf_id, u32 v_opcode,
+int i40e_vc_process_vf_msg(struct i40e_pf *pf, s16 vf_id, u32 v_opcode,
                           u32 v_retval, u8 *msg, u16 msglen);
 int i40e_vc_process_vflr_event(struct i40e_pf *pf);
 void i40e_reset_vf(struct i40e_vf *vf, bool flr);
index aad8d62..3114dcf 100644
@@ -78,17 +78,17 @@ struct i40e_aq_desc {
 #define I40E_AQ_FLAG_EI_SHIFT  14
 #define I40E_AQ_FLAG_FE_SHIFT  15
 
-#define I40E_AQ_FLAG_DD                (1 << I40E_AQ_FLAG_DD_SHIFT)  /* 0x1    */
-#define I40E_AQ_FLAG_CMP       (1 << I40E_AQ_FLAG_CMP_SHIFT) /* 0x2    */
-#define I40E_AQ_FLAG_ERR       (1 << I40E_AQ_FLAG_ERR_SHIFT) /* 0x4    */
-#define I40E_AQ_FLAG_VFE       (1 << I40E_AQ_FLAG_VFE_SHIFT) /* 0x8    */
-#define I40E_AQ_FLAG_LB                (1 << I40E_AQ_FLAG_LB_SHIFT)  /* 0x200  */
-#define I40E_AQ_FLAG_RD                (1 << I40E_AQ_FLAG_RD_SHIFT)  /* 0x400  */
-#define I40E_AQ_FLAG_VFC       (1 << I40E_AQ_FLAG_VFC_SHIFT) /* 0x800  */
-#define I40E_AQ_FLAG_BUF       (1 << I40E_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
-#define I40E_AQ_FLAG_SI                (1 << I40E_AQ_FLAG_SI_SHIFT)  /* 0x2000 */
-#define I40E_AQ_FLAG_EI                (1 << I40E_AQ_FLAG_EI_SHIFT)  /* 0x4000 */
-#define I40E_AQ_FLAG_FE                (1 << I40E_AQ_FLAG_FE_SHIFT)  /* 0x8000 */
+#define I40E_AQ_FLAG_DD                BIT(I40E_AQ_FLAG_DD_SHIFT)  /* 0x1    */
+#define I40E_AQ_FLAG_CMP       BIT(I40E_AQ_FLAG_CMP_SHIFT) /* 0x2    */
+#define I40E_AQ_FLAG_ERR       BIT(I40E_AQ_FLAG_ERR_SHIFT) /* 0x4    */
+#define I40E_AQ_FLAG_VFE       BIT(I40E_AQ_FLAG_VFE_SHIFT) /* 0x8    */
+#define I40E_AQ_FLAG_LB                BIT(I40E_AQ_FLAG_LB_SHIFT)  /* 0x200  */
+#define I40E_AQ_FLAG_RD                BIT(I40E_AQ_FLAG_RD_SHIFT)  /* 0x400  */
+#define I40E_AQ_FLAG_VFC       BIT(I40E_AQ_FLAG_VFC_SHIFT) /* 0x800  */
+#define I40E_AQ_FLAG_BUF       BIT(I40E_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
+#define I40E_AQ_FLAG_SI                BIT(I40E_AQ_FLAG_SI_SHIFT)  /* 0x2000 */
+#define I40E_AQ_FLAG_EI                BIT(I40E_AQ_FLAG_EI_SHIFT)  /* 0x4000 */
+#define I40E_AQ_FLAG_FE                BIT(I40E_AQ_FLAG_FE_SHIFT)  /* 0x8000 */
 
 /* error codes */
 enum i40e_admin_queue_err {
@@ -205,10 +205,6 @@ enum i40e_admin_queue_opc {
        i40e_aqc_opc_resume_port_tx                             = 0x041C,
        i40e_aqc_opc_configure_partition_bw                     = 0x041D,
 
-       /* hmc */
-       i40e_aqc_opc_query_hmc_resource_profile = 0x0500,
-       i40e_aqc_opc_set_hmc_resource_profile   = 0x0501,
-
        /* phy commands*/
        i40e_aqc_opc_get_phy_abilities          = 0x0600,
        i40e_aqc_opc_set_phy_config             = 0x0601,
@@ -426,6 +422,7 @@ struct i40e_aqc_list_capabilities_element_resp {
 #define I40E_AQ_CAP_ID_SDP             0x0062
 #define I40E_AQ_CAP_ID_MDIO            0x0063
 #define I40E_AQ_CAP_ID_WSR_PROT                0x0064
+#define I40E_AQ_CAP_ID_NVM_MGMT                0x0080
 #define I40E_AQ_CAP_ID_FLEX10          0x00F1
 #define I40E_AQ_CAP_ID_CEM             0x00F2
 
@@ -1582,27 +1579,6 @@ struct i40e_aqc_configure_partition_bw_data {
 
 I40E_CHECK_STRUCT_LEN(0x22, i40e_aqc_configure_partition_bw_data);
 
-/* Get and set the active HMC resource profile and status.
- * (direct 0x0500) and (direct 0x0501)
- */
-struct i40e_aq_get_set_hmc_resource_profile {
-       u8      pm_profile;
-       u8      pe_vf_enabled;
-       u8      reserved[14];
-};
-
-I40E_CHECK_CMD_LENGTH(i40e_aq_get_set_hmc_resource_profile);
-
-enum i40e_aq_hmc_profile {
-       /* I40E_HMC_PROFILE_NO_CHANGE    = 0, reserved */
-       I40E_HMC_PROFILE_DEFAULT        = 1,
-       I40E_HMC_PROFILE_FAVOR_VF       = 2,
-       I40E_HMC_PROFILE_EQUAL          = 3,
-};
-
-#define I40E_AQ_GET_HMC_RESOURCE_PROFILE_PM_MASK       0xF
-#define I40E_AQ_GET_HMC_RESOURCE_PROFILE_COUNT_MASK    0x3F
-
 /* Get PHY Abilities (indirect 0x0600) uses the generic indirect struct */
 
 /* set in param0 for get phy abilities to report qualified modules */
@@ -1649,11 +1625,11 @@ enum i40e_aq_phy_type {
 
 enum i40e_aq_link_speed {
        I40E_LINK_SPEED_UNKNOWN = 0,
-       I40E_LINK_SPEED_100MB   = (1 << I40E_LINK_SPEED_100MB_SHIFT),
-       I40E_LINK_SPEED_1GB     = (1 << I40E_LINK_SPEED_1000MB_SHIFT),
-       I40E_LINK_SPEED_10GB    = (1 << I40E_LINK_SPEED_10GB_SHIFT),
-       I40E_LINK_SPEED_40GB    = (1 << I40E_LINK_SPEED_40GB_SHIFT),
-       I40E_LINK_SPEED_20GB    = (1 << I40E_LINK_SPEED_20GB_SHIFT)
+       I40E_LINK_SPEED_100MB   = BIT(I40E_LINK_SPEED_100MB_SHIFT),
+       I40E_LINK_SPEED_1GB     = BIT(I40E_LINK_SPEED_1000MB_SHIFT),
+       I40E_LINK_SPEED_10GB    = BIT(I40E_LINK_SPEED_10GB_SHIFT),
+       I40E_LINK_SPEED_40GB    = BIT(I40E_LINK_SPEED_40GB_SHIFT),
+       I40E_LINK_SPEED_20GB    = BIT(I40E_LINK_SPEED_20GB_SHIFT)
 };
 
 struct i40e_aqc_module_desc {
@@ -1924,9 +1900,9 @@ I40E_CHECK_CMD_LENGTH(i40e_aqc_nvm_config_write);
 /* Used for 0x0704 as well as for 0x0705 commands */
 #define I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT                1
 #define I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
-                               (1 << I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+                               BIT(I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
 #define I40E_AQ_ANVM_FEATURE           0
-#define I40E_AQ_ANVM_IMMEDIATE_FIELD   (1 << FEATURE_OR_IMMEDIATE_SHIFT)
+#define I40E_AQ_ANVM_IMMEDIATE_FIELD   BIT(I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
 struct i40e_aqc_nvm_config_data_feature {
        __le16 feature_id;
 #define I40E_AQ_ANVM_FEATURE_OPTION_OEM_ONLY           0x01
@@ -2195,7 +2171,7 @@ struct i40e_aqc_del_udp_tunnel_completion {
 I40E_CHECK_CMD_LENGTH(i40e_aqc_del_udp_tunnel_completion);
 
 struct i40e_aqc_get_set_rss_key {
-#define I40E_AQC_SET_RSS_KEY_VSI_VALID         (0x1 << 15)
+#define I40E_AQC_SET_RSS_KEY_VSI_VALID         BIT(15)
 #define I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT      0
 #define I40E_AQC_SET_RSS_KEY_VSI_ID_MASK       (0x3FF << \
                                        I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT)
@@ -2215,14 +2191,14 @@ struct i40e_aqc_get_set_rss_key_data {
 I40E_CHECK_STRUCT_LEN(0x34, i40e_aqc_get_set_rss_key_data);
 
 struct  i40e_aqc_get_set_rss_lut {
-#define I40E_AQC_SET_RSS_LUT_VSI_VALID         (0x1 << 15)
+#define I40E_AQC_SET_RSS_LUT_VSI_VALID         BIT(15)
 #define I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT      0
 #define I40E_AQC_SET_RSS_LUT_VSI_ID_MASK       (0x3FF << \
                                        I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT)
        __le16  vsi_id;
 #define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT  0
-#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK   (0x1 << \
-                                       I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
+#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK \
+                               BIT(I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
 
 #define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_VSI    0
 #define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_PF     1
index 4db0c03..8f64204 100644
@@ -59,6 +59,7 @@ i40e_status i40e_set_mac_type(struct i40e_hw *hw)
                case I40E_DEV_ID_1G_BASE_T_X722:
                case I40E_DEV_ID_10G_BASE_T_X722:
                case I40E_DEV_ID_SFP_I_X722:
+               case I40E_DEV_ID_QSFP_I_X722:
                        hw->mac.type = I40E_MAC_X722;
                        break;
                case I40E_DEV_ID_X722_VF:
index 7023570..d34972b 100644
@@ -45,6 +45,7 @@
 #define I40E_DEV_ID_1G_BASE_T_X722     0x37D1
 #define I40E_DEV_ID_10G_BASE_T_X722    0x37D2
 #define I40E_DEV_ID_SFP_I_X722         0x37D3
+#define I40E_DEV_ID_QSFP_I_X722                0x37D4
 #define I40E_DEV_ID_X722_VF            0x37CD
 #define I40E_DEV_ID_X722_VF_HV         0x37D9
 
index fc22818..fd7dae4 100644
@@ -496,7 +496,6 @@ err:
 void i40evf_clean_rx_ring(struct i40e_ring *rx_ring)
 {
        struct device *dev = rx_ring->dev;
-       struct i40e_rx_buffer *rx_bi;
        unsigned long bi_size;
        u16 i;
 
@@ -504,48 +503,22 @@ void i40evf_clean_rx_ring(struct i40e_ring *rx_ring)
        if (!rx_ring->rx_bi)
                return;
 
-       if (ring_is_ps_enabled(rx_ring)) {
-               int bufsz = ALIGN(rx_ring->rx_hdr_len, 256) * rx_ring->count;
-
-               rx_bi = &rx_ring->rx_bi[0];
-               if (rx_bi->hdr_buf) {
-                       dma_free_coherent(dev,
-                                         bufsz,
-                                         rx_bi->hdr_buf,
-                                         rx_bi->dma);
-                       for (i = 0; i < rx_ring->count; i++) {
-                               rx_bi = &rx_ring->rx_bi[i];
-                               rx_bi->dma = 0;
-                               rx_bi->hdr_buf = NULL;
-                       }
-               }
-       }
        /* Free all the Rx ring sk_buffs */
        for (i = 0; i < rx_ring->count; i++) {
-               rx_bi = &rx_ring->rx_bi[i];
-               if (rx_bi->dma) {
-                       dma_unmap_single(dev,
-                                        rx_bi->dma,
-                                        rx_ring->rx_buf_len,
-                                        DMA_FROM_DEVICE);
-                       rx_bi->dma = 0;
-               }
+               struct i40e_rx_buffer *rx_bi = &rx_ring->rx_bi[i];
+
                if (rx_bi->skb) {
                        dev_kfree_skb(rx_bi->skb);
                        rx_bi->skb = NULL;
                }
-               if (rx_bi->page) {
-                       if (rx_bi->page_dma) {
-                               dma_unmap_page(dev,
-                                              rx_bi->page_dma,
-                                              PAGE_SIZE,
-                                              DMA_FROM_DEVICE);
-                               rx_bi->page_dma = 0;
-                       }
-                       __free_page(rx_bi->page);
-                       rx_bi->page = NULL;
-                       rx_bi->page_offset = 0;
-               }
+               if (!rx_bi->page)
+                       continue;
+
+               dma_unmap_page(dev, rx_bi->dma, PAGE_SIZE, DMA_FROM_DEVICE);
+               __free_pages(rx_bi->page, 0);
+
+               rx_bi->page = NULL;
+               rx_bi->page_offset = 0;
        }
 
        bi_size = sizeof(struct i40e_rx_buffer) * rx_ring->count;
@@ -554,6 +527,7 @@ void i40evf_clean_rx_ring(struct i40e_ring *rx_ring)
        /* Zero out the descriptor ring */
        memset(rx_ring->desc, 0, rx_ring->size);
 
+       rx_ring->next_to_alloc = 0;
        rx_ring->next_to_clean = 0;
        rx_ring->next_to_use = 0;
 }
@@ -577,37 +551,6 @@ void i40evf_free_rx_resources(struct i40e_ring *rx_ring)
        }
 }
 
-/**
- * i40evf_alloc_rx_headers - allocate rx header buffers
- * @rx_ring: ring to alloc buffers
- *
- * Allocate rx header buffers for the entire ring. As these are static,
- * this is only called when setting up a new ring.
- **/
-void i40evf_alloc_rx_headers(struct i40e_ring *rx_ring)
-{
-       struct device *dev = rx_ring->dev;
-       struct i40e_rx_buffer *rx_bi;
-       dma_addr_t dma;
-       void *buffer;
-       int buf_size;
-       int i;
-
-       if (rx_ring->rx_bi[0].hdr_buf)
-               return;
-       /* Make sure the buffers don't cross cache line boundaries. */
-       buf_size = ALIGN(rx_ring->rx_hdr_len, 256);
-       buffer = dma_alloc_coherent(dev, buf_size * rx_ring->count,
-                                   &dma, GFP_KERNEL);
-       if (!buffer)
-               return;
-       for (i = 0; i < rx_ring->count; i++) {
-               rx_bi = &rx_ring->rx_bi[i];
-               rx_bi->dma = dma + (i * buf_size);
-               rx_bi->hdr_buf = buffer + (i * buf_size);
-       }
-}
-
 /**
  * i40evf_setup_rx_descriptors - Allocate Rx descriptors
  * @rx_ring: Rx descriptor ring (for a specific queue) to setup
@@ -629,9 +572,7 @@ int i40evf_setup_rx_descriptors(struct i40e_ring *rx_ring)
        u64_stats_init(&rx_ring->syncp);
 
        /* Round up to nearest 4K */
-       rx_ring->size = ring_is_16byte_desc_enabled(rx_ring)
-               ? rx_ring->count * sizeof(union i40e_16byte_rx_desc)
-               : rx_ring->count * sizeof(union i40e_32byte_rx_desc);
+       rx_ring->size = rx_ring->count * sizeof(union i40e_32byte_rx_desc);
        rx_ring->size = ALIGN(rx_ring->size, 4096);
        rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
                                           &rx_ring->dma, GFP_KERNEL);
@@ -642,6 +583,7 @@ int i40evf_setup_rx_descriptors(struct i40e_ring *rx_ring)
                goto err;
        }
 
+       rx_ring->next_to_alloc = 0;
        rx_ring->next_to_clean = 0;
        rx_ring->next_to_use = 0;
 
@@ -660,6 +602,10 @@ err:
 static inline void i40e_release_rx_desc(struct i40e_ring *rx_ring, u32 val)
 {
        rx_ring->next_to_use = val;
+
+       /* update next to alloc since we have filled the ring */
+       rx_ring->next_to_alloc = val;
+
        /* Force memory writes to complete before letting h/w
         * know there are new descriptors to fetch.  (Only
         * applicable for weak-ordered memory model archs,
@@ -670,160 +616,122 @@ static inline void i40e_release_rx_desc(struct i40e_ring *rx_ring, u32 val)
 }
 
 /**
- * i40evf_alloc_rx_buffers_ps - Replace used receive buffers; packet split
- * @rx_ring: ring to place buffers on
- * @cleaned_count: number of buffers to replace
+ * i40e_alloc_mapped_page - recycle or make a new page
+ * @rx_ring: ring to use
+ * @bi: rx_buffer struct to modify
  *
- * Returns true if any errors on allocation
+ * Returns true if the page was successfully allocated or
+ * reused.
  **/
-bool i40evf_alloc_rx_buffers_ps(struct i40e_ring *rx_ring, u16 cleaned_count)
+static bool i40e_alloc_mapped_page(struct i40e_ring *rx_ring,
+                                  struct i40e_rx_buffer *bi)
 {
-       u16 i = rx_ring->next_to_use;
-       union i40e_rx_desc *rx_desc;
-       struct i40e_rx_buffer *bi;
-       const int current_node = numa_node_id();
+       struct page *page = bi->page;
+       dma_addr_t dma;
 
-       /* do nothing if no valid netdev defined */
-       if (!rx_ring->netdev || !cleaned_count)
-               return false;
+       /* since we are recycling buffers we should seldom need to alloc */
+       if (likely(page)) {
+               rx_ring->rx_stats.page_reuse_count++;
+               return true;
+       }
 
-       while (cleaned_count--) {
-               rx_desc = I40E_RX_DESC(rx_ring, i);
-               bi = &rx_ring->rx_bi[i];
+       /* alloc new page for storage */
+       page = dev_alloc_page();
+       if (unlikely(!page)) {
+               rx_ring->rx_stats.alloc_page_failed++;
+               return false;
+       }
 
-               if (bi->skb) /* desc is in use */
-                       goto no_buffers;
+       /* map page for use */
+       dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
 
-       /* If we've been moved to a different NUMA node, release the
-        * page so we can get a new one on the current node.
+       /* if mapping failed free memory back to system since
+        * there isn't much point in holding memory we can't use
         */
-               if (bi->page &&  page_to_nid(bi->page) != current_node) {
-                       dma_unmap_page(rx_ring->dev,
-                                      bi->page_dma,
-                                      PAGE_SIZE,
-                                      DMA_FROM_DEVICE);
-                       __free_page(bi->page);
-                       bi->page = NULL;
-                       bi->page_dma = 0;
-                       rx_ring->rx_stats.realloc_count++;
-               } else if (bi->page) {
-                       rx_ring->rx_stats.page_reuse_count++;
-               }
-
-               if (!bi->page) {
-                       bi->page = alloc_page(GFP_ATOMIC);
-                       if (!bi->page) {
-                               rx_ring->rx_stats.alloc_page_failed++;
-                               goto no_buffers;
-                       }
-                       bi->page_dma = dma_map_page(rx_ring->dev,
-                                                   bi->page,
-                                                   0,
-                                                   PAGE_SIZE,
-                                                   DMA_FROM_DEVICE);
-                       if (dma_mapping_error(rx_ring->dev, bi->page_dma)) {
-                               rx_ring->rx_stats.alloc_page_failed++;
-                               __free_page(bi->page);
-                               bi->page = NULL;
-                               bi->page_dma = 0;
-                               bi->page_offset = 0;
-                               goto no_buffers;
-                       }
-                       bi->page_offset = 0;
-               }
-
-               /* Refresh the desc even if buffer_addrs didn't change
-                * because each write-back erases this info.
-                */
-               rx_desc->read.pkt_addr =
-                               cpu_to_le64(bi->page_dma + bi->page_offset);
-               rx_desc->read.hdr_addr = cpu_to_le64(bi->dma);
-               i++;
-               if (i == rx_ring->count)
-                       i = 0;
+       if (dma_mapping_error(rx_ring->dev, dma)) {
+               __free_pages(page, 0);
+               rx_ring->rx_stats.alloc_page_failed++;
+               return false;
        }
 
-       if (rx_ring->next_to_use != i)
-               i40e_release_rx_desc(rx_ring, i);
+       bi->dma = dma;
+       bi->page = page;
+       bi->page_offset = 0;
 
-       return false;
+       return true;
+}
 
-no_buffers:
-       if (rx_ring->next_to_use != i)
-               i40e_release_rx_desc(rx_ring, i);
+/**
+ * i40e_receive_skb - Send a completed packet up the stack
+ * @rx_ring:  rx ring in play
+ * @skb: packet to send up
+ * @vlan_tag: vlan tag for packet
+ **/
+static void i40e_receive_skb(struct i40e_ring *rx_ring,
+                            struct sk_buff *skb, u16 vlan_tag)
+{
+       struct i40e_q_vector *q_vector = rx_ring->q_vector;
 
-       /* make sure to come back via polling to try again after
-        * allocation failure
-        */
-       return true;
+       if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
+           (vlan_tag & VLAN_VID_MASK))
+               __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
+
+       napi_gro_receive(&q_vector->napi, skb);
 }
 
 /**
- * i40evf_alloc_rx_buffers_1buf - Replace used receive buffers; single buffer
+ * i40evf_alloc_rx_buffers - Replace used receive buffers
  * @rx_ring: ring to place buffers on
  * @cleaned_count: number of buffers to replace
  *
- * Returns true if any errors on allocation
+ * Returns false if all allocations were successful, true if any fail
  **/
-bool i40evf_alloc_rx_buffers_1buf(struct i40e_ring *rx_ring, u16 cleaned_count)
+bool i40evf_alloc_rx_buffers(struct i40e_ring *rx_ring, u16 cleaned_count)
 {
-       u16 i = rx_ring->next_to_use;
+       u16 ntu = rx_ring->next_to_use;
        union i40e_rx_desc *rx_desc;
        struct i40e_rx_buffer *bi;
-       struct sk_buff *skb;
 
        /* do nothing if no valid netdev defined */
        if (!rx_ring->netdev || !cleaned_count)
                return false;
 
-       while (cleaned_count--) {
-               rx_desc = I40E_RX_DESC(rx_ring, i);
-               bi = &rx_ring->rx_bi[i];
-               skb = bi->skb;
-
-               if (!skb) {
-                       skb = __netdev_alloc_skb_ip_align(rx_ring->netdev,
-                                                         rx_ring->rx_buf_len,
-                                                         GFP_ATOMIC |
-                                                         __GFP_NOWARN);
-                       if (!skb) {
-                               rx_ring->rx_stats.alloc_buff_failed++;
-                               goto no_buffers;
-                       }
-                       /* initialize queue mapping */
-                       skb_record_rx_queue(skb, rx_ring->queue_index);
-                       bi->skb = skb;
-               }
+       rx_desc = I40E_RX_DESC(rx_ring, ntu);
+       bi = &rx_ring->rx_bi[ntu];
 
-               if (!bi->dma) {
-                       bi->dma = dma_map_single(rx_ring->dev,
-                                                skb->data,
-                                                rx_ring->rx_buf_len,
-                                                DMA_FROM_DEVICE);
-                       if (dma_mapping_error(rx_ring->dev, bi->dma)) {
-                               rx_ring->rx_stats.alloc_buff_failed++;
-                               bi->dma = 0;
-                               dev_kfree_skb(bi->skb);
-                               bi->skb = NULL;
-                               goto no_buffers;
-                       }
-               }
+       do {
+               if (!i40e_alloc_mapped_page(rx_ring, bi))
+                       goto no_buffers;
 
-               rx_desc->read.pkt_addr = cpu_to_le64(bi->dma);
+               /* Refresh the desc even if buffer_addrs didn't change
+                * because each write-back erases this info.
+                */
+               rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset);
                rx_desc->read.hdr_addr = 0;
-               i++;
-               if (i == rx_ring->count)
-                       i = 0;
-       }
 
-       if (rx_ring->next_to_use != i)
-               i40e_release_rx_desc(rx_ring, i);
+               rx_desc++;
+               bi++;
+               ntu++;
+               if (unlikely(ntu == rx_ring->count)) {
+                       rx_desc = I40E_RX_DESC(rx_ring, 0);
+                       bi = rx_ring->rx_bi;
+                       ntu = 0;
+               }
+
+               /* clear the status bits for the next_to_use descriptor */
+               rx_desc->wb.qword1.status_error_len = 0;
+
+               cleaned_count--;
+       } while (cleaned_count);
+
+       if (rx_ring->next_to_use != ntu)
+               i40e_release_rx_desc(rx_ring, ntu);
 
        return false;
 
 no_buffers:
-       if (rx_ring->next_to_use != i)
-               i40e_release_rx_desc(rx_ring, i);
+       if (rx_ring->next_to_use != ntu)
+               i40e_release_rx_desc(rx_ring, ntu);
 
        /* make sure to come back via polling to try again after
         * allocation failure
@@ -831,42 +739,36 @@ no_buffers:
        return true;
 }
 
-/**
- * i40e_receive_skb - Send a completed packet up the stack
- * @rx_ring:  rx ring in play
- * @skb: packet to send up
- * @vlan_tag: vlan tag for packet
- **/
-static void i40e_receive_skb(struct i40e_ring *rx_ring,
-                            struct sk_buff *skb, u16 vlan_tag)
-{
-       struct i40e_q_vector *q_vector = rx_ring->q_vector;
-
-       if (vlan_tag & VLAN_VID_MASK)
-               __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
-
-       napi_gro_receive(&q_vector->napi, skb);
-}
-
 /**
  * i40e_rx_checksum - Indicate in skb if hw indicated a good cksum
  * @vsi: the VSI we care about
  * @skb: skb currently being received and modified
- * @rx_status: status value of last descriptor in packet
- * @rx_error: error value of last descriptor in packet
- * @rx_ptype: ptype value of last descriptor in packet
+ * @rx_desc: the receive descriptor
+ *
+ * skb->protocol must be set before this function is called
  **/
 static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
                                    struct sk_buff *skb,
-                                   u32 rx_status,
-                                   u32 rx_error,
-                                   u16 rx_ptype)
+                                   union i40e_rx_desc *rx_desc)
 {
-       struct i40e_rx_ptype_decoded decoded = decode_rx_desc_ptype(rx_ptype);
-       bool ipv4, ipv6, ipv4_tunnel, ipv6_tunnel;
+       struct i40e_rx_ptype_decoded decoded;
+       bool ipv4, ipv6, tunnel = false;
+       u32 rx_error, rx_status;
+       u8 ptype;
+       u64 qword;
+
+       qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+       ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT;
+       rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >>
+                  I40E_RXD_QW1_ERROR_SHIFT;
+       rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
+                   I40E_RXD_QW1_STATUS_SHIFT;
+       decoded = decode_rx_desc_ptype(ptype);
 
        skb->ip_summed = CHECKSUM_NONE;
 
+       skb_checksum_none_assert(skb);
+
        /* Rx csum enabled and ip headers found? */
        if (!(vsi->netdev->features & NETIF_F_RXCSUM))
                return;
@@ -912,14 +814,13 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
         * doesn't make it a hard requirement so if we have validated the
         * inner checksum report CHECKSUM_UNNECESSARY.
         */
-
-       ipv4_tunnel = (rx_ptype >= I40E_RX_PTYPE_GRENAT4_MAC_PAY3) &&
-                    (rx_ptype <= I40E_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4);
-       ipv6_tunnel = (rx_ptype >= I40E_RX_PTYPE_GRENAT6_MAC_PAY3) &&
-                    (rx_ptype <= I40E_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4);
+       if (decoded.inner_prot & (I40E_RX_PTYPE_INNER_PROT_TCP |
+                                 I40E_RX_PTYPE_INNER_PROT_UDP |
+                                 I40E_RX_PTYPE_INNER_PROT_SCTP))
+               tunnel = true;
 
        skb->ip_summed = CHECKSUM_UNNECESSARY;
-       skb->csum_level = ipv4_tunnel || ipv6_tunnel;
+       skb->csum_level = tunnel ? 1 : 0;
 
        return;
 
@@ -933,7 +834,7 @@ checksum_fail:
  *
  * Returns a hash type to be used by skb_set_hash
  **/
-static inline enum pkt_hash_types i40e_ptype_to_htype(u8 ptype)
+static inline int i40e_ptype_to_htype(u8 ptype)
 {
        struct i40e_rx_ptype_decoded decoded = decode_rx_desc_ptype(ptype);
 
@@ -961,7 +862,7 @@ static inline void i40e_rx_hash(struct i40e_ring *ring,
                                u8 rx_ptype)
 {
        u32 hash;
-       const __le64 rss_mask  =
+       const __le64 rss_mask =
                cpu_to_le64((u64)I40E_RX_DESC_FLTSTAT_RSS_HASH <<
                            I40E_RX_DESC_STATUS_FLTSTAT_SHIFT);
 
@@ -975,315 +876,411 @@ static inline void i40e_rx_hash(struct i40e_ring *ring,
 }
 
 /**
- * i40e_clean_rx_irq_ps - Reclaim resources after receive; packet split
- * @rx_ring:  rx ring to clean
- * @budget:   how many cleans we're allowed
+ * i40evf_process_skb_fields - Populate skb header fields from Rx descriptor
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being populated
+ * @rx_ptype: the packet type decoded by hardware
  *
- * Returns true if there's any budget left (e.g. the clean is finished)
+ * This function checks the ring, descriptor, and packet information in
+ * order to populate the hash, checksum, VLAN, protocol, and
+ * other fields within the skb.
  **/
-static int i40e_clean_rx_irq_ps(struct i40e_ring *rx_ring, const int budget)
+static inline
+void i40evf_process_skb_fields(struct i40e_ring *rx_ring,
+                              union i40e_rx_desc *rx_desc, struct sk_buff *skb,
+                              u8 rx_ptype)
 {
-       unsigned int total_rx_bytes = 0, total_rx_packets = 0;
-       u16 rx_packet_len, rx_header_len, rx_sph, rx_hbo;
-       u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
-       struct i40e_vsi *vsi = rx_ring->vsi;
-       u16 i = rx_ring->next_to_clean;
-       union i40e_rx_desc *rx_desc;
-       u32 rx_error, rx_status;
-       bool failure = false;
-       u8 rx_ptype;
-       u64 qword;
-       u32 copysize;
+       i40e_rx_hash(rx_ring, rx_desc, skb, rx_ptype);
 
-       do {
-               struct i40e_rx_buffer *rx_bi;
-               struct sk_buff *skb;
-               u16 vlan_tag;
-               /* return some buffers to hardware, one at a time is too slow */
-               if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
-                       failure = failure ||
-                                 i40evf_alloc_rx_buffers_ps(rx_ring,
-                                                            cleaned_count);
-                       cleaned_count = 0;
-               }
+       /* modifies the skb - consumes the enet header */
+       skb->protocol = eth_type_trans(skb, rx_ring->netdev);
 
-               i = rx_ring->next_to_clean;
-               rx_desc = I40E_RX_DESC(rx_ring, i);
-               qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
-               rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
-                       I40E_RXD_QW1_STATUS_SHIFT;
+       i40e_rx_checksum(rx_ring->vsi, skb, rx_desc);
 
-               if (!(rx_status & BIT(I40E_RX_DESC_STATUS_DD_SHIFT)))
-                       break;
+       skb_record_rx_queue(skb, rx_ring->queue_index);
+}
 
-               /* This memory barrier is needed to keep us from reading
-                * any other fields out of the rx_desc until we know the
-                * DD bit is set.
-                */
-               dma_rmb();
-               /* sync header buffer for reading */
-               dma_sync_single_range_for_cpu(rx_ring->dev,
-                                             rx_ring->rx_bi[0].dma,
-                                             i * rx_ring->rx_hdr_len,
-                                             rx_ring->rx_hdr_len,
-                                             DMA_FROM_DEVICE);
-               rx_bi = &rx_ring->rx_bi[i];
-               skb = rx_bi->skb;
-               if (likely(!skb)) {
-                       skb = __netdev_alloc_skb_ip_align(rx_ring->netdev,
-                                                         rx_ring->rx_hdr_len,
-                                                         GFP_ATOMIC |
-                                                         __GFP_NOWARN);
-                       if (!skb) {
-                               rx_ring->rx_stats.alloc_buff_failed++;
-                               failure = true;
-                               break;
-                       }
+/**
+ * i40e_pull_tail - i40e specific version of skb_pull_tail
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being adjusted
+ *
+ * This function is an i40e specific version of __pskb_pull_tail.  The
+ * main difference between this version and the original function is that
+ * this function can make several assumptions about the state of things
+ * that allow for significant optimizations versus the standard function.
+ * As a result we can do things like drop a frag and maintain an accurate
+ * truesize for the skb.
+ */
+static void i40e_pull_tail(struct i40e_ring *rx_ring, struct sk_buff *skb)
+{
+       struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];
+       unsigned char *va;
+       unsigned int pull_len;
 
-                       /* initialize queue mapping */
-                       skb_record_rx_queue(skb, rx_ring->queue_index);
-                       /* we are reusing so sync this buffer for CPU use */
-                       dma_sync_single_range_for_cpu(rx_ring->dev,
-                                                     rx_ring->rx_bi[0].dma,
-                                                     i * rx_ring->rx_hdr_len,
-                                                     rx_ring->rx_hdr_len,
-                                                     DMA_FROM_DEVICE);
-               }
-               rx_packet_len = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
-                               I40E_RXD_QW1_LENGTH_PBUF_SHIFT;
-               rx_header_len = (qword & I40E_RXD_QW1_LENGTH_HBUF_MASK) >>
-                               I40E_RXD_QW1_LENGTH_HBUF_SHIFT;
-               rx_sph = (qword & I40E_RXD_QW1_LENGTH_SPH_MASK) >>
-                        I40E_RXD_QW1_LENGTH_SPH_SHIFT;
-
-               rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >>
-                          I40E_RXD_QW1_ERROR_SHIFT;
-               rx_hbo = rx_error & BIT(I40E_RX_DESC_ERROR_HBO_SHIFT);
-               rx_error &= ~BIT(I40E_RX_DESC_ERROR_HBO_SHIFT);
+       /* it is valid to use page_address instead of kmap since we are
+        * working with pages allocated out of the lomem pool per
+        * alloc_page(GFP_ATOMIC)
+        */
+       va = skb_frag_address(frag);
 
-               rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
-                          I40E_RXD_QW1_PTYPE_SHIFT;
-               /* sync half-page for reading */
-               dma_sync_single_range_for_cpu(rx_ring->dev,
-                                             rx_bi->page_dma,
-                                             rx_bi->page_offset,
-                                             PAGE_SIZE / 2,
-                                             DMA_FROM_DEVICE);
-               prefetch(page_address(rx_bi->page) + rx_bi->page_offset);
-               rx_bi->skb = NULL;
-               cleaned_count++;
-               copysize = 0;
-               if (rx_hbo || rx_sph) {
-                       int len;
-
-                       if (rx_hbo)
-                               len = I40E_RX_HDR_SIZE;
-                       else
-                               len = rx_header_len;
-                       memcpy(__skb_put(skb, len), rx_bi->hdr_buf, len);
-               } else if (skb->len == 0) {
-                       int len;
-                       unsigned char *va = page_address(rx_bi->page) +
-                                           rx_bi->page_offset;
-
-                       len = min(rx_packet_len, rx_ring->rx_hdr_len);
-                       memcpy(__skb_put(skb, len), va, len);
-                       copysize = len;
-                       rx_packet_len -= len;
-               }
-               /* Get the rest of the data if this was a header split */
-               if (rx_packet_len) {
-                       skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
-                                       rx_bi->page,
-                                       rx_bi->page_offset + copysize,
-                                       rx_packet_len, I40E_RXBUFFER_2048);
-
-                       /* If the page count is more than 2, then both halves
-                        * of the page are used and we need to free it. Do it
-                        * here instead of in the alloc code. Otherwise one
-                        * of the half-pages might be released between now and
-                        * then, and we wouldn't know which one to use.
-                        * Don't call get_page and free_page since those are
-                        * both expensive atomic operations that just change
-                        * the refcount in opposite directions. Just give the
-                        * page to the stack; he can have our refcount.
-                        */
-                       if (page_count(rx_bi->page) > 2) {
-                               dma_unmap_page(rx_ring->dev,
-                                              rx_bi->page_dma,
-                                              PAGE_SIZE,
-                                              DMA_FROM_DEVICE);
-                               rx_bi->page = NULL;
-                               rx_bi->page_dma = 0;
-                               rx_ring->rx_stats.realloc_count++;
-                       } else {
-                               get_page(rx_bi->page);
-                               /* switch to the other half-page here; the
-                                * allocation code programs the right addr
-                                * into HW. If we haven't used this half-page,
-                                * the address won't be changed, and HW can
-                                * just use it next time through.
-                                */
-                               rx_bi->page_offset ^= PAGE_SIZE / 2;
-                       }
+       /* we need the header to contain the greater of ETH_HLEN or
+        * 60 bytes so that eth_skb_pad() can pad frames shorter than
+        * 60 bytes
+        */
+       pull_len = eth_get_headlen(va, I40E_RX_HDR_SIZE);
 
-               }
-               I40E_RX_INCREMENT(rx_ring, i);
+       /* align pull length to size of long to optimize memcpy performance */
+       skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
 
-               if (unlikely(
-                   !(rx_status & BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)))) {
-                       struct i40e_rx_buffer *next_buffer;
+       /* update all of the pointers */
+       skb_frag_size_sub(frag, pull_len);
+       frag->page_offset += pull_len;
+       skb->data_len -= pull_len;
+       skb->tail += pull_len;
+}
 
-                       next_buffer = &rx_ring->rx_bi[i];
-                       next_buffer->skb = skb;
-                       rx_ring->rx_stats.non_eop_descs++;
-                       continue;
-               }
+/**
+ * i40e_cleanup_headers - Correct empty headers
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being fixed
+ *
+ * This function addresses the case where we are pulling data in on
+ * pages only and as such no data is present in the skb header.
+ *
+ * In addition if skb is not at least 60 bytes we need to pad it so that
+ * it is large enough to qualify as a valid Ethernet frame.
+ *
+ * Returns true if an error was encountered and skb was freed.
+ **/
+static bool i40e_cleanup_headers(struct i40e_ring *rx_ring, struct sk_buff *skb)
+{
+       /* place header in linear portion of buffer */
+       if (skb_is_nonlinear(skb))
+               i40e_pull_tail(rx_ring, skb);
 
-               /* ERR_MASK will only have valid bits if EOP set */
-               if (unlikely(rx_error & BIT(I40E_RX_DESC_ERROR_RXE_SHIFT))) {
-                       dev_kfree_skb_any(skb);
-                       continue;
-               }
+       /* if eth_skb_pad returns an error the skb was freed */
+       if (eth_skb_pad(skb))
+               return true;
 
-               i40e_rx_hash(rx_ring, rx_desc, skb, rx_ptype);
+       return false;
+}
 
-               /* probably a little skewed due to removing CRC */
-               total_rx_bytes += skb->len;
-               total_rx_packets++;
+/**
+ * i40e_reuse_rx_page - page flip buffer and store it back on the ring
+ * @rx_ring: rx descriptor ring to store buffers on
+ * @old_buff: donor buffer to have page reused
+ *
+ * Synchronizes page for reuse by the adapter
+ **/
+static void i40e_reuse_rx_page(struct i40e_ring *rx_ring,
+                              struct i40e_rx_buffer *old_buff)
+{
+       struct i40e_rx_buffer *new_buff;
+       u16 nta = rx_ring->next_to_alloc;
 
-               skb->protocol = eth_type_trans(skb, rx_ring->netdev);
+       new_buff = &rx_ring->rx_bi[nta];
 
-               i40e_rx_checksum(vsi, skb, rx_status, rx_error, rx_ptype);
+       /* update, and store next to alloc */
+       nta++;
+       rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
 
-               vlan_tag = rx_status & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)
-                        ? le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1)
-                        : 0;
-#ifdef I40E_FCOE
-               if (unlikely(
-                   i40e_rx_is_fcoe(rx_ptype) &&
-                   !i40e_fcoe_handle_offload(rx_ring, rx_desc, skb))) {
-                       dev_kfree_skb_any(skb);
-                       continue;
-               }
+       /* transfer page from old buffer to new buffer */
+       *new_buff = *old_buff;
+}
+
+/**
+ * i40e_page_is_reserved - check if reuse is possible
+ * @page: page struct to check
+ */
+static inline bool i40e_page_is_reserved(struct page *page)
+{
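+       /* pages from a remote NUMA node, or taken from the pfmemalloc
+        * emergency reserves, are not worth recycling back to the ring
+        */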
+       return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
+}
+
+/**
+ * i40e_add_rx_frag - Add contents of Rx buffer to sk_buff
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @rx_buffer: buffer containing page to add
+ * @rx_desc: descriptor containing length of buffer written by hardware
+ * @skb: sk_buff to place the data into
+ *
+ * This function will add the data contained in rx_buffer->page to the skb.
+ * This is done either through a direct copy if the data in the buffer is
+ * less than the skb header size, otherwise it will just attach the page as
+ * a frag to the skb.
+ *
+ * The function will then update the page offset if necessary and return
+ * true if the buffer can be reused by the adapter.
+ **/
+static bool i40e_add_rx_frag(struct i40e_ring *rx_ring,
+                            struct i40e_rx_buffer *rx_buffer,
+                            union i40e_rx_desc *rx_desc,
+                            struct sk_buff *skb)
+{
+       struct page *page = rx_buffer->page;
+       u64 qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+       unsigned int size = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
+                           I40E_RXD_QW1_LENGTH_PBUF_SHIFT;
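+       /* on 4K-page systems each page is split into two 2K half-buffers
+        * that are flipped between uses; on larger pages the offset simply
+        * advances by the cache-line-aligned size of the received data
+        */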
+#if (PAGE_SIZE < 8192)
+       unsigned int truesize = I40E_RXBUFFER_2048;
+#else
+       unsigned int truesize = ALIGN(size, L1_CACHE_BYTES);
+       unsigned int last_offset = PAGE_SIZE - I40E_RXBUFFER_2048;
 #endif
-               i40e_receive_skb(rx_ring, skb, vlan_tag);
 
-               rx_desc->wb.qword1.status_error_len = 0;
+       /* will the data fit in the skb we allocated? if so, just
+        * copy it as it is pretty small anyway
+        */
+       if ((size <= I40E_RX_HDR_SIZE) && !skb_is_nonlinear(skb)) {
+               unsigned char *va = page_address(page) + rx_buffer->page_offset;
 
-       } while (likely(total_rx_packets < budget));
+               memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
 
-       u64_stats_update_begin(&rx_ring->syncp);
-       rx_ring->stats.packets += total_rx_packets;
-       rx_ring->stats.bytes += total_rx_bytes;
-       u64_stats_update_end(&rx_ring->syncp);
-       rx_ring->q_vector->rx.total_packets += total_rx_packets;
-       rx_ring->q_vector->rx.total_bytes += total_rx_bytes;
+               /* page is not reserved, we can reuse buffer as-is */
+               if (likely(!i40e_page_is_reserved(page)))
+                       return true;
 
-       return failure ? budget : total_rx_packets;
+               /* this page cannot be reused so discard it */
+               __free_pages(page, 0);
+               return false;
+       }
+
+       skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+                       rx_buffer->page_offset, size, truesize);
+
+       /* avoid re-using remote pages */
+       if (unlikely(i40e_page_is_reserved(page)))
+               return false;
+
+#if (PAGE_SIZE < 8192)
+       /* if we are only owner of page we can reuse it */
+       if (unlikely(page_count(page) != 1))
+               return false;
+
+       /* flip page offset to other buffer */
+       rx_buffer->page_offset ^= truesize;
+#else
+       /* move offset up to the next cache line */
+       rx_buffer->page_offset += truesize;
+
+       if (rx_buffer->page_offset > last_offset)
+               return false;
+#endif
+
+       /* Even if we own the page, we are not allowed to use atomic_set()
+        * This would break get_page_unless_zero() users.
+        */
+       get_page(rx_buffer->page);
+
+       return true;
 }
 
 /**
- * i40e_clean_rx_irq_1buf - Reclaim resources after receive; single buffer
- * @rx_ring:  rx ring to clean
- * @budget:   how many cleans we're allowed
+ * i40evf_fetch_rx_buffer - Allocate skb and populate it
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @rx_desc: descriptor containing info written by hardware
  *
- * Returns number of packets cleaned
+ * This function allocates an skb on the fly, and populates it with the page
+ * data from the current receive descriptor, taking care to set up the skb
+ * correctly, as well as invoking the page recycle function when
+ * necessary.
+ */
+static inline
+struct sk_buff *i40evf_fetch_rx_buffer(struct i40e_ring *rx_ring,
+                                      union i40e_rx_desc *rx_desc)
+{
+       struct i40e_rx_buffer *rx_buffer;
+       struct sk_buff *skb;
+       struct page *page;
+
+       rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean];
+       page = rx_buffer->page;
+       prefetchw(page);
+
+       skb = rx_buffer->skb;
+
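+       /* a leftover skb from a previous non-EOP descriptor means we are
+        * continuing a frame that spans multiple buffers
+        */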
+       if (likely(!skb)) {
+               void *page_addr = page_address(page) + rx_buffer->page_offset;
+
+               /* prefetch first cache line of first page */
+               prefetch(page_addr);
+#if L1_CACHE_BYTES < 128
+               prefetch(page_addr + L1_CACHE_BYTES);
+#endif
+
+               /* allocate a skb to store the frags */
+               skb = __napi_alloc_skb(&rx_ring->q_vector->napi,
+                                      I40E_RX_HDR_SIZE,
+                                      GFP_ATOMIC | __GFP_NOWARN);
+               if (unlikely(!skb)) {
+                       rx_ring->rx_stats.alloc_buff_failed++;
+                       return NULL;
+               }
+
+               /* we will be copying the header into skb->data in
+                * pskb_may_pull, so it is in our interest to prefetch
+                * it now to avoid a possible cache miss
+                */
+               prefetchw(skb->data);
+       } else {
+               rx_buffer->skb = NULL;
+       }
+
+       /* we are reusing so sync this buffer for CPU use */
+       dma_sync_single_range_for_cpu(rx_ring->dev,
+                                     rx_buffer->dma,
+                                     rx_buffer->page_offset,
+                                     I40E_RXBUFFER_2048,
+                                     DMA_FROM_DEVICE);
+
+       /* pull page into skb */
+       if (i40e_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) {
+               /* hand second half of page back to the ring */
+               i40e_reuse_rx_page(rx_ring, rx_buffer);
+               rx_ring->rx_stats.page_reuse_count++;
+       } else {
+               /* we are not reusing the buffer so unmap it */
+               dma_unmap_page(rx_ring->dev, rx_buffer->dma, PAGE_SIZE,
+                              DMA_FROM_DEVICE);
+       }
+
+       /* clear contents of buffer_info */
+       rx_buffer->page = NULL;
+
+       return skb;
+}
+
+/**
+ * i40e_is_non_eop - process handling of non-EOP buffers
+ * @rx_ring: Rx ring being processed
+ * @rx_desc: Rx descriptor for current buffer
+ * @skb: Current socket buffer containing buffer in progress
+ *
+ * This function updates next to clean.  If the buffer is an EOP buffer
+ * this function exits returning false, otherwise it will place the
+ * sk_buff in the next buffer to be chained and return true indicating
+ * that this is in fact a non-EOP buffer.
  **/
-static int i40e_clean_rx_irq_1buf(struct i40e_ring *rx_ring, int budget)
+static bool i40e_is_non_eop(struct i40e_ring *rx_ring,
+                           union i40e_rx_desc *rx_desc,
+                           struct sk_buff *skb)
+{
+       u32 ntc = rx_ring->next_to_clean + 1;
+
+       /* fetch, update, and store next to clean */
+       ntc = (ntc < rx_ring->count) ? ntc : 0;
+       rx_ring->next_to_clean = ntc;
+
+       prefetch(I40E_RX_DESC(rx_ring, ntc));
+
+       /* if we are the last buffer then there is nothing else to do */
+#define I40E_RXD_EOF BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)
+       if (likely(i40e_test_staterr(rx_desc, I40E_RXD_EOF)))
+               return false;
+
+       /* place skb in next buffer to be received */
+       rx_ring->rx_bi[ntc].skb = skb;
+       rx_ring->rx_stats.non_eop_descs++;
+
+       return true;
+}
+
+/**
+ * i40e_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @budget: Total limit on number of packets to process
+ *
+ * This function provides a "bounce buffer" approach to Rx interrupt
+ * processing.  The advantage is that on systems where IOMMU access is
+ * expensive, we avoid repeated map/unmap overhead by keeping the pages
+ * mapped and recycling them.
+ *
+ * Returns amount of work completed
+ **/
+static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 {
        unsigned int total_rx_bytes = 0, total_rx_packets = 0;
        u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
-       struct i40e_vsi *vsi = rx_ring->vsi;
-       union i40e_rx_desc *rx_desc;
-       u32 rx_error, rx_status;
-       u16 rx_packet_len;
        bool failure = false;
-       u8 rx_ptype;
-       u64 qword;
-       u16 i;
 
-       do {
-               struct i40e_rx_buffer *rx_bi;
+       while (likely(total_rx_packets < budget)) {
+               union i40e_rx_desc *rx_desc;
                struct sk_buff *skb;
+               u32 rx_status;
                u16 vlan_tag;
+               u8 rx_ptype;
+               u64 qword;
+
                /* return some buffers to hardware, one at a time is too slow */
                if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
                        failure = failure ||
-                                 i40evf_alloc_rx_buffers_1buf(rx_ring,
-                                                              cleaned_count);
+                                 i40evf_alloc_rx_buffers(rx_ring, cleaned_count);
                        cleaned_count = 0;
                }
 
-               i = rx_ring->next_to_clean;
-               rx_desc = I40E_RX_DESC(rx_ring, i);
+               rx_desc = I40E_RX_DESC(rx_ring, rx_ring->next_to_clean);
+
                qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+               rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
+                          I40E_RXD_QW1_PTYPE_SHIFT;
                rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
-                       I40E_RXD_QW1_STATUS_SHIFT;
+                           I40E_RXD_QW1_STATUS_SHIFT;
 
                if (!(rx_status & BIT(I40E_RX_DESC_STATUS_DD_SHIFT)))
                        break;
 
+               /* status_error_len will always be zero for unused descriptors
+                * because it's cleared in cleanup, and overlaps with hdr_addr
+                * which is always zero because packet split isn't used; if the
+                * hardware wrote DD then it will be non-zero
+                */
+               if (!rx_desc->wb.qword1.status_error_len)
+                       break;
+
                /* This memory barrier is needed to keep us from reading
                 * any other fields out of the rx_desc until we know the
                 * DD bit is set.
                 */
                dma_rmb();
 
-               rx_bi = &rx_ring->rx_bi[i];
-               skb = rx_bi->skb;
-               prefetch(skb->data);
-
-               rx_packet_len = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
-                               I40E_RXD_QW1_LENGTH_PBUF_SHIFT;
-
-               rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >>
-                          I40E_RXD_QW1_ERROR_SHIFT;
-               rx_error &= ~BIT(I40E_RX_DESC_ERROR_HBO_SHIFT);
+               skb = i40evf_fetch_rx_buffer(rx_ring, rx_desc);
+               if (!skb)
+                       break;
 
-               rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
-                          I40E_RXD_QW1_PTYPE_SHIFT;
-               rx_bi->skb = NULL;
                cleaned_count++;
 
-               /* Get the header and possibly the whole packet
-                * If this is an skb from previous receive dma will be 0
-                */
-               skb_put(skb, rx_packet_len);
-               dma_unmap_single(rx_ring->dev, rx_bi->dma, rx_ring->rx_buf_len,
-                                DMA_FROM_DEVICE);
-               rx_bi->dma = 0;
-
-               I40E_RX_INCREMENT(rx_ring, i);
-
-               if (unlikely(
-                   !(rx_status & BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)))) {
-                       rx_ring->rx_stats.non_eop_descs++;
+               if (i40e_is_non_eop(rx_ring, rx_desc, skb))
                        continue;
-               }
 
-               /* ERR_MASK will only have valid bits if EOP set */
-               if (unlikely(rx_error & BIT(I40E_RX_DESC_ERROR_RXE_SHIFT))) {
+               /* ERR_MASK will only have valid bits if EOP set, and
+                * what we are doing here is actually checking
+                * I40E_RX_DESC_ERROR_RXE_SHIFT, since it is the zeroth bit in
+                * the error field
+                */
+               if (unlikely(i40e_test_staterr(rx_desc, BIT(I40E_RXD_QW1_ERROR_SHIFT)))) {
                        dev_kfree_skb_any(skb);
                        continue;
                }
 
-               i40e_rx_hash(rx_ring, rx_desc, skb, rx_ptype);
+               if (i40e_cleanup_headers(rx_ring, skb))
+                       continue;
+
                /* probably a little skewed due to removing CRC */
                total_rx_bytes += skb->len;
-               total_rx_packets++;
 
-               skb->protocol = eth_type_trans(skb, rx_ring->netdev);
+               /* populate checksum, VLAN, and protocol */
+               i40evf_process_skb_fields(rx_ring, rx_desc, skb, rx_ptype);
 
-               i40e_rx_checksum(vsi, skb, rx_status, rx_error, rx_ptype);
+               vlan_tag = (qword & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
+                          le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1) : 0;
 
-               vlan_tag = rx_status & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)
-                        ? le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1)
-                        : 0;
                i40e_receive_skb(rx_ring, skb, vlan_tag);
 
-               rx_desc->wb.qword1.status_error_len = 0;
-       } while (likely(total_rx_packets < budget));
+               /* update budget accounting */
+               total_rx_packets++;
+       }
 
        u64_stats_update_begin(&rx_ring->syncp);
        rx_ring->stats.packets += total_rx_packets;
@@ -1292,6 +1289,7 @@ static int i40e_clean_rx_irq_1buf(struct i40e_ring *rx_ring, int budget)
        rx_ring->q_vector->rx.total_packets += total_rx_packets;
        rx_ring->q_vector->rx.total_bytes += total_rx_bytes;
 
+       /* guarantee a trip back through this routine if there was a failure */
        return failure ? budget : total_rx_packets;
 }
 
@@ -1433,12 +1431,7 @@ int i40evf_napi_poll(struct napi_struct *napi, int budget)
        budget_per_ring = max(budget/q_vector->num_ringpairs, 1);
 
        i40e_for_each_ring(ring, q_vector->rx) {
-               int cleaned;
-
-               if (ring_is_ps_enabled(ring))
-                       cleaned = i40e_clean_rx_irq_ps(ring, budget_per_ring);
-               else
-                       cleaned = i40e_clean_rx_irq_1buf(ring, budget_per_ring);
+               int cleaned = i40e_clean_rx_irq(ring, budget_per_ring);
 
                work_done += cleaned;
                /* if we clean as many as budgeted, we must not be done */
@@ -1564,9 +1557,16 @@ static int i40e_tso(struct sk_buff *skb, u8 *hdr_len, u64 *cd_type_cmd_tso_mss)
                ip.v6->payload_len = 0;
        }
 
-       if (skb_shinfo(skb)->gso_type & (SKB_GSO_UDP_TUNNEL | SKB_GSO_GRE |
+       if (skb_shinfo(skb)->gso_type & (SKB_GSO_GRE |
+                                        SKB_GSO_GRE_CSUM |
+                                        SKB_GSO_IPIP |
+                                        SKB_GSO_SIT |
+                                        SKB_GSO_UDP_TUNNEL |
                                         SKB_GSO_UDP_TUNNEL_CSUM)) {
-               if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM) {
+               if (!(skb_shinfo(skb)->gso_type & SKB_GSO_PARTIAL) &&
+                   (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM)) {
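+                       /* clear the outer UDP length; it is expected to be
+                        * repopulated per segment by the hardware offload
+                        */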
+                       l4.udp->len = 0;
+
                        /* determine offset of outer transport header */
                        l4_offset = l4.hdr - skb->data;
 
@@ -1665,13 +1665,6 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
                                                 &l4_proto, &frag_off);
                }
 
-               /* compute outer L3 header size */
-               tunnel |= ((l4.hdr - ip.hdr) / 4) <<
-                         I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT;
-
-               /* switch IP header pointer from outer to inner header */
-               ip.hdr = skb_inner_network_header(skb);
-
                /* define outer transport */
                switch (l4_proto) {
                case IPPROTO_UDP:
@@ -1682,6 +1675,11 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
                        tunnel |= I40E_TXD_CTX_GRE_TUNNELING;
                        *tx_flags |= I40E_TX_FLAGS_VXLAN_TUNNEL;
                        break;
+               case IPPROTO_IPIP:
+               case IPPROTO_IPV6:
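+                       /* IP-in-IP tunnels carry no outer transport header;
+                        * point l4.hdr at the inner network header so the
+                        * outer L3 and tunnel length math below still works
+                        */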
+                       *tx_flags |= I40E_TX_FLAGS_VXLAN_TUNNEL;
+                       l4.hdr = skb_inner_network_header(skb);
+                       break;
                default:
                        if (*tx_flags & I40E_TX_FLAGS_TSO)
                                return -1;
@@ -1690,12 +1688,20 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
                        return 0;
                }
 
+               /* compute outer L3 header size */
+               tunnel |= ((l4.hdr - ip.hdr) / 4) <<
+                         I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT;
+
+               /* switch IP header pointer from outer to inner header */
+               ip.hdr = skb_inner_network_header(skb);
+
                /* compute tunnel header size */
                tunnel |= ((ip.hdr - l4.hdr) / 2) <<
                          I40E_TXD_CTX_QW0_NATLEN_SHIFT;
 
                /* indicate if we need to offload outer UDP header */
                if ((*tx_flags & I40E_TX_FLAGS_TSO) &&
+                   !(skb_shinfo(skb)->gso_type & SKB_GSO_PARTIAL) &&
                    (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM))
                        tunnel |= I40E_TXD_CTX_QW0_L4T_CS_MASK;
 
index 54b52e8..0112277 100644
@@ -102,8 +102,8 @@ enum i40e_dyn_idx_t {
        (((pf)->flags & I40E_FLAG_MULTIPLE_TCP_UDP_RSS_PCTYPE) ? \
          I40E_DEFAULT_RSS_HENA_EXPANDED : I40E_DEFAULT_RSS_HENA)
 
-/* Supported Rx Buffer Sizes */
-#define I40E_RXBUFFER_512   512    /* Used for packet split */
+/* Supported Rx Buffer Sizes (a multiple of 128) */
+#define I40E_RXBUFFER_256   256
 #define I40E_RXBUFFER_2048  2048
 #define I40E_RXBUFFER_3072  3072   /* For FCoE MTU of 2158 */
 #define I40E_RXBUFFER_4096  4096
@@ -114,9 +114,28 @@ enum i40e_dyn_idx_t {
  * reserve 2 more, and skb_shared_info adds an additional 384 bytes more,
  * this adds up to 512 bytes of extra data meaning the smallest allocation
  * we could have is 1K.
- * i.e. RXBUFFER_512 --> size-1024 slab
+ * i.e. RXBUFFER_256 --> 960 byte skb (size-1024 slab)
+ * i.e. RXBUFFER_512 --> 1216 byte skb (size-2048 slab)
  */
-#define I40E_RX_HDR_SIZE  I40E_RXBUFFER_512
+#define I40E_RX_HDR_SIZE I40E_RXBUFFER_256
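+/* with the 16-byte descriptor option removed, the VF always uses
+ * 32-byte Rx descriptors
+ */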
+#define i40e_rx_desc i40e_32byte_rx_desc
+
+/**
+ * i40e_test_staterr - tests bits in Rx descriptor status and error fields
+ * @rx_desc: pointer to receive descriptor (in le64 format)
+ * @stat_err_bits: value to mask
+ *
+ * This function does some fast chicanery in order to return the
+ * value of the mask which is really only used for boolean tests.
+ * The status_error_len doesn't need to be shifted because it begins
+ * at offset zero.
+ */
+static inline bool i40e_test_staterr(union i40e_rx_desc *rx_desc,
+                                    const u64 stat_err_bits)
+{
+       return !!(rx_desc->wb.qword1.status_error_len &
+                 cpu_to_le64(stat_err_bits));
+}
 
 /* How many Rx Buffers do we bundle into one write to the hardware ? */
 #define I40E_RX_BUFFER_WRITE   16      /* Must be power of 2 */
@@ -142,8 +161,6 @@ enum i40e_dyn_idx_t {
                prefetch((n));                          \
        } while (0)
 
-#define i40e_rx_desc i40e_32byte_rx_desc
-
 #define I40E_MAX_BUFFER_TXD    8
 #define I40E_MIN_TX_LEN                17
 
@@ -212,10 +229,8 @@ struct i40e_tx_buffer {
 
 struct i40e_rx_buffer {
        struct sk_buff *skb;
-       void *hdr_buf;
        dma_addr_t dma;
        struct page *page;
-       dma_addr_t page_dma;
        unsigned int page_offset;
 };
 
@@ -244,22 +259,18 @@ struct i40e_rx_queue_stats {
 enum i40e_ring_state_t {
        __I40E_TX_FDIR_INIT_DONE,
        __I40E_TX_XPS_INIT_DONE,
-       __I40E_RX_PS_ENABLED,
-       __I40E_RX_16BYTE_DESC_ENABLED,
 };
 
-#define ring_is_ps_enabled(ring) \
-       test_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
-#define set_ring_ps_enabled(ring) \
-       set_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
-#define clear_ring_ps_enabled(ring) \
-       clear_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
-#define ring_is_16byte_desc_enabled(ring) \
-       test_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
-#define set_ring_16byte_desc_enabled(ring) \
-       set_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
-#define clear_ring_16byte_desc_enabled(ring) \
-       clear_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
+/* some useful defines for virtchannel interface, which
+ * is the only remaining user of header split
+ */
+#define I40E_RX_DTYPE_NO_SPLIT      0
+#define I40E_RX_DTYPE_HEADER_SPLIT  1
+#define I40E_RX_DTYPE_SPLIT_ALWAYS  2
+#define I40E_RX_SPLIT_L2      0x1
+#define I40E_RX_SPLIT_IP      0x2
+#define I40E_RX_SPLIT_TCP_UDP 0x4
+#define I40E_RX_SPLIT_SCTP    0x8
 
 /* struct that defines a descriptor ring, associated with a VSI */
 struct i40e_ring {
@@ -278,16 +289,7 @@ struct i40e_ring {
 
        u16 count;                      /* Number of descriptors */
        u16 reg_idx;                    /* HW register index of the ring */
-       u16 rx_hdr_len;
        u16 rx_buf_len;
-       u8  dtype;
-#define I40E_RX_DTYPE_NO_SPLIT      0
-#define I40E_RX_DTYPE_HEADER_SPLIT  1
-#define I40E_RX_DTYPE_SPLIT_ALWAYS  2
-#define I40E_RX_SPLIT_L2      0x1
-#define I40E_RX_SPLIT_IP      0x2
-#define I40E_RX_SPLIT_TCP_UDP 0x4
-#define I40E_RX_SPLIT_SCTP    0x8
 
        /* used in interrupt processing */
        u16 next_to_use;
@@ -319,6 +321,7 @@ struct i40e_ring {
        struct i40e_q_vector *q_vector; /* Backreference to associated vector */
 
        struct rcu_head rcu;            /* to avoid race on free */
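+       /* index of the next buffer slot that will receive a recycled page */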
+       u16 next_to_alloc;
 } ____cacheline_internodealigned_in_smp;
 
 enum i40e_latency_range {
@@ -342,9 +345,7 @@ struct i40e_ring_container {
 #define i40e_for_each_ring(pos, head) \
        for (pos = (head).ring; pos != NULL; pos = pos->next)
 
-bool i40evf_alloc_rx_buffers_ps(struct i40e_ring *rxr, u16 cleaned_count);
-bool i40evf_alloc_rx_buffers_1buf(struct i40e_ring *rxr, u16 cleaned_count);
-void i40evf_alloc_rx_headers(struct i40e_ring *rxr);
+bool i40evf_alloc_rx_buffers(struct i40e_ring *rxr, u16 cleaned_count);
 netdev_tx_t i40evf_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
 void i40evf_clean_tx_ring(struct i40e_ring *tx_ring);
 void i40evf_clean_rx_ring(struct i40e_ring *rx_ring);
index 4a78c18..97f96e0 100644
@@ -36,7 +36,7 @@
 #include "i40e_devids.h"
 
 /* I40E_MASK is a macro used on 32 bit registers */
-#define I40E_MASK(mask, shift) (mask << shift)
+#define I40E_MASK(mask, shift) ((u32)(mask) << (shift))
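+/* the cast keeps the shift in unsigned 32-bit arithmetic, so a mask
+ * reaching bit 31 cannot sign-extend when the result is widened
+ */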
 
 #define I40E_MAX_VSI_QP                        16
 #define I40E_MAX_VF_VSI                        3
@@ -258,6 +258,11 @@ struct i40e_hw_capabilities {
 #define I40E_FLEX10_STATUS_DCC_ERROR   0x1
 #define I40E_FLEX10_STATUS_VC_MODE     0x2
 
+       bool sec_rev_disabled;
+       bool update_disabled;
+#define I40E_NVM_MGMT_SEC_REV_DISABLED 0x1
+#define I40E_NVM_MGMT_UPDATE_DISABLED  0x2
+
        bool mgmt_cem;
        bool ieee_1588;
        bool iwarp;
@@ -523,6 +528,7 @@ struct i40e_hw {
        struct i40e_aq_desc nvm_wb_desc;
        struct i40e_virt_mem nvm_buff;
        bool nvm_release_on_done;
+       u16 nvm_wait_opcode;
 
        /* HMC info */
        struct i40e_hmc_info hmc; /* HMC info struct */
index e657ecc..fa044a9 100644
@@ -67,8 +67,6 @@ struct i40e_vsi {
        u16 rx_itr_setting;
        u16 tx_itr_setting;
        u16 qs_handle;
-       u8 *rss_hkey_user; /* User configured hash keys */
-       u8 *rss_lut_user;  /* User configured lookup table entries */
 };
 
 /* How many Rx Buffers do we bundle into one write to the hardware ? */
@@ -82,9 +80,6 @@ struct i40e_vsi {
 #define I40EVF_REQ_DESCRIPTOR_MULTIPLE  32
 
 /* Supported Rx Buffer Sizes */
-#define I40EVF_RXBUFFER_64    64     /* Used for packet split */
-#define I40EVF_RXBUFFER_128   128    /* Used for packet split */
-#define I40EVF_RXBUFFER_256   256    /* Used for packet split */
 #define I40EVF_RXBUFFER_2048  2048
 #define I40EVF_MAX_RXBUFFER   16384  /* largest size for single descriptor */
 #define I40EVF_MAX_AQ_BUF_SIZE    4096
@@ -210,9 +205,6 @@ struct i40evf_adapter {
 
        u32 flags;
 #define I40EVF_FLAG_RX_CSUM_ENABLED              BIT(0)
-#define I40EVF_FLAG_RX_1BUF_CAPABLE              BIT(1)
-#define I40EVF_FLAG_RX_PS_CAPABLE                BIT(2)
-#define I40EVF_FLAG_RX_PS_ENABLED                BIT(3)
 #define I40EVF_FLAG_IMIR_ENABLED                 BIT(5)
 #define I40EVF_FLAG_MQ_CAPABLE                   BIT(6)
 #define I40EVF_FLAG_NEED_LINK_UPDATE             BIT(7)
@@ -222,6 +214,7 @@ struct i40evf_adapter {
 #define I40EVF_FLAG_WB_ON_ITR_CAPABLE          BIT(11)
 #define I40EVF_FLAG_OUTER_UDP_CSUM_CAPABLE     BIT(12)
 #define I40EVF_FLAG_ADDR_SET_BY_PF             BIT(13)
+#define I40EVF_FLAG_PROMISC_ON                 BIT(15)
 /* duplicates for common code */
 #define I40E_FLAG_FDIR_ATR_ENABLED              0
 #define I40E_FLAG_DCB_ENABLED                   0
@@ -239,8 +232,15 @@ struct i40evf_adapter {
 #define I40EVF_FLAG_AQ_CONFIGURE_QUEUES                BIT(6)
 #define I40EVF_FLAG_AQ_MAP_VECTORS             BIT(7)
 #define I40EVF_FLAG_AQ_HANDLE_RESET            BIT(8)
-#define I40EVF_FLAG_AQ_CONFIGURE_RSS           BIT(9)
+#define I40EVF_FLAG_AQ_CONFIGURE_RSS           BIT(9)  /* direct AQ config */
 #define I40EVF_FLAG_AQ_GET_CONFIG              BIT(10)
+/* Newer style, RSS done by the PF so we can ignore hardware vagaries. */
+#define I40EVF_FLAG_AQ_GET_HENA                        BIT(11)
+#define I40EVF_FLAG_AQ_SET_HENA                        BIT(12)
+#define I40EVF_FLAG_AQ_SET_RSS_KEY             BIT(13)
+#define I40EVF_FLAG_AQ_SET_RSS_LUT             BIT(14)
+#define I40EVF_FLAG_AQ_REQUEST_PROMISC         BIT(15)
+#define I40EVF_FLAG_AQ_RELEASE_PROMISC         BIT(16)
 
        /* OS defined structs */
        struct net_device *netdev;
@@ -256,10 +256,18 @@ struct i40evf_adapter {
        bool netdev_registered;
        bool link_up;
        enum i40e_virtchnl_ops current_op;
-#define CLIENT_ENABLED(_a) ((_a)->vf_res->vf_offload_flags & \
-                           I40E_VIRTCHNL_VF_OFFLOAD_IWARP)
+#define CLIENT_ENABLED(_a) ((_a)->vf_res ? \
+                           (_a)->vf_res->vf_offload_flags & \
+                               I40E_VIRTCHNL_VF_OFFLOAD_IWARP : \
+                           0)
+/* RSS by the PF should be preferred over RSS via other methods. */
+#define RSS_PF(_a) ((_a)->vf_res->vf_offload_flags & \
+                   I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF)
 #define RSS_AQ(_a) ((_a)->vf_res->vf_offload_flags & \
                    I40E_VIRTCHNL_VF_OFFLOAD_RSS_AQ)
+#define RSS_REG(_a) (!((_a)->vf_res->vf_offload_flags & \
+                      (I40E_VIRTCHNL_VF_OFFLOAD_RSS_AQ | \
+                       I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF)))
 #define VLAN_ALLOWED(_a) ((_a)->vf_res->vf_offload_flags & \
                          I40E_VIRTCHNL_VF_OFFLOAD_VLAN)
        struct i40e_virtchnl_vf_resource *vf_res; /* incl. all VSIs */
@@ -271,11 +279,16 @@ struct i40evf_adapter {
        struct i40e_eth_stats current_stats;
        struct i40e_vsi vsi;
        u32 aq_wait_count;
+       /* RSS stuff */
+       u64 hena;
+       u16 rss_key_size;
+       u16 rss_lut_size;
+       u8 *rss_key;
+       u8 *rss_lut;
 };
 
 
 /* Ethtool Private Flags */
-#define I40EVF_PRIV_FLAGS_PS           BIT(0)
 
 /* needed by i40evf_ethtool.c */
 extern char i40evf_driver_name[];
@@ -314,11 +327,12 @@ void i40evf_del_vlans(struct i40evf_adapter *adapter);
 void i40evf_set_promiscuous(struct i40evf_adapter *adapter, int flags);
 void i40evf_request_stats(struct i40evf_adapter *adapter);
 void i40evf_request_reset(struct i40evf_adapter *adapter);
+void i40evf_get_hena(struct i40evf_adapter *adapter);
+void i40evf_set_hena(struct i40evf_adapter *adapter);
+void i40evf_set_rss_key(struct i40evf_adapter *adapter);
+void i40evf_set_rss_lut(struct i40evf_adapter *adapter);
 void i40evf_virtchnl_completion(struct i40evf_adapter *adapter,
                                enum i40e_virtchnl_ops v_opcode,
                                i40e_status v_retval, u8 *msg, u16 msglen);
-int i40evf_config_rss(struct i40e_vsi *vsi, const u8 *seed, u8 *lut,
-                     u16 lut_size);
-int i40evf_get_rss(struct i40e_vsi *vsi, const u8 *seed, u8 *lut,
-                  u16 lut_size);
+int i40evf_config_rss(struct i40evf_adapter *adapter);
 #endif /* _I40EVF_H_ */
index dd4430a..c9c202f 100644
@@ -63,12 +63,6 @@ static const struct i40evf_stats i40evf_gstrings_stats[] = {
 #define I40EVF_STATS_LEN(_dev) \
        (I40EVF_GLOBAL_STATS_LEN + I40EVF_QUEUE_STATS_LEN(_dev))
 
-static const char i40evf_priv_flags_strings[][ETH_GSTRING_LEN] = {
-       "packet-split",
-};
-
-#define I40EVF_PRIV_FLAGS_STR_LEN ARRAY_SIZE(i40evf_priv_flags_strings)
-
 /**
  * i40evf_get_settings - Get Link Speed and Duplex settings
  * @netdev: network interface device structure
@@ -103,8 +97,6 @@ static int i40evf_get_sset_count(struct net_device *netdev, int sset)
 {
        if (sset == ETH_SS_STATS)
                return I40EVF_STATS_LEN(netdev);
-       else if (sset == ETH_SS_PRIV_FLAGS)
-               return I40EVF_PRIV_FLAGS_STR_LEN;
        else
                return -EINVAL;
 }
@@ -170,12 +162,6 @@ static void i40evf_get_strings(struct net_device *netdev, u32 sset, u8 *data)
                        snprintf(p, ETH_GSTRING_LEN, "rx-%u.bytes", i);
                        p += ETH_GSTRING_LEN;
                }
-       } else if (sset == ETH_SS_PRIV_FLAGS) {
-               for (i = 0; i < I40EVF_PRIV_FLAGS_STR_LEN; i++) {
-                       memcpy(data, i40evf_priv_flags_strings[i],
-                              ETH_GSTRING_LEN);
-                       data += ETH_GSTRING_LEN;
-               }
        }
 }
 
@@ -225,7 +211,6 @@ static void i40evf_get_drvinfo(struct net_device *netdev,
        strlcpy(drvinfo->version, i40evf_driver_version, 32);
        strlcpy(drvinfo->fw_version, "N/A", 4);
        strlcpy(drvinfo->bus_info, pci_name(adapter->pdev), 32);
-       drvinfo->n_priv_flags = I40EVF_PRIV_FLAGS_STR_LEN;
 }
 
 /**
@@ -377,63 +362,6 @@ static int i40evf_set_coalesce(struct net_device *netdev,
        return 0;
 }
 
-/**
- * i40e_get_rss_hash_opts - Get RSS hash Input Set for each flow type
- * @adapter: board private structure
- * @cmd: ethtool rxnfc command
- *
- * Returns Success if the flow is supported, else Invalid Input.
- **/
-static int i40evf_get_rss_hash_opts(struct i40evf_adapter *adapter,
-                                   struct ethtool_rxnfc *cmd)
-{
-       struct i40e_hw *hw = &adapter->hw;
-       u64 hena = (u64)rd32(hw, I40E_VFQF_HENA(0)) |
-                  ((u64)rd32(hw, I40E_VFQF_HENA(1)) << 32);
-
-       /* We always hash on IP src and dest addresses */
-       cmd->data = RXH_IP_SRC | RXH_IP_DST;
-
-       switch (cmd->flow_type) {
-       case TCP_V4_FLOW:
-               if (hena & BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP))
-                       cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
-               break;
-       case UDP_V4_FLOW:
-               if (hena & BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_UDP))
-                       cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
-               break;
-
-       case SCTP_V4_FLOW:
-       case AH_ESP_V4_FLOW:
-       case AH_V4_FLOW:
-       case ESP_V4_FLOW:
-       case IPV4_FLOW:
-               break;
-
-       case TCP_V6_FLOW:
-               if (hena & BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_TCP))
-                       cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
-               break;
-       case UDP_V6_FLOW:
-               if (hena & BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_UDP))
-                       cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
-               break;
-
-       case SCTP_V6_FLOW:
-       case AH_ESP_V6_FLOW:
-       case AH_V6_FLOW:
-       case ESP_V6_FLOW:
-       case IPV6_FLOW:
-               break;
-       default:
-               cmd->data = 0;
-               return -EINVAL;
-       }
-
-       return 0;
-}
-
 /**
  * i40evf_get_rxnfc - command to get RX flow classification rules
  * @netdev: network interface device structure
@@ -454,145 +382,8 @@ static int i40evf_get_rxnfc(struct net_device *netdev,
                ret = 0;
                break;
        case ETHTOOL_GRXFH:
-               ret = i40evf_get_rss_hash_opts(adapter, cmd);
-               break;
-       default:
-               break;
-       }
-
-       return ret;
-}
-
-/**
- * i40evf_set_rss_hash_opt - Enable/Disable flow types for RSS hash
- * @adapter: board private structure
- * @cmd: ethtool rxnfc command
- *
- * Returns Success if the flow input set is supported.
- **/
-static int i40evf_set_rss_hash_opt(struct i40evf_adapter *adapter,
-                                  struct ethtool_rxnfc *nfc)
-{
-       struct i40e_hw *hw = &adapter->hw;
-       u32 flags = adapter->vf_res->vf_offload_flags;
-
-       u64 hena = (u64)rd32(hw, I40E_VFQF_HENA(0)) |
-                  ((u64)rd32(hw, I40E_VFQF_HENA(1)) << 32);
-
-       /* RSS does not support anything other than hashing
-        * to queues on src and dst IPs and ports
-        */
-       if (nfc->data & ~(RXH_IP_SRC | RXH_IP_DST |
-                         RXH_L4_B_0_1 | RXH_L4_B_2_3))
-               return -EINVAL;
-
-       /* We need at least the IP SRC and DEST fields for hashing */
-       if (!(nfc->data & RXH_IP_SRC) ||
-           !(nfc->data & RXH_IP_DST))
-               return -EINVAL;
-
-       switch (nfc->flow_type) {
-       case TCP_V4_FLOW:
-               if (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
-                       if (flags & I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2)
-                               hena |=
-                          BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK);
-
-                       hena |= BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP);
-               } else {
-                       return -EINVAL;
-               }
-               break;
-       case TCP_V6_FLOW:
-               if (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
-                       if (flags & I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2)
-                               hena |=
-                          BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK);
-
-                       hena |= BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_TCP);
-               } else {
-                       return -EINVAL;
-               }
-               break;
-       case UDP_V4_FLOW:
-               if (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
-                       if (flags & I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2)
-                               hena |=
-                           BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP) |
-                           BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP);
-
-                       hena |= (BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_UDP) |
-                                BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV4));
-               } else {
-                       return -EINVAL;
-               }
-               break;
-       case UDP_V6_FLOW:
-               if (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
-                       if (flags & I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2)
-                               hena |=
-                           BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP) |
-                           BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP);
-
-                       hena |= (BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_UDP) |
-                                BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV6));
-               } else {
-                       return -EINVAL;
-               }
-               break;
-       case AH_ESP_V4_FLOW:
-       case AH_V4_FLOW:
-       case ESP_V4_FLOW:
-       case SCTP_V4_FLOW:
-               if ((nfc->data & RXH_L4_B_0_1) ||
-                   (nfc->data & RXH_L4_B_2_3))
-                       return -EINVAL;
-               hena |= BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_OTHER);
-               break;
-       case AH_ESP_V6_FLOW:
-       case AH_V6_FLOW:
-       case ESP_V6_FLOW:
-       case SCTP_V6_FLOW:
-               if ((nfc->data & RXH_L4_B_0_1) ||
-                   (nfc->data & RXH_L4_B_2_3))
-                       return -EINVAL;
-               hena |= BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_OTHER);
-               break;
-       case IPV4_FLOW:
-               hena |= (BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_OTHER) |
-                        BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV4));
-               break;
-       case IPV6_FLOW:
-               hena |= (BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_OTHER) |
-                        BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV6));
-               break;
-       default:
-               return -EINVAL;
-       }
-
-       wr32(hw, I40E_VFQF_HENA(0), (u32)hena);
-       wr32(hw, I40E_VFQF_HENA(1), (u32)(hena >> 32));
-       i40e_flush(hw);
-
-       return 0;
-}
-
-/**
- * i40evf_set_rxnfc - command to set RX flow classification rules
- * @netdev: network interface device structure
- * @cmd: ethtool rxnfc command
- *
- * Returns Success if the command is supported.
- **/
-static int i40evf_set_rxnfc(struct net_device *netdev,
-                           struct ethtool_rxnfc *cmd)
-{
-       struct i40evf_adapter *adapter = netdev_priv(netdev);
-       int ret = -EOPNOTSUPP;
-
-       switch (cmd->cmd) {
-       case ETHTOOL_SRXFH:
-               ret = i40evf_set_rss_hash_opt(adapter, cmd);
+               netdev_info(netdev,
+                           "RSS hash info is not available to the VF, use the PF.\n");
                break;
        default:
                break;
@@ -600,7 +391,6 @@ static int i40evf_set_rxnfc(struct net_device *netdev,
 
        return ret;
 }
-
 /**
  * i40evf_get_channels: get the number of channels supported by the device
  * @netdev: network interface device structure
@@ -623,6 +413,19 @@ static void i40evf_get_channels(struct net_device *netdev,
        ch->combined_count = adapter->num_active_queues;
 }
 
+/**
+ * i40evf_get_rxfh_key_size - get the RSS hash key size
+ * @netdev: network interface device structure
+ *
+ * Returns the RSS hash key size.
+ **/
+static u32 i40evf_get_rxfh_key_size(struct net_device *netdev)
+{
+       struct i40evf_adapter *adapter = netdev_priv(netdev);
+
+       return adapter->rss_key_size;
+}
+
 /**
  * i40evf_get_rxfh_indir_size - get the rx flow hash indirection table size
  * @netdev: network interface device structure
@@ -631,7 +434,9 @@ static void i40evf_get_channels(struct net_device *netdev,
  **/
 static u32 i40evf_get_rxfh_indir_size(struct net_device *netdev)
 {
-       return (I40E_VFQF_HLUT_MAX_INDEX + 1) * 4;
+       struct i40evf_adapter *adapter = netdev_priv(netdev);
+
+       return adapter->rss_lut_size;
 }
 
 /**
@@ -646,9 +451,6 @@ static int i40evf_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
                           u8 *hfunc)
 {
        struct i40evf_adapter *adapter = netdev_priv(netdev);
-       struct i40e_vsi *vsi = &adapter->vsi;
-       u8 *seed = NULL, *lut;
-       int ret;
        u16 i;
 
        if (hfunc)
@@ -656,24 +458,13 @@ static int i40evf_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
        if (!indir)
                return 0;
 
-       seed = key;
-
-       lut = kzalloc(I40EVF_HLUT_ARRAY_SIZE, GFP_KERNEL);
-       if (!lut)
-               return -ENOMEM;
-
-       ret = i40evf_get_rss(vsi, seed, lut, I40EVF_HLUT_ARRAY_SIZE);
-       if (ret)
-               goto out;
+       memcpy(key, adapter->rss_key, adapter->rss_key_size);
 
        /* Each 32 bits pointed by 'indir' is stored with a lut entry */
-       for (i = 0; i < I40EVF_HLUT_ARRAY_SIZE; i++)
-               indir[i] = (u32)lut[i];
+       for (i = 0; i < adapter->rss_lut_size; i++)
+               indir[i] = (u32)adapter->rss_lut[i];
 
-out:
-       kfree(lut);
-
-       return ret;
+       return 0;
 }
 
 /**
@@ -689,8 +480,6 @@ static int i40evf_set_rxfh(struct net_device *netdev, const u32 *indir,
                           const u8 *key, const u8 hfunc)
 {
        struct i40evf_adapter *adapter = netdev_priv(netdev);
-       struct i40e_vsi *vsi = &adapter->vsi;
-       u8 *seed = NULL;
        u16 i;
 
        /* We do not allow change in unsupported parameters */
@@ -701,76 +490,14 @@ static int i40evf_set_rxfh(struct net_device *netdev, const u32 *indir,
                return 0;
 
        if (key) {
-               if (!vsi->rss_hkey_user) {
-                       vsi->rss_hkey_user = kzalloc(I40EVF_HKEY_ARRAY_SIZE,
-                                                    GFP_KERNEL);
-                       if (!vsi->rss_hkey_user)
-                               return -ENOMEM;
-               }
-               memcpy(vsi->rss_hkey_user, key, I40EVF_HKEY_ARRAY_SIZE);
-               seed = vsi->rss_hkey_user;
-       }
-       if (!vsi->rss_lut_user) {
-               vsi->rss_lut_user = kzalloc(I40EVF_HLUT_ARRAY_SIZE,
-                                           GFP_KERNEL);
-               if (!vsi->rss_lut_user)
-                       return -ENOMEM;
+               memcpy(adapter->rss_key, key, adapter->rss_key_size);
        }
 
        /* Each 32 bits pointed by 'indir' is stored with a lut entry */
-       for (i = 0; i < I40EVF_HLUT_ARRAY_SIZE; i++)
-               vsi->rss_lut_user[i] = (u8)(indir[i]);
-
-       return i40evf_config_rss(vsi, seed, vsi->rss_lut_user,
-                                I40EVF_HLUT_ARRAY_SIZE);
-}
-
-/**
- * i40evf_get_priv_flags - report device private flags
- * @dev: network interface device structure
- *
- * The get string set count and the string set should be matched for each
- * flag returned.  Add new strings for each flag to the i40e_priv_flags_strings
- * array.
- *
- * Returns a u32 bitmap of flags.
- **/
-static u32 i40evf_get_priv_flags(struct net_device *dev)
-{
-       struct i40evf_adapter *adapter = netdev_priv(dev);
-       u32 ret_flags = 0;
-
-       ret_flags |= adapter->flags & I40EVF_FLAG_RX_PS_ENABLED ?
-               I40EVF_PRIV_FLAGS_PS : 0;
-
-       return ret_flags;
-}
+       for (i = 0; i < adapter->rss_lut_size; i++)
+               adapter->rss_lut[i] = (u8)(indir[i]);
 
-/**
- * i40evf_set_priv_flags - set private flags
- * @dev: network interface device structure
- * @flags: bit flags to be set
- **/
-static int i40evf_set_priv_flags(struct net_device *dev, u32 flags)
-{
-       struct i40evf_adapter *adapter = netdev_priv(dev);
-       bool reset_required = false;
-
-       if ((flags & I40EVF_PRIV_FLAGS_PS) &&
-           !(adapter->flags & I40EVF_FLAG_RX_PS_ENABLED)) {
-               adapter->flags |= I40EVF_FLAG_RX_PS_ENABLED;
-               reset_required = true;
-       } else if (!(flags & I40EVF_PRIV_FLAGS_PS) &&
-                  (adapter->flags & I40EVF_FLAG_RX_PS_ENABLED)) {
-               adapter->flags &= ~I40EVF_FLAG_RX_PS_ENABLED;
-               reset_required = true;
-       }
-
-       /* if needed, issue reset to cause things to take effect */
-       if (reset_required)
-               i40evf_schedule_reset(adapter);
-
-       return 0;
+       return i40evf_config_rss(adapter);
 }
 
 static const struct ethtool_ops i40evf_ethtool_ops = {
@@ -782,18 +509,16 @@ static const struct ethtool_ops i40evf_ethtool_ops = {
        .get_strings            = i40evf_get_strings,
        .get_ethtool_stats      = i40evf_get_ethtool_stats,
        .get_sset_count         = i40evf_get_sset_count,
-       .get_priv_flags         = i40evf_get_priv_flags,
-       .set_priv_flags         = i40evf_set_priv_flags,
        .get_msglevel           = i40evf_get_msglevel,
        .set_msglevel           = i40evf_set_msglevel,
        .get_coalesce           = i40evf_get_coalesce,
        .set_coalesce           = i40evf_set_coalesce,
        .get_rxnfc              = i40evf_get_rxnfc,
-       .set_rxnfc              = i40evf_set_rxnfc,
        .get_rxfh_indir_size    = i40evf_get_rxfh_indir_size,
        .get_rxfh               = i40evf_get_rxfh,
        .set_rxfh               = i40evf_set_rxfh,
        .get_channels           = i40evf_get_channels,
+       .get_rxfh_key_size      = i40evf_get_rxfh_key_size,
 };
 
 /**
index 9110319..b548dbe 100644
@@ -38,7 +38,7 @@ static const char i40evf_driver_string[] =
 
 #define DRV_VERSION_MAJOR 1
 #define DRV_VERSION_MINOR 5
-#define DRV_VERSION_BUILD 5
+#define DRV_VERSION_BUILD 10
 #define DRV_VERSION __stringify(DRV_VERSION_MAJOR) "." \
             __stringify(DRV_VERSION_MINOR) "." \
             __stringify(DRV_VERSION_BUILD) \
@@ -641,28 +641,11 @@ static void i40evf_configure_tx(struct i40evf_adapter *adapter)
 static void i40evf_configure_rx(struct i40evf_adapter *adapter)
 {
        struct i40e_hw *hw = &adapter->hw;
-       struct net_device *netdev = adapter->netdev;
-       int max_frame = netdev->mtu + ETH_HLEN + ETH_FCS_LEN;
        int i;
-       int rx_buf_len;
-
-
-       /* Set the RX buffer length according to the mode */
-       if (adapter->flags & I40EVF_FLAG_RX_PS_ENABLED ||
-           netdev->mtu <= ETH_DATA_LEN)
-               rx_buf_len = I40EVF_RXBUFFER_2048;
-       else
-               rx_buf_len = ALIGN(max_frame, 1024);
 
        for (i = 0; i < adapter->num_active_queues; i++) {
                adapter->rx_rings[i].tail = hw->hw_addr + I40E_QRX_TAIL1(i);
-               adapter->rx_rings[i].rx_buf_len = rx_buf_len;
-               if (adapter->flags & I40EVF_FLAG_RX_PS_ENABLED) {
-                       set_ring_ps_enabled(&adapter->rx_rings[i]);
-                       adapter->rx_rings[i].rx_hdr_len = I40E_RX_HDR_SIZE;
-               } else {
-                       clear_ring_ps_enabled(&adapter->rx_rings[i]);
-               }
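+               /* with packet split gone, every ring uses fixed 2K buffers */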
+               adapter->rx_rings[i].rx_buf_len = I40EVF_RXBUFFER_2048;
        }
 }
 
@@ -943,6 +926,14 @@ static void i40evf_set_rx_mode(struct net_device *netdev)
 bottom_of_search_loop:
                continue;
        }
+
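+       /* ask the PF to enter or leave promiscuous mode whenever the
+        * netdev flag and our current state disagree
+        */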
+       if (netdev->flags & IFF_PROMISC &&
+           !(adapter->flags & I40EVF_FLAG_PROMISC_ON))
+               adapter->aq_required |= I40EVF_FLAG_AQ_REQUEST_PROMISC;
+       else if (!(netdev->flags & IFF_PROMISC) &&
+                adapter->flags & I40EVF_FLAG_PROMISC_ON)
+               adapter->aq_required |= I40EVF_FLAG_AQ_RELEASE_PROMISC;
+
        clear_bit(__I40EVF_IN_CRITICAL_TASK, &adapter->crit_section);
 }
 
@@ -999,14 +990,7 @@ static void i40evf_configure(struct i40evf_adapter *adapter)
        for (i = 0; i < adapter->num_active_queues; i++) {
                struct i40e_ring *ring = &adapter->rx_rings[i];
 
-       if (adapter->flags & I40EVF_FLAG_RX_PS_ENABLED) {
-               i40evf_alloc_rx_headers(ring);
-               i40evf_alloc_rx_buffers_ps(ring, ring->count);
-       } else {
-               i40evf_alloc_rx_buffers_1buf(ring, ring->count);
-       }
-               ring->next_to_use = ring->count - 1;
-               writel(ring->next_to_use, ring->tail);
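+               /* the allocator advances next_to_use and writes the tail
+                * register itself, so neither is touched here anymore
+                */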
+               i40evf_alloc_rx_buffers(ring, I40E_DESC_UNUSED(ring));
        }
 }
 
@@ -1224,24 +1208,18 @@ out:
 }
 
 /**
- * i40e_config_rss_aq - Prepare for RSS using AQ commands
- * @vsi: vsi structure
- * @seed: RSS hash seed
- * @lut: Lookup table
- * @lut_size: Lookup table size
+ * i40evf_config_rss_aq - Configure RSS keys and lut by using AQ commands
+ * @adapter: board private structure
  *
  * Return 0 on success, negative on failure
  **/
-static int i40evf_config_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
-                               u8 *lut, u16 lut_size)
+static int i40evf_config_rss_aq(struct i40evf_adapter *adapter)
 {
-       struct i40evf_adapter *adapter = vsi->back;
+       struct i40e_aqc_get_set_rss_key_data *rss_key =
+               (struct i40e_aqc_get_set_rss_key_data *)adapter->rss_key;
        struct i40e_hw *hw = &adapter->hw;
        int ret = 0;
 
-       if (!vsi->id)
-               return -EINVAL;
-
        if (adapter->current_op != I40E_VIRTCHNL_OP_UNKNOWN) {
                /* bail because we already have a command pending */
                dev_err(&adapter->pdev->dev, "Cannot configure RSS, command %d pending\n",
@@ -1249,198 +1227,82 @@ static int i40evf_config_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
                return -EBUSY;
        }
 
-       if (seed) {
-               struct i40e_aqc_get_set_rss_key_data *rss_key =
-                       (struct i40e_aqc_get_set_rss_key_data *)seed;
-               ret = i40evf_aq_set_rss_key(hw, vsi->id, rss_key);
-               if (ret) {
-                       dev_err(&adapter->pdev->dev, "Cannot set RSS key, err %s aq_err %s\n",
-                               i40evf_stat_str(hw, ret),
-                               i40evf_aq_str(hw, hw->aq.asq_last_status));
-                       return ret;
-               }
+       ret = i40evf_aq_set_rss_key(hw, adapter->vsi.id, rss_key);
+       if (ret) {
+               dev_err(&adapter->pdev->dev, "Cannot set RSS key, err %s aq_err %s\n",
+                       i40evf_stat_str(hw, ret),
+                       i40evf_aq_str(hw, hw->aq.asq_last_status));
+               return ret;
        }
 
-       if (lut) {
-               ret = i40evf_aq_set_rss_lut(hw, vsi->id, false, lut, lut_size);
-               if (ret) {
-                       dev_err(&adapter->pdev->dev,
-                               "Cannot set RSS lut, err %s aq_err %s\n",
-                               i40evf_stat_str(hw, ret),
-                               i40evf_aq_str(hw, hw->aq.asq_last_status));
-                       return ret;
-               }
+       ret = i40evf_aq_set_rss_lut(hw, adapter->vsi.id, false,
+                                   adapter->rss_lut, adapter->rss_lut_size);
+       if (ret) {
+               dev_err(&adapter->pdev->dev, "Cannot set RSS lut, err %s aq_err %s\n",
+                       i40evf_stat_str(hw, ret),
+                       i40evf_aq_str(hw, hw->aq.asq_last_status));
        }
 
        return ret;
 }
 
 /**
  * i40evf_config_rss_reg - Configure RSS keys and lut by writing registers
- * @vsi: Pointer to vsi structure
- * @seed: RSS hash seed
- * @lut: Lookup table
- * @lut_size: Lookup table size
+ * @adapter: board private structure
  *
  * Returns 0 on success, negative on failure
  **/
-static int i40evf_config_rss_reg(struct i40e_vsi *vsi, const u8 *seed,
-                                const u8 *lut, u16 lut_size)
+static int i40evf_config_rss_reg(struct i40evf_adapter *adapter)
 {
-       struct i40evf_adapter *adapter = vsi->back;
        struct i40e_hw *hw = &adapter->hw;
+       u32 *dw;
        u16 i;
 
-       if (seed) {
-               u32 *seed_dw = (u32 *)seed;
-
-               for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++)
-                       wr32(hw, I40E_VFQF_HKEY(i), seed_dw[i]);
-       }
-
-       if (lut) {
-               u32 *lut_dw = (u32 *)lut;
+       dw = (u32 *)adapter->rss_key;
+       for (i = 0; i < adapter->rss_key_size / 4; i++)
+               wr32(hw, I40E_VFQF_HKEY(i), dw[i]);
 
-               if (lut_size != I40EVF_HLUT_ARRAY_SIZE)
-                       return -EINVAL;
+       dw = (u32 *)adapter->rss_lut;
+       for (i = 0; i < adapter->rss_lut_size / 4; i++)
+               wr32(hw, I40E_VFQF_HLUT(i), dw[i]);
 
-               for (i = 0; i <= I40E_VFQF_HLUT_MAX_INDEX; i++)
-                       wr32(hw, I40E_VFQF_HLUT(i), lut_dw[i]);
-       }
        i40e_flush(hw);
 
        return 0;
 }
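
The register path packs the byte-array key and LUT four bytes per 32-bit register, so an N-byte buffer fills exactly N/4 registers and the loop bound must be strictly less than N/4; an inclusive bound would write one register past the valid range and read one word past the buffer. A standalone sketch of the same walk, where wr32() is a stand-in for the MMIO write:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the MMIO register write. */
static void wr32(unsigned int idx, uint32_t val)
{
        printf("HKEY[%u] = 0x%08x\n", idx, val);
}

int main(void)
{
        uint8_t key[8] = { 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88 };
        uint32_t w;
        unsigned int i;

        for (i = 0; i < sizeof(key) / 4; i++) { /* '<', never '<=' */
                memcpy(&w, key + 4 * i, sizeof(w));
                wr32(i, w);
        }
        return 0;
}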
 
-/**
- *  * i40evf_get_rss_aq - Get RSS keys and lut by using AQ commands
- *  @vsi: Pointer to vsi structure
- *  @seed: RSS hash seed
- *  @lut: Lookup table
- *  @lut_size: Lookup table size
- *
- *  Return 0 on success, negative on failure
- **/
-static int i40evf_get_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
-                            u8 *lut, u16 lut_size)
-{
-       struct i40evf_adapter *adapter = vsi->back;
-       struct i40e_hw *hw = &adapter->hw;
-       int ret = 0;
-
-       if (seed) {
-               ret = i40evf_aq_get_rss_key(hw, vsi->id,
-                       (struct i40e_aqc_get_set_rss_key_data *)seed);
-               if (ret) {
-                       dev_err(&adapter->pdev->dev,
-                               "Cannot get RSS key, err %s aq_err %s\n",
-                               i40evf_stat_str(hw, ret),
-                               i40evf_aq_str(hw, hw->aq.asq_last_status));
-                       return ret;
-               }
-       }
-
-       if (lut) {
-               ret = i40evf_aq_get_rss_lut(hw, vsi->id, false, lut, lut_size);
-               if (ret) {
-                       dev_err(&adapter->pdev->dev,
-                               "Cannot get RSS lut, err %s aq_err %s\n",
-                               i40evf_stat_str(hw, ret),
-                               i40evf_aq_str(hw, hw->aq.asq_last_status));
-                       return ret;
-               }
-       }
-
-       return ret;
-}
-
-/**
- *  * i40evf_get_rss_reg - Get RSS keys and lut by reading registers
- *  @vsi: Pointer to vsi structure
- *  @seed: RSS hash seed
- *  @lut: Lookup table
- *  @lut_size: Lookup table size
- *
- *  Returns 0 on success, negative on failure
- **/
-static int i40evf_get_rss_reg(struct i40e_vsi *vsi, const u8 *seed,
-                             const u8 *lut, u16 lut_size)
-{
-       struct i40evf_adapter *adapter = vsi->back;
-       struct i40e_hw *hw = &adapter->hw;
-       u16 i;
-
-       if (seed) {
-               u32 *seed_dw = (u32 *)seed;
-
-               for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++)
-                       seed_dw[i] = rd32(hw, I40E_VFQF_HKEY(i));
-       }
-
-       if (lut) {
-               u32 *lut_dw = (u32 *)lut;
-
-               if (lut_size != I40EVF_HLUT_ARRAY_SIZE)
-                       return -EINVAL;
-
-               for (i = 0; i <= I40E_VFQF_HLUT_MAX_INDEX; i++)
-                       lut_dw[i] = rd32(hw, I40E_VFQF_HLUT(i));
-       }
-
-       return 0;
-}
-
 /**
  * i40evf_config_rss - Configure RSS keys and lut
- * @vsi: Pointer to vsi structure
- * @seed: RSS hash seed
- * @lut: Lookup table
- * @lut_size: Lookup table size
- *
- * Returns 0 on success, negative on failure
- **/
-int i40evf_config_rss(struct i40e_vsi *vsi, const u8 *seed,
-                     u8 *lut, u16 lut_size)
-{
-       struct i40evf_adapter *adapter = vsi->back;
-
-       if (RSS_AQ(adapter))
-               return i40evf_config_rss_aq(vsi, seed, lut, lut_size);
-       else
-               return i40evf_config_rss_reg(vsi, seed, lut, lut_size);
-}
-
-/**
- * i40evf_get_rss - Get RSS keys and lut
- * @vsi: Pointer to vsi structure
- * @seed: RSS hash seed
- * @lut: Lookup table
- * @lut_size: Lookup table size
+ * @adapter: board private structure
  *
  * Returns 0 on success, negative on failure
  **/
-int i40evf_get_rss(struct i40e_vsi *vsi, const u8 *seed, u8 *lut, u16 lut_size)
+int i40evf_config_rss(struct i40evf_adapter *adapter)
 {
-       struct i40evf_adapter *adapter = vsi->back;
 
-       if (RSS_AQ(adapter))
-               return i40evf_get_rss_aq(vsi, seed, lut, lut_size);
-       else
-               return i40evf_get_rss_reg(vsi, seed, lut, lut_size);
+       if (RSS_PF(adapter)) {
+               adapter->aq_required |= I40EVF_FLAG_AQ_SET_RSS_LUT |
+                                       I40EVF_FLAG_AQ_SET_RSS_KEY;
+               return 0;
+       } else if (RSS_AQ(adapter)) {
+               return i40evf_config_rss_aq(adapter);
+       } else {
+               return i40evf_config_rss_reg(adapter);
+       }
 }
 
 /**
  * i40evf_fill_rss_lut - Fill the lut with default values
- * @lut: Lookup table to be filled with
- * @rss_table_size: Lookup table size
- * @rss_size: Range of queue number for hashing
+ * @adapter: board private structure
  **/
-static void i40evf_fill_rss_lut(u8 *lut, u16 rss_table_size, u16 rss_size)
+static void i40evf_fill_rss_lut(struct i40evf_adapter *adapter)
 {
        u16 i;
 
-       for (i = 0; i < rss_table_size; i++)
-               lut[i] = i % rss_size;
+       for (i = 0; i < adapter->rss_lut_size; i++)
+               adapter->rss_lut[i] = i % adapter->num_active_queues;
 }
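
The default LUT simply stripes the active receive queues round-robin across the table: with a 16-entry LUT and 4 queues the entries come out 0, 1, 2, 3, 0, 1, 2, 3, and so on. A tiny illustration:

#include <stdio.h>

int main(void)
{
        unsigned int lut_size = 16, num_queues = 4, i;

        /* Stripe queue numbers round-robin across the table. */
        for (i = 0; i < lut_size; i++)
                printf("lut[%2u] = %u\n", i, i % num_queues);
        return 0;
}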
 
 /**
@@ -1451,42 +1313,25 @@ static void i40evf_fill_rss_lut(u8 *lut, u16 rss_table_size, u16 rss_size)
  **/
 static int i40evf_init_rss(struct i40evf_adapter *adapter)
 {
-       struct i40e_vsi *vsi = &adapter->vsi;
        struct i40e_hw *hw = &adapter->hw;
-       u8 seed[I40EVF_HKEY_ARRAY_SIZE];
-       u64 hena;
-       u8 *lut;
        int ret;
 
-       /* Enable PCTYPES for RSS, TCP/UDP with IPv4/IPv6 */
-       if (adapter->vf_res->vf_offload_flags &
-                                       I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2)
-               hena = I40E_DEFAULT_RSS_HENA_EXPANDED;
-       else
-               hena = I40E_DEFAULT_RSS_HENA;
-       wr32(hw, I40E_VFQF_HENA(0), (u32)hena);
-       wr32(hw, I40E_VFQF_HENA(1), (u32)(hena >> 32));
+       if (!RSS_PF(adapter)) {
+               /* Enable PCTYPES for RSS, TCP/UDP with IPv4/IPv6 */
+               if (adapter->vf_res->vf_offload_flags &
+                   I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2)
+                       adapter->hena = I40E_DEFAULT_RSS_HENA_EXPANDED;
+               else
+                       adapter->hena = I40E_DEFAULT_RSS_HENA;
 
-       lut = kzalloc(I40EVF_HLUT_ARRAY_SIZE, GFP_KERNEL);
-       if (!lut)
-               return -ENOMEM;
+               wr32(hw, I40E_VFQF_HENA(0), (u32)adapter->hena);
+               wr32(hw, I40E_VFQF_HENA(1), (u32)(adapter->hena >> 32));
+       }
 
-       /* Use user configured lut if there is one, otherwise use default */
-       if (vsi->rss_lut_user)
-               memcpy(lut, vsi->rss_lut_user, I40EVF_HLUT_ARRAY_SIZE);
-       else
-               i40evf_fill_rss_lut(lut, I40EVF_HLUT_ARRAY_SIZE,
-                                   adapter->num_active_queues);
+       i40evf_fill_rss_lut(adapter);
 
-       /* Use user configured hash key if there is one, otherwise
-        * user default.
-        */
-       if (vsi->rss_hkey_user)
-               memcpy(seed, vsi->rss_hkey_user, I40EVF_HKEY_ARRAY_SIZE);
-       else
-               netdev_rss_key_fill((void *)seed, I40EVF_HKEY_ARRAY_SIZE);
-       ret = i40evf_config_rss(vsi, seed, lut, I40EVF_HLUT_ARRAY_SIZE);
-       kfree(lut);
+       netdev_rss_key_fill((void *)adapter->rss_key, adapter->rss_key_size);
+       ret = i40evf_config_rss(adapter);
 
        return ret;
 }
@@ -1601,19 +1446,16 @@ err_set_interrupt:
 }
 
 /**
- * i40evf_clear_rss_config_user - Clear user configurations of RSS
- * @vsi: Pointer to VSI structure
+ * i40evf_free_rss - Free memory used by RSS structs
+ * @adapter: board private structure
  **/
-static void i40evf_clear_rss_config_user(struct i40e_vsi *vsi)
+static void i40evf_free_rss(struct i40evf_adapter *adapter)
 {
-       if (!vsi)
-               return;
+       kfree(adapter->rss_key);
+       adapter->rss_key = NULL;
 
-       kfree(vsi->rss_hkey_user);
-       vsi->rss_hkey_user = NULL;
-
-       kfree(vsi->rss_lut_user);
-       vsi->rss_lut_user = NULL;
+       kfree(adapter->rss_lut);
+       adapter->rss_lut = NULL;
 }
 
 /**
@@ -1747,6 +1589,33 @@ static void i40evf_watchdog_task(struct work_struct *work)
                adapter->aq_required &= ~I40EVF_FLAG_AQ_CONFIGURE_RSS;
                goto watchdog_done;
        }
+       if (adapter->aq_required & I40EVF_FLAG_AQ_GET_HENA) {
+               i40evf_get_hena(adapter);
+               goto watchdog_done;
+       }
+       if (adapter->aq_required & I40EVF_FLAG_AQ_SET_HENA) {
+               i40evf_set_hena(adapter);
+               goto watchdog_done;
+       }
+       if (adapter->aq_required & I40EVF_FLAG_AQ_SET_RSS_KEY) {
+               i40evf_set_rss_key(adapter);
+               goto watchdog_done;
+       }
+       if (adapter->aq_required & I40EVF_FLAG_AQ_SET_RSS_LUT) {
+               i40evf_set_rss_lut(adapter);
+               goto watchdog_done;
+       }
+
+       if (adapter->aq_required & I40EVF_FLAG_AQ_REQUEST_PROMISC) {
+               i40evf_set_promiscuous(adapter, I40E_FLAG_VF_UNICAST_PROMISC |
+                                      I40E_FLAG_VF_MULTICAST_PROMISC);
+               goto watchdog_done;
+       }
+
+       if (adapter->aq_required & I40EVF_FLAG_AQ_RELEASE_PROMISC) {
+               i40evf_set_promiscuous(adapter, 0);
+               goto watchdog_done;
+       }
 
        if (adapter->state == __I40EVF_RUNNING)
                i40evf_request_stats(adapter);
@@ -2325,6 +2194,7 @@ int i40evf_process_config(struct i40evf_adapter *adapter)
 {
        struct i40e_virtchnl_vf_resource *vfres = adapter->vf_res;
        struct net_device *netdev = adapter->netdev;
+       struct i40e_vsi *vsi = &adapter->vsi;
        int i;
 
        /* got VF config message back from PF, now we can parse it */
@@ -2337,40 +2207,46 @@ int i40evf_process_config(struct i40evf_adapter *adapter)
                return -ENODEV;
        }
 
-       netdev->features |= NETIF_F_HIGHDMA |
-                           NETIF_F_SG |
-                           NETIF_F_IP_CSUM |
-                           NETIF_F_SCTP_CRC |
-                           NETIF_F_IPV6_CSUM |
-                           NETIF_F_TSO |
-                           NETIF_F_TSO6 |
-                           NETIF_F_TSO_ECN |
-                           NETIF_F_GSO_GRE |
-                           NETIF_F_GSO_UDP_TUNNEL |
-                           NETIF_F_RXCSUM |
-                           NETIF_F_GRO;
-
-       netdev->hw_enc_features |= NETIF_F_IP_CSUM             |
-                                  NETIF_F_IPV6_CSUM           |
-                                  NETIF_F_TSO                 |
-                                  NETIF_F_TSO6                |
-                                  NETIF_F_TSO_ECN             |
-                                  NETIF_F_GSO_GRE             |
-                                  NETIF_F_GSO_UDP_TUNNEL      |
-                                  NETIF_F_GSO_UDP_TUNNEL_CSUM;
-
-       if (adapter->flags & I40EVF_FLAG_OUTER_UDP_CSUM_CAPABLE)
-               netdev->features |= NETIF_F_GSO_UDP_TUNNEL_CSUM;
-
-       /* always clear VLAN features because they can change at every reset */
-       netdev->features &= ~(I40EVF_VLAN_FEATURES);
-       /* copy netdev features into list of user selectable features */
-       netdev->hw_features |= netdev->features;
-
-       if (vfres->vf_offload_flags & I40E_VIRTCHNL_VF_OFFLOAD_VLAN) {
-               netdev->vlan_features = netdev->features;
-               netdev->features |= I40EVF_VLAN_FEATURES;
-       }
+       netdev->hw_enc_features |= NETIF_F_SG                   |
+                                  NETIF_F_IP_CSUM              |
+                                  NETIF_F_IPV6_CSUM            |
+                                  NETIF_F_HIGHDMA              |
+                                  NETIF_F_SOFT_FEATURES        |
+                                  NETIF_F_TSO                  |
+                                  NETIF_F_TSO_ECN              |
+                                  NETIF_F_TSO6                 |
+                                  NETIF_F_GSO_GRE              |
+                                  NETIF_F_GSO_GRE_CSUM         |
+                                  NETIF_F_GSO_IPIP             |
+                                  NETIF_F_GSO_SIT              |
+                                  NETIF_F_GSO_UDP_TUNNEL       |
+                                  NETIF_F_GSO_UDP_TUNNEL_CSUM  |
+                                  NETIF_F_GSO_PARTIAL          |
+                                  NETIF_F_SCTP_CRC             |
+                                  NETIF_F_RXHASH               |
+                                  NETIF_F_RXCSUM               |
+                                  0;
+
+       if (!(adapter->flags & I40EVF_FLAG_OUTER_UDP_CSUM_CAPABLE))
+               netdev->gso_partial_features |= NETIF_F_GSO_UDP_TUNNEL_CSUM;
+
+       netdev->gso_partial_features |= NETIF_F_GSO_GRE_CSUM;
+
+       /* record features VLANs can make use of */
+       netdev->vlan_features |= netdev->hw_enc_features |
+                                NETIF_F_TSO_MANGLEID;
+
+       /* Write features and hw_features separately to avoid polluting
+        * with, or dropping, features that are set when we registered.
+        */
+       netdev->hw_features |= netdev->hw_enc_features;
+
+       netdev->features |= netdev->hw_enc_features | I40EVF_VLAN_FEATURES;
+       netdev->hw_enc_features |= NETIF_F_TSO_MANGLEID;
+
+       /* disable VLAN features if not supported */
+       if (!(vfres->vf_offload_flags & I40E_VIRTCHNL_VF_OFFLOAD_VLAN))
+               netdev->features ^= I40EVF_VLAN_FEATURES;
 
        adapter->vsi.id = adapter->vsi_res->vsi_id;
 
@@ -2381,8 +2257,16 @@ int i40evf_process_config(struct i40evf_adapter *adapter)
                                       ITR_REG_TO_USEC(I40E_ITR_RX_DEF));
        adapter->vsi.tx_itr_setting = (I40E_ITR_DYNAMIC |
                                       ITR_REG_TO_USEC(I40E_ITR_TX_DEF));
-       adapter->vsi.netdev = adapter->netdev;
-       adapter->vsi.qs_handle = adapter->vsi_res->qset_handle;
+       vsi->netdev = adapter->netdev;
+       vsi->qs_handle = adapter->vsi_res->qset_handle;
+       if (vfres->vf_offload_flags & I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+               adapter->rss_key_size = vfres->rss_key_size;
+               adapter->rss_lut_size = vfres->rss_lut_size;
+       } else {
+               adapter->rss_key_size = I40EVF_HKEY_ARRAY_SIZE;
+               adapter->rss_lut_size = I40EVF_HLUT_ARRAY_SIZE;
+       }
+
        return 0;
 }
 
@@ -2515,11 +2399,6 @@ static void i40evf_init_task(struct work_struct *work)
        adapter->current_op = I40E_VIRTCHNL_OP_UNKNOWN;
 
        adapter->flags |= I40EVF_FLAG_RX_CSUM_ENABLED;
-       adapter->flags |= I40EVF_FLAG_RX_1BUF_CAPABLE;
-       adapter->flags |= I40EVF_FLAG_RX_PS_CAPABLE;
-
-       /* Default to single buffer rx, can be changed through ethtool. */
-       adapter->flags &= ~I40EVF_FLAG_RX_PS_ENABLED;
 
        netdev->netdev_ops = &i40evf_netdev_ops;
        i40evf_set_ethtool_ops(netdev);
@@ -2578,6 +2457,11 @@ static void i40evf_init_task(struct work_struct *work)
        set_bit(__I40E_DOWN, &adapter->vsi.state);
        i40evf_misc_irq_enable(adapter);
 
+       adapter->rss_key = kzalloc(adapter->rss_key_size, GFP_KERNEL);
+       adapter->rss_lut = kzalloc(adapter->rss_lut_size, GFP_KERNEL);
+       if (!adapter->rss_key || !adapter->rss_lut)
+               goto err_mem;
+
        if (RSS_AQ(adapter)) {
                adapter->aq_required |= I40EVF_FLAG_AQ_CONFIGURE_RSS;
                mod_timer_pending(&adapter->watchdog_timer, jiffies + 1);
@@ -2588,7 +2472,8 @@ static void i40evf_init_task(struct work_struct *work)
 restart:
        schedule_delayed_work(&adapter->init_task, msecs_to_jiffies(30));
        return;
-
+err_mem:
+       i40evf_free_rss(adapter);
 err_register:
        i40evf_free_misc_irq(adapter);
 err_sw_init:
@@ -2870,8 +2755,7 @@ static void i40evf_remove(struct pci_dev *pdev)
 
        flush_scheduled_work();
 
-       /* Clear user configurations for RSS */
-       i40evf_clear_rss_config_user(&adapter->vsi);
+       i40evf_free_rss(adapter);
 
        if (hw->aq.asq.count)
                i40evf_shutdown_adminq(hw);
@@ -2882,7 +2766,6 @@ static void i40evf_remove(struct pci_dev *pdev)
 
        iounmap(hw->hw_addr);
        pci_release_regions(pdev);
-
        i40evf_free_all_tx_resources(adapter);
        i40evf_free_all_rx_resources(adapter);
        i40evf_free_queues(adapter);
index 488e738..c5d33a2 100644
@@ -270,10 +270,6 @@ void i40evf_configure_queues(struct i40evf_adapter *adapter)
                vqpi->rxq.max_pkt_size = adapter->netdev->mtu
                                        + ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN;
                vqpi->rxq.databuffer_size = adapter->rx_rings[i].rx_buf_len;
-               if (adapter->flags & I40EVF_FLAG_RX_PS_ENABLED) {
-                       vqpi->rxq.splithdr_enabled = true;
-                       vqpi->rxq.hdr_size = I40E_RX_HDR_SIZE;
-               }
                vqpi++;
        }
 
@@ -652,6 +648,17 @@ void i40evf_set_promiscuous(struct i40evf_adapter *adapter, int flags)
                        adapter->current_op);
                return;
        }
+
+       if (flags) {
+               adapter->flags |= I40EVF_FLAG_PROMISC_ON;
+               adapter->aq_required &= ~I40EVF_FLAG_AQ_REQUEST_PROMISC;
+               dev_info(&adapter->pdev->dev, "Entering promiscuous mode\n");
+       } else {
+               adapter->flags &= ~I40EVF_FLAG_PROMISC_ON;
+               adapter->aq_required &= ~I40EVF_FLAG_AQ_RELEASE_PROMISC;
+               dev_info(&adapter->pdev->dev, "Leaving promiscuous mode\n");
+       }
+
        adapter->current_op = I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
        vpi.vsi_id = adapter->vsi_res->vsi_id;
        vpi.flags = flags;
@@ -681,6 +688,115 @@ void i40evf_request_stats(struct i40evf_adapter *adapter)
                /* if the request failed, don't lock out others */
                adapter->current_op = I40E_VIRTCHNL_OP_UNKNOWN;
 }
+
+/**
+ * i40evf_get_hena
+ * @adapter: adapter structure
+ *
+ * Request hash enable capabilities from PF
+ **/
+void i40evf_get_hena(struct i40evf_adapter *adapter)
+{
+       if (adapter->current_op != I40E_VIRTCHNL_OP_UNKNOWN) {
+               /* bail because we already have a command pending */
+               dev_err(&adapter->pdev->dev, "Cannot get RSS hash capabilities, command %d pending\n",
+                       adapter->current_op);
+               return;
+       }
+       adapter->current_op = I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS;
+       adapter->aq_required &= ~I40EVF_FLAG_AQ_GET_HENA;
+       i40evf_send_pf_msg(adapter, I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS,
+                          NULL, 0);
+}
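
All of the virtchnl senders in this file follow the same single-outstanding-command convention: current_op acts as a busy latch that is set when a message goes out and cleared by the completion handler, and a caller that finds it non-idle bails and retries on a later watchdog pass. A compact userspace sketch of the latch, illustrative names only:

#include <stdio.h>

enum op { OP_UNKNOWN, OP_GET_HENA, OP_SET_HENA };

static enum op current_op = OP_UNKNOWN;

/* Refuse to send while another command is outstanding. */
static int send_cmd(enum op op)
{
        if (current_op != OP_UNKNOWN) {
                fprintf(stderr, "cannot send %d, command %d pending\n",
                        op, current_op);
                return -1;      /* caller retries on a later pass */
        }
        current_op = op;        /* cleared when the reply arrives */
        printf("sent %d\n", op);
        return 0;
}

int main(void)
{
        send_cmd(OP_GET_HENA);          /* succeeds */
        send_cmd(OP_SET_HENA);          /* rejected: OP_GET_HENA pending */
        current_op = OP_UNKNOWN;        /* completion handler */
        send_cmd(OP_SET_HENA);          /* now succeeds */
        return 0;
}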
+
+/**
+ * i40evf_set_hena
+ * @adapter: adapter structure
+ *
+ * Request the PF to set our RSS hash capabilities
+ **/
+void i40evf_set_hena(struct i40evf_adapter *adapter)
+{
+       struct i40e_virtchnl_rss_hena vrh;
+
+       if (adapter->current_op != I40E_VIRTCHNL_OP_UNKNOWN) {
+               /* bail because we already have a command pending */
+               dev_err(&adapter->pdev->dev, "Cannot set RSS hash enable, command %d pending\n",
+                       adapter->current_op);
+               return;
+       }
+       vrh.hena = adapter->hena;
+       adapter->current_op = I40E_VIRTCHNL_OP_SET_RSS_HENA;
+       adapter->aq_required &= ~I40EVF_FLAG_AQ_SET_HENA;
+       i40evf_send_pf_msg(adapter, I40E_VIRTCHNL_OP_SET_RSS_HENA,
+                          (u8 *)&vrh, sizeof(vrh));
+}
+
+/**
+ * i40evf_set_rss_key
+ * @adapter: adapter structure
+ *
+ * Request the PF to set our RSS hash key
+ **/
+void i40evf_set_rss_key(struct i40evf_adapter *adapter)
+{
+       struct i40e_virtchnl_rss_key *vrk;
+       int len;
+
+       if (adapter->current_op != I40E_VIRTCHNL_OP_UNKNOWN) {
+               /* bail because we already have a command pending */
+               dev_err(&adapter->pdev->dev, "Cannot set RSS key, command %d pending\n",
+                       adapter->current_op);
+               return;
+       }
+       len = sizeof(struct i40e_virtchnl_rss_key) +
+             (adapter->rss_key_size * sizeof(u8)) - 1;
+       vrk = kzalloc(len, GFP_KERNEL);
+       if (!vrk)
+               return;
+       vrk->vsi_id = adapter->vsi.id;
+       vrk->key_len = adapter->rss_key_size;
+       memcpy(vrk->key, adapter->rss_key, adapter->rss_key_size);
+
+       adapter->current_op = I40E_VIRTCHNL_OP_CONFIG_RSS_KEY;
+       adapter->aq_required &= ~I40EVF_FLAG_AQ_SET_RSS_KEY;
+       i40evf_send_pf_msg(adapter, I40E_VIRTCHNL_OP_CONFIG_RSS_KEY,
+                          (u8 *)vrk, len);
+       kfree(vrk);
+}
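
The length arithmetic above only makes sense if the virtchnl message ends in a one-byte placeholder array (the old-style flexible-array idiom): the -1 compensates for the element already counted inside the struct. A standalone illustration with a stand-in message layout, not the real i40e_virtchnl_rss_key definition:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in message: header plus a one-element placeholder array. */
struct rss_key_msg {
        uint16_t vsi_id;
        uint16_t key_len;
        uint8_t key[1];
};

int main(void)
{
        const uint8_t key[52] = { 0 };  /* example key size */
        size_t len = sizeof(struct rss_key_msg) + sizeof(key) - 1;
        struct rss_key_msg *msg = calloc(1, len);

        if (!msg)
                return 1;
        msg->key_len = sizeof(key);
        memcpy(msg->key, key, sizeof(key));
        printf("allocated %zu bytes for a %u-byte key\n",
               len, (unsigned int)msg->key_len);
        free(msg);
        return 0;
}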
+
+/**
+ * i40evf_set_rss_lut
+ * @adapter: adapter structure
+ *
+ * Request the PF to set our RSS lookup table
+ **/
+void i40evf_set_rss_lut(struct i40evf_adapter *adapter)
+{
+       struct i40e_virtchnl_rss_lut *vrl;
+       int len;
+
+       if (adapter->current_op != I40E_VIRTCHNL_OP_UNKNOWN) {
+               /* bail because we already have a command pending */
+               dev_err(&adapter->pdev->dev, "Cannot set RSS LUT, command %d pending\n",
+                       adapter->current_op);
+               return;
+       }
+       len = sizeof(struct i40e_virtchnl_rss_lut) +
+             (adapter->rss_lut_size * sizeof(u8)) - 1;
+       vrl = kzalloc(len, GFP_KERNEL);
+       if (!vrl)
+               return;
+       vrl->vsi_id = adapter->vsi.id;
+       vrl->lut_entries = adapter->rss_lut_size;
+       memcpy(vrl->lut, adapter->rss_lut, adapter->rss_lut_size);
+       adapter->current_op = I40E_VIRTCHNL_OP_CONFIG_RSS_LUT;
+       adapter->aq_required &= ~I40EVF_FLAG_AQ_SET_RSS_LUT;
+       i40evf_send_pf_msg(adapter, I40E_VIRTCHNL_OP_CONFIG_RSS_LUT,
+                          (u8 *)vrl, len);
+       kfree(vrl);
+}
+
 /**
  * i40evf_request_reset
  * @adapter: adapter structure
@@ -820,6 +936,16 @@ void i40evf_virtchnl_completion(struct i40evf_adapter *adapter,
                if (v_opcode != adapter->current_op)
                        return;
                break;
+       case I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS: {
+               struct i40e_virtchnl_rss_hena *vrh =
+                       (struct i40e_virtchnl_rss_hena *)msg;
+               if (msglen == sizeof(*vrh))
+                       adapter->hena = vrh->hena;
+               else
+                       dev_warn(&adapter->pdev->dev,
+                                "Invalid message %d from PF\n", v_opcode);
+               }
+               break;
        default:
                if (v_opcode != adapter->current_op)
                        dev_warn(&adapter->pdev->dev, "Expected response %d from PF, received %d\n",
index 8e96c35..7460bdb 100644
@@ -383,7 +383,7 @@ static void igb_dump(struct igb_adapter *adapter)
                dev_info(&adapter->pdev->dev, "Net device Info\n");
                pr_info("Device Name     state            trans_start      last_rx\n");
                pr_info("%-15s %016lX %016lX %016lX\n", netdev->name,
-                       netdev->state, netdev->trans_start, netdev->last_rx);
+                       netdev->state, dev_trans_start(netdev), netdev->last_rx);
        }
 
        /* Print Registers */
index d10ed62..9f2db18 100644
@@ -143,14 +143,11 @@ struct vf_data_storage {
        unsigned char vf_mac_addresses[ETH_ALEN];
        u16 vf_mc_hashes[IXGBE_MAX_VF_MC_ENTRIES];
        u16 num_vf_mc_hashes;
-       u16 default_vf_vlan_id;
-       u16 vlans_enabled;
        bool clear_to_send;
        bool pf_set_mac;
        u16 pf_vlan; /* When set, guest VLAN config not allowed. */
        u16 pf_qos;
        u16 tx_rate;
-       u16 vlan_count;
        u8 spoofchk_enabled;
        bool rss_query_enabled;
        u8 trusted;
@@ -173,7 +170,7 @@ struct vf_macvlans {
 };
 
 #define IXGBE_MAX_TXD_PWR      14
-#define IXGBE_MAX_DATA_PER_TXD (1 << IXGBE_MAX_TXD_PWR)
+#define IXGBE_MAX_DATA_PER_TXD (1u << IXGBE_MAX_TXD_PWR)
 
 /* Tx Descriptors needed, worst case */
 #define TXD_USE_COUNT(S) DIV_ROUND_UP((S), IXGBE_MAX_DATA_PER_TXD)
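
With IXGBE_MAX_TXD_PWR = 14 a descriptor carries at most 1u << 14 = 16384 bytes, and TXD_USE_COUNT() rounds up, so a 60000-byte payload needs four descriptors. The 1u keeps the shift on an unsigned type. A quick check of the arithmetic, with the macros re-declared for userspace:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

#define MAX_TXD_PWR             14
#define MAX_DATA_PER_TXD        (1u << MAX_TXD_PWR)
#define TXD_USE_COUNT(S)        DIV_ROUND_UP((S), MAX_DATA_PER_TXD)

int main(void)
{
        printf("%u\n", TXD_USE_COUNT(16384u));  /* 1 */
        printf("%u\n", TXD_USE_COUNT(16385u));  /* 2 */
        printf("%u\n", TXD_USE_COUNT(60000u));  /* 4 */
        return 0;
}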
@@ -623,44 +620,45 @@ struct ixgbe_adapter {
         * thus the additional *_CAPABLE flags.
         */
        u32 flags;
-#define IXGBE_FLAG_MSI_ENABLED                  (u32)(1 << 1)
-#define IXGBE_FLAG_MSIX_ENABLED                 (u32)(1 << 3)
-#define IXGBE_FLAG_RX_1BUF_CAPABLE              (u32)(1 << 4)
-#define IXGBE_FLAG_RX_PS_CAPABLE                (u32)(1 << 5)
-#define IXGBE_FLAG_RX_PS_ENABLED                (u32)(1 << 6)
-#define IXGBE_FLAG_DCA_ENABLED                  (u32)(1 << 8)
-#define IXGBE_FLAG_DCA_CAPABLE                  (u32)(1 << 9)
-#define IXGBE_FLAG_IMIR_ENABLED                 (u32)(1 << 10)
-#define IXGBE_FLAG_MQ_CAPABLE                   (u32)(1 << 11)
-#define IXGBE_FLAG_DCB_ENABLED                  (u32)(1 << 12)
-#define IXGBE_FLAG_VMDQ_CAPABLE                 (u32)(1 << 13)
-#define IXGBE_FLAG_VMDQ_ENABLED                 (u32)(1 << 14)
-#define IXGBE_FLAG_FAN_FAIL_CAPABLE             (u32)(1 << 15)
-#define IXGBE_FLAG_NEED_LINK_UPDATE             (u32)(1 << 16)
-#define IXGBE_FLAG_NEED_LINK_CONFIG             (u32)(1 << 17)
-#define IXGBE_FLAG_FDIR_HASH_CAPABLE            (u32)(1 << 18)
-#define IXGBE_FLAG_FDIR_PERFECT_CAPABLE         (u32)(1 << 19)
-#define IXGBE_FLAG_FCOE_CAPABLE                 (u32)(1 << 20)
-#define IXGBE_FLAG_FCOE_ENABLED                 (u32)(1 << 21)
-#define IXGBE_FLAG_SRIOV_CAPABLE                (u32)(1 << 22)
-#define IXGBE_FLAG_SRIOV_ENABLED                (u32)(1 << 23)
+#define IXGBE_FLAG_MSI_ENABLED                 BIT(1)
+#define IXGBE_FLAG_MSIX_ENABLED                        BIT(3)
+#define IXGBE_FLAG_RX_1BUF_CAPABLE             BIT(4)
+#define IXGBE_FLAG_RX_PS_CAPABLE               BIT(5)
+#define IXGBE_FLAG_RX_PS_ENABLED               BIT(6)
+#define IXGBE_FLAG_DCA_ENABLED                 BIT(8)
+#define IXGBE_FLAG_DCA_CAPABLE                 BIT(9)
+#define IXGBE_FLAG_IMIR_ENABLED                        BIT(10)
+#define IXGBE_FLAG_MQ_CAPABLE                  BIT(11)
+#define IXGBE_FLAG_DCB_ENABLED                 BIT(12)
+#define IXGBE_FLAG_VMDQ_CAPABLE                        BIT(13)
+#define IXGBE_FLAG_VMDQ_ENABLED                        BIT(14)
+#define IXGBE_FLAG_FAN_FAIL_CAPABLE            BIT(15)
+#define IXGBE_FLAG_NEED_LINK_UPDATE            BIT(16)
+#define IXGBE_FLAG_NEED_LINK_CONFIG            BIT(17)
+#define IXGBE_FLAG_FDIR_HASH_CAPABLE           BIT(18)
+#define IXGBE_FLAG_FDIR_PERFECT_CAPABLE                BIT(19)
+#define IXGBE_FLAG_FCOE_CAPABLE                        BIT(20)
+#define IXGBE_FLAG_FCOE_ENABLED                        BIT(21)
+#define IXGBE_FLAG_SRIOV_CAPABLE               BIT(22)
+#define IXGBE_FLAG_SRIOV_ENABLED               BIT(23)
 #define IXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE       BIT(24)
 #define IXGBE_FLAG_RX_HWTSTAMP_ENABLED         BIT(25)
 #define IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER     BIT(26)
+#define IXGBE_FLAG_DCB_CAPABLE                 BIT(27)
 
        u32 flags2;
-#define IXGBE_FLAG2_RSC_CAPABLE                 (u32)(1 << 0)
-#define IXGBE_FLAG2_RSC_ENABLED                 (u32)(1 << 1)
-#define IXGBE_FLAG2_TEMP_SENSOR_CAPABLE         (u32)(1 << 2)
-#define IXGBE_FLAG2_TEMP_SENSOR_EVENT           (u32)(1 << 3)
-#define IXGBE_FLAG2_SEARCH_FOR_SFP              (u32)(1 << 4)
-#define IXGBE_FLAG2_SFP_NEEDS_RESET             (u32)(1 << 5)
-#define IXGBE_FLAG2_RESET_REQUESTED             (u32)(1 << 6)
-#define IXGBE_FLAG2_FDIR_REQUIRES_REINIT        (u32)(1 << 7)
-#define IXGBE_FLAG2_RSS_FIELD_IPV4_UDP         (u32)(1 << 8)
-#define IXGBE_FLAG2_RSS_FIELD_IPV6_UDP         (u32)(1 << 9)
-#define IXGBE_FLAG2_PTP_PPS_ENABLED            (u32)(1 << 10)
-#define IXGBE_FLAG2_PHY_INTERRUPT              (u32)(1 << 11)
+#define IXGBE_FLAG2_RSC_CAPABLE                        BIT(0)
+#define IXGBE_FLAG2_RSC_ENABLED                        BIT(1)
+#define IXGBE_FLAG2_TEMP_SENSOR_CAPABLE                BIT(2)
+#define IXGBE_FLAG2_TEMP_SENSOR_EVENT          BIT(3)
+#define IXGBE_FLAG2_SEARCH_FOR_SFP             BIT(4)
+#define IXGBE_FLAG2_SFP_NEEDS_RESET            BIT(5)
+#define IXGBE_FLAG2_RESET_REQUESTED            BIT(6)
+#define IXGBE_FLAG2_FDIR_REQUIRES_REINIT       BIT(7)
+#define IXGBE_FLAG2_RSS_FIELD_IPV4_UDP         BIT(8)
+#define IXGBE_FLAG2_RSS_FIELD_IPV6_UDP         BIT(9)
+#define IXGBE_FLAG2_PTP_PPS_ENABLED            BIT(10)
+#define IXGBE_FLAG2_PHY_INTERRUPT              BIT(11)
 #define IXGBE_FLAG2_VXLAN_REREG_NEEDED         BIT(12)
 #define IXGBE_FLAG2_VLAN_PROMISC               BIT(13)
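
The wholesale switch to BIT() above is more than cosmetic: in the kernel BIT(x) expands to (1UL << (x)), so BIT(31) is a well-defined unsigned shift, whereas (1 << 31) shifts into the sign bit of a signed int, which is undefined behaviour. BIT_ULL() is the 64-bit sibling used later for the per-vector queue masks. Userspace stand-ins for both:

#include <stdint.h>
#include <stdio.h>

/* Userspace stand-ins for the kernel macros. */
#define BIT(nr)         (1UL << (nr))
#define BIT_ULL(nr)     (1ULL << (nr))

int main(void)
{
        uint32_t flags = 0;

        flags |= BIT(31);       /* well defined: unsigned shift */
        /* flags |= 1 << 31; */ /* UB: shifts into the sign bit */

        printf("flags = 0x%08x\n", (unsigned int)flags);
        printf("BIT_ULL(40) = 0x%llx\n",
               (unsigned long long)BIT_ULL(40));
        return 0;
}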
 
@@ -795,7 +793,7 @@ struct ixgbe_adapter {
        unsigned long fwd_bitmask; /* Bitmask indicating in use pools */
 
 #define IXGBE_MAX_LINK_HANDLE 10
-       struct ixgbe_mat_field *jump_tables[IXGBE_MAX_LINK_HANDLE];
+       struct ixgbe_jump_table *jump_tables[IXGBE_MAX_LINK_HANDLE];
        unsigned long tables;
 
 /* maximum number of RETA entries among all devices supported by ixgbe
@@ -806,6 +804,8 @@ struct ixgbe_adapter {
 
 #define IXGBE_RSS_KEY_SIZE     40  /* size of RSS Hash Key in bytes */
        u32 rss_key[IXGBE_RSS_KEY_SIZE / sizeof(u32)];
+
+       bool need_crosstalk_fix;
 };
 
 static inline u8 ixgbe_max_rss_indices(struct ixgbe_adapter *adapter)
@@ -828,7 +828,7 @@ struct ixgbe_fdir_filter {
        struct hlist_node fdir_node;
        union ixgbe_atr_input filter;
        u16 sw_idx;
-       u16 action;
+       u64 action;
 };
 
 enum ixgbe_state_t {
@@ -896,8 +896,8 @@ void ixgbe_configure_tx_ring(struct ixgbe_adapter *, struct ixgbe_ring *);
 void ixgbe_disable_rx_queue(struct ixgbe_adapter *adapter, struct ixgbe_ring *);
 void ixgbe_update_stats(struct ixgbe_adapter *adapter);
 int ixgbe_init_interrupt_scheme(struct ixgbe_adapter *adapter);
-int ixgbe_wol_supported(struct ixgbe_adapter *adapter, u16 device_id,
-                              u16 subdevice_id);
+bool ixgbe_wol_supported(struct ixgbe_adapter *adapter, u16 device_id,
+                        u16 subdevice_id);
 #ifdef CONFIG_PCI_IOV
 void ixgbe_full_sync_mac_table(struct ixgbe_adapter *adapter);
 #endif
index 6ecd598..fb51be7 100644
@@ -792,7 +792,7 @@ mac_reset_top:
        }
 
        gheccr = IXGBE_READ_REG(hw, IXGBE_GHECCR);
-       gheccr &= ~((1 << 21) | (1 << 18) | (1 << 9) | (1 << 6));
+       gheccr &= ~(BIT(21) | BIT(18) | BIT(9) | BIT(6));
        IXGBE_WRITE_REG(hw, IXGBE_GHECCR, gheccr);
 
        /*
@@ -914,10 +914,10 @@ static s32 ixgbe_set_vfta_82598(struct ixgbe_hw *hw, u32 vlan, u32 vind,
        bits = IXGBE_READ_REG(hw, IXGBE_VFTA(regindex));
        if (vlan_on)
                /* Turn on this VLAN id */
-               bits |= (1 << bitindex);
+               bits |= BIT(bitindex);
        else
                /* Turn off this VLAN id */
-               bits &= ~(1 << bitindex);
+               bits &= ~BIT(bitindex);
        IXGBE_WRITE_REG(hw, IXGBE_VFTA(regindex), bits);
 
        return 0;
index 0151978..47afed7 100644
@@ -1296,17 +1296,17 @@ s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 fdirctrl)
 #define IXGBE_COMPUTE_SIG_HASH_ITERATION(_n) \
 do { \
        u32 n = (_n); \
-       if (IXGBE_ATR_COMMON_HASH_KEY & (0x01 << n)) \
+       if (IXGBE_ATR_COMMON_HASH_KEY & BIT(n)) \
                common_hash ^= lo_hash_dword >> n; \
-       else if (IXGBE_ATR_BUCKET_HASH_KEY & (0x01 << n)) \
+       else if (IXGBE_ATR_BUCKET_HASH_KEY & BIT(n)) \
                bucket_hash ^= lo_hash_dword >> n; \
-       else if (IXGBE_ATR_SIGNATURE_HASH_KEY & (0x01 << n)) \
+       else if (IXGBE_ATR_SIGNATURE_HASH_KEY & BIT(n)) \
                sig_hash ^= lo_hash_dword << (16 - n); \
-       if (IXGBE_ATR_COMMON_HASH_KEY & (0x01 << (n + 16))) \
+       if (IXGBE_ATR_COMMON_HASH_KEY & BIT(n + 16)) \
                common_hash ^= hi_hash_dword >> n; \
-       else if (IXGBE_ATR_BUCKET_HASH_KEY & (0x01 << (n + 16))) \
+       else if (IXGBE_ATR_BUCKET_HASH_KEY & BIT(n + 16)) \
                bucket_hash ^= hi_hash_dword >> n; \
-       else if (IXGBE_ATR_SIGNATURE_HASH_KEY & (0x01 << (n + 16))) \
+       else if (IXGBE_ATR_SIGNATURE_HASH_KEY & BIT(n + 16)) \
                sig_hash ^= hi_hash_dword << (16 - n); \
 } while (0)
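
The iteration macro folds the input dwords into the running hashes one key bit at a time: whichever hash keys have bit n (or n + 16) set absorb the correspondingly shifted dword via XOR. A much-simplified flavour of that fold, not the exact 82599 algorithm:

#include <stdint.h>
#include <stdio.h>

/* Every set bit n of the 16-bit key XORs the input dword, shifted
 * right by n, into the running hash.
 */
static uint16_t fold_hash(uint16_t key, uint32_t dword)
{
        uint16_t hash = 0;
        int n;

        for (n = 0; n < 16; n++)
                if (key & (1u << n))
                        hash ^= (uint16_t)(dword >> n);
        return hash;
}

int main(void)
{
        printf("0x%04x\n", (unsigned int)fold_hash(0x3DB6, 0xdeadbeef));
        return 0;
}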
 
@@ -1440,9 +1440,9 @@ s32 ixgbe_fdir_add_signature_filter_82599(struct ixgbe_hw *hw,
 #define IXGBE_COMPUTE_BKT_HASH_ITERATION(_n) \
 do { \
        u32 n = (_n); \
-       if (IXGBE_ATR_BUCKET_HASH_KEY & (0x01 << n)) \
+       if (IXGBE_ATR_BUCKET_HASH_KEY & BIT(n)) \
                bucket_hash ^= lo_hash_dword >> n; \
-       if (IXGBE_ATR_BUCKET_HASH_KEY & (0x01 << (n + 16))) \
+       if (IXGBE_ATR_BUCKET_HASH_KEY & BIT(n + 16)) \
                bucket_hash ^= hi_hash_dword >> n; \
 } while (0)
 
index 737443a..902d206 100644
@@ -825,8 +825,8 @@ s32 ixgbe_init_eeprom_params_generic(struct ixgbe_hw *hw)
                         */
                        eeprom_size = (u16)((eec & IXGBE_EEC_SIZE) >>
                                            IXGBE_EEC_SIZE_SHIFT);
-                       eeprom->word_size = 1 << (eeprom_size +
-                                                 IXGBE_EEPROM_WORD_SIZE_SHIFT);
+                       eeprom->word_size = BIT(eeprom_size +
+                                                IXGBE_EEPROM_WORD_SIZE_SHIFT);
                }
 
                if (eec & IXGBE_EEC_ADDR_SIZE)
@@ -1502,7 +1502,7 @@ static void ixgbe_shift_out_eeprom_bits(struct ixgbe_hw *hw, u16 data,
         * Mask is used to shift "count" bits of "data" out to the EEPROM
         * one bit at a time.  Determine the starting bit based on count
         */
-       mask = 0x01 << (count - 1);
+       mask = BIT(count - 1);
 
        for (i = 0; i < count; i++) {
                /*
@@ -1991,7 +1991,7 @@ static void ixgbe_set_mta(struct ixgbe_hw *hw, u8 *mc_addr)
         */
        vector_reg = (vector >> 5) & 0x7F;
        vector_bit = vector & 0x1F;
-       hw->mac.mta_shadow[vector_reg] |= (1 << vector_bit);
+       hw->mac.mta_shadow[vector_reg] |= BIT(vector_bit);
 }
 
 /**
@@ -2921,10 +2921,10 @@ s32 ixgbe_clear_vmdq_generic(struct ixgbe_hw *hw, u32 rar, u32 vmdq)
                        mpsar_hi = 0;
                }
        } else if (vmdq < 32) {
-               mpsar_lo &= ~(1 << vmdq);
+               mpsar_lo &= ~BIT(vmdq);
                IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(rar), mpsar_lo);
        } else {
-               mpsar_hi &= ~(1 << (vmdq - 32));
+               mpsar_hi &= ~BIT(vmdq - 32);
                IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(rar), mpsar_hi);
        }
 
@@ -2953,11 +2953,11 @@ s32 ixgbe_set_vmdq_generic(struct ixgbe_hw *hw, u32 rar, u32 vmdq)
 
        if (vmdq < 32) {
                mpsar = IXGBE_READ_REG(hw, IXGBE_MPSAR_LO(rar));
-               mpsar |= 1 << vmdq;
+               mpsar |= BIT(vmdq);
                IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(rar), mpsar);
        } else {
                mpsar = IXGBE_READ_REG(hw, IXGBE_MPSAR_HI(rar));
-               mpsar |= 1 << (vmdq - 32);
+               mpsar |= BIT(vmdq - 32);
                IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(rar), mpsar);
        }
        return 0;
@@ -2978,11 +2978,11 @@ s32 ixgbe_set_vmdq_san_mac_generic(struct ixgbe_hw *hw, u32 vmdq)
        u32 rar = hw->mac.san_mac_rar_index;
 
        if (vmdq < 32) {
-               IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(rar), 1 << vmdq);
+               IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(rar), BIT(vmdq));
                IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(rar), 0);
        } else {
                IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(rar), 0);
-               IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(rar), 1 << (vmdq - 32));
+               IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(rar), BIT(vmdq - 32));
        }
 
        return 0;
@@ -3082,7 +3082,7 @@ s32 ixgbe_set_vfta_generic(struct ixgbe_hw *hw, u32 vlan, u32 vind,
         *    bits[4-0]:  which bit in the register
         */
        regidx = vlan / 32;
-       vfta_delta = 1 << (vlan % 32);
+       vfta_delta = BIT(vlan % 32);
        vfta = IXGBE_READ_REG(hw, IXGBE_VFTA(regidx));
 
        /* vfta_delta represents the difference between the current value
@@ -3113,12 +3113,12 @@ s32 ixgbe_set_vfta_generic(struct ixgbe_hw *hw, u32 vlan, u32 vind,
        bits = IXGBE_READ_REG(hw, IXGBE_VLVFB(vlvf_index * 2 + vind / 32));
 
        /* set the pool bit */
-       bits |= 1 << (vind % 32);
+       bits |= BIT(vind % 32);
        if (vlan_on)
                goto vlvf_update;
 
        /* clear the pool bit */
-       bits ^= 1 << (vind % 32);
+       bits ^= BIT(vind % 32);
 
        if (!bits &&
            !IXGBE_READ_REG(hw, IXGBE_VLVFB(vlvf_index * 2 + 1 - vind / 32))) {
@@ -3310,43 +3310,25 @@ wwn_prefix_err:
 /**
  *  ixgbe_set_mac_anti_spoofing - Enable/Disable MAC anti-spoofing
  *  @hw: pointer to hardware structure
- *  @enable: enable or disable switch for anti-spoofing
- *  @pf: Physical Function pool - do not enable anti-spoofing for the PF
+ *  @enable: enable or disable switch for MAC anti-spoofing
+ *  @vf: Virtual Function pool - VF Pool to set for MAC anti-spoofing
  *
  **/
-void ixgbe_set_mac_anti_spoofing(struct ixgbe_hw *hw, bool enable, int pf)
+void ixgbe_set_mac_anti_spoofing(struct ixgbe_hw *hw, bool enable, int vf)
 {
-       int j;
-       int pf_target_reg = pf >> 3;
-       int pf_target_shift = pf % 8;
-       u32 pfvfspoof = 0;
+       int vf_target_reg = vf >> 3;
+       int vf_target_shift = vf % 8;
+       u32 pfvfspoof;
 
        if (hw->mac.type == ixgbe_mac_82598EB)
                return;
 
+       pfvfspoof = IXGBE_READ_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg));
        if (enable)
-               pfvfspoof = IXGBE_SPOOF_MACAS_MASK;
-
-       /*
-        * PFVFSPOOF register array is size 8 with 8 bits assigned to
-        * MAC anti-spoof enables in each register array element.
-        */
-       for (j = 0; j < pf_target_reg; j++)
-               IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(j), pfvfspoof);
-
-       /*
-        * The PF should be allowed to spoof so that it can support
-        * emulation mode NICs.  Do not set the bits assigned to the PF
-        */
-       pfvfspoof &= (1 << pf_target_shift) - 1;
-       IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(j), pfvfspoof);
-
-       /*
-        * Remaining pools belong to the PF so they do not need to have
-        * anti-spoofing enabled.
-        */
-       for (j++; j < IXGBE_PFVFSPOOF_REG_COUNT; j++)
-               IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(j), 0);
+               pfvfspoof |= BIT(vf_target_shift);
+       else
+               pfvfspoof &= ~BIT(vf_target_shift);
+       IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg), pfvfspoof);
 }
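
The rewritten helper now touches only its own VF's bit. Eight anti-spoof enable bits live in each PFVFSPOOF register, so VF n maps to register n >> 3, bit n % 8 (VF 13 lands in register 1, bit 5), and the read-modify-write leaves the other seven VFs in that register untouched. A sketch against a fake register file:

#include <stdint.h>
#include <stdio.h>

static uint32_t pfvfspoof[8];   /* fake register file */

static void set_mac_anti_spoofing(int enable, int vf)
{
        int reg = vf >> 3;      /* 8 enable bits per register */
        int bit = vf % 8;

        if (enable)
                pfvfspoof[reg] |= 1u << bit;
        else
                pfvfspoof[reg] &= ~(1u << bit);
}

int main(void)
{
        set_mac_anti_spoofing(1, 13);   /* reg 1, bit 5 */
        set_mac_anti_spoofing(1, 2);    /* reg 0, bit 2 */
        set_mac_anti_spoofing(0, 13);   /* clears only that bit */
        printf("reg0=0x%02x reg1=0x%02x\n",
               (unsigned int)pfvfspoof[0], (unsigned int)pfvfspoof[1]);
        return 0;
}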
 
 /**
@@ -3367,9 +3349,9 @@ void ixgbe_set_vlan_anti_spoofing(struct ixgbe_hw *hw, bool enable, int vf)
 
        pfvfspoof = IXGBE_READ_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg));
        if (enable)
-               pfvfspoof |= (1 << vf_target_shift);
+               pfvfspoof |= BIT(vf_target_shift);
        else
-               pfvfspoof &= ~(1 << vf_target_shift);
+               pfvfspoof &= ~BIT(vf_target_shift);
        IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg), pfvfspoof);
 }
 
index 6f8e6a5..6d4c260 100644
@@ -106,7 +106,7 @@ s32 prot_autoc_write_generic(struct ixgbe_hw *hw, u32 reg_val, bool locked);
 
 s32 ixgbe_blink_led_start_generic(struct ixgbe_hw *hw, u32 index);
 s32 ixgbe_blink_led_stop_generic(struct ixgbe_hw *hw, u32 index);
-void ixgbe_set_mac_anti_spoofing(struct ixgbe_hw *hw, bool enable, int pf);
+void ixgbe_set_mac_anti_spoofing(struct ixgbe_hw *hw, bool enable, int vf);
 void ixgbe_set_vlan_anti_spoofing(struct ixgbe_hw *hw, bool enable, int vf);
 s32 ixgbe_get_device_caps_generic(struct ixgbe_hw *hw, u16 *device_caps);
 s32 ixgbe_set_fw_drv_ver_generic(struct ixgbe_hw *hw, u8 maj, u8 min,
index f8fb2ac..072ef3b 100644
@@ -186,7 +186,7 @@ void ixgbe_dcb_unpack_pfc(struct ixgbe_dcb_config *cfg, u8 *pfc_en)
 
        for (*pfc_en = 0, tc = 0; tc < MAX_TRAFFIC_CLASS; tc++) {
                if (tc_config[tc].dcb_pfc != pfc_disabled)
-                       *pfc_en |= 1 << tc;
+                       *pfc_en |= BIT(tc);
        }
 }
 
@@ -232,7 +232,7 @@ void ixgbe_dcb_unpack_prio(struct ixgbe_dcb_config *cfg, int direction,
 u8 ixgbe_dcb_get_tc_from_up(struct ixgbe_dcb_config *cfg, int direction, u8 up)
 {
        struct tc_configuration *tc_config = &cfg->tc_config[0];
-       u8 prio_mask = 1 << up;
+       u8 prio_mask = BIT(up);
        u8 tc = cfg->num_tcs.pg_tcs;
 
        /* If tc is 0 then DCB is likely not enabled or supported */
index d3ba63f..b79e93a 100644
@@ -210,7 +210,7 @@ s32 ixgbe_dcb_config_pfc_82598(struct ixgbe_hw *hw, u8 pfc_en)
 
        /* Configure PFC Tx thresholds per TC */
        for (i = 0; i < MAX_TRAFFIC_CLASS; i++) {
-               if (!(pfc_en & (1 << i))) {
+               if (!(pfc_en & BIT(i))) {
                        IXGBE_WRITE_REG(hw, IXGBE_FCRTL(i), 0);
                        IXGBE_WRITE_REG(hw, IXGBE_FCRTH(i), 0);
                        continue;
index b5cc989..1011d64 100644
@@ -248,7 +248,7 @@ s32 ixgbe_dcb_config_pfc_82599(struct ixgbe_hw *hw, u8 pfc_en, u8 *prio_tc)
                int enabled = 0;
 
                for (j = 0; j < MAX_USER_PRIORITY; j++) {
-                       if ((prio_tc[j] == i) && (pfc_en & (1 << j))) {
+                       if ((prio_tc[j] == i) && (pfc_en & BIT(j))) {
                                enabled = 1;
                                break;
                        }
index 2707bda..b8fc3cf 100644
@@ -62,7 +62,7 @@ static int ixgbe_copy_dcb_cfg(struct ixgbe_adapter *adapter, int tc_max)
                             };
        u8 up = dcb_getapp(adapter->netdev, &app);
 
-       if (up && !(up & (1 << adapter->fcoe.up)))
+       if (up && !(up & BIT(adapter->fcoe.up)))
                changes |= BIT_APP_UPCHG;
 #endif
 
@@ -657,7 +657,7 @@ static int ixgbe_dcbnl_ieee_setapp(struct net_device *dev,
            app->protocol == ETH_P_FCOE) {
                u8 app_mask = dcb_ieee_getapp_mask(dev, app);
 
-               if (app_mask & (1 << adapter->fcoe.up))
+               if (app_mask & BIT(adapter->fcoe.up))
                        return 0;
 
                adapter->fcoe.up = app->priority;
@@ -700,7 +700,7 @@ static int ixgbe_dcbnl_ieee_delapp(struct net_device *dev,
            app->protocol == ETH_P_FCOE) {
                u8 app_mask = dcb_ieee_getapp_mask(dev, app);
 
-               if (app_mask & (1 << adapter->fcoe.up))
+               if (app_mask & BIT(adapter->fcoe.up))
                        return 0;
 
                adapter->fcoe.up = app_mask ?
index 9f76be1..59b771b 100644
@@ -533,10 +533,8 @@ static void ixgbe_get_regs(struct net_device *netdev,
 
        /* Flow Control */
        regs_buff[30] = IXGBE_READ_REG(hw, IXGBE_PFCTOP);
-       regs_buff[31] = IXGBE_READ_REG(hw, IXGBE_FCTTV(0));
-       regs_buff[32] = IXGBE_READ_REG(hw, IXGBE_FCTTV(1));
-       regs_buff[33] = IXGBE_READ_REG(hw, IXGBE_FCTTV(2));
-       regs_buff[34] = IXGBE_READ_REG(hw, IXGBE_FCTTV(3));
+       for (i = 0; i < 4; i++)
+               regs_buff[31 + i] = IXGBE_READ_REG(hw, IXGBE_FCTTV(i));
        for (i = 0; i < 8; i++) {
                switch (hw->mac.type) {
                case ixgbe_mac_82598EB:
@@ -720,8 +718,10 @@ static void ixgbe_get_regs(struct net_device *netdev,
        regs_buff[939] = IXGBE_GET_STAT(adapter, bprc);
        regs_buff[940] = IXGBE_GET_STAT(adapter, mprc);
        regs_buff[941] = IXGBE_GET_STAT(adapter, gptc);
-       regs_buff[942] = IXGBE_GET_STAT(adapter, gorc);
-       regs_buff[944] = IXGBE_GET_STAT(adapter, gotc);
+       regs_buff[942] = (u32)IXGBE_GET_STAT(adapter, gorc);
+       regs_buff[943] = (u32)(IXGBE_GET_STAT(adapter, gorc) >> 32);
+       regs_buff[944] = (u32)IXGBE_GET_STAT(adapter, gotc);
+       regs_buff[945] = (u32)(IXGBE_GET_STAT(adapter, gotc) >> 32);
        for (i = 0; i < 8; i++)
                regs_buff[946 + i] = IXGBE_GET_STAT(adapter, rnbc[i]);
        regs_buff[954] = IXGBE_GET_STAT(adapter, ruc);
@@ -731,7 +731,8 @@ static void ixgbe_get_regs(struct net_device *netdev,
        regs_buff[958] = IXGBE_GET_STAT(adapter, mngprc);
        regs_buff[959] = IXGBE_GET_STAT(adapter, mngpdc);
        regs_buff[960] = IXGBE_GET_STAT(adapter, mngptc);
-       regs_buff[961] = IXGBE_GET_STAT(adapter, tor);
+       regs_buff[961] = (u32)IXGBE_GET_STAT(adapter, tor);
+       regs_buff[962] = (u32)(IXGBE_GET_STAT(adapter, tor) >> 32);
        regs_buff[963] = IXGBE_GET_STAT(adapter, tpr);
        regs_buff[964] = IXGBE_GET_STAT(adapter, tpt);
        regs_buff[965] = IXGBE_GET_STAT(adapter, ptc64);
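
gorc, gotc and tor are 64-bit statistics but regs_buff holds u32 entries, so each counter now occupies two consecutive slots, low word first; the old code truncated the value and left the odd slot (e.g. 943) unwritten. The split in isolation:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t gorc = 0x0000000123456789ULL;  /* example counter */
        uint32_t buff[2];

        buff[0] = (uint32_t)gorc;               /* low 32 bits  */
        buff[1] = (uint32_t)(gorc >> 32);       /* high 32 bits */
        printf("lo=0x%08x hi=0x%08x\n",
               (unsigned int)buff[0], (unsigned int)buff[1]);
        return 0;
}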
@@ -803,15 +804,11 @@ static void ixgbe_get_regs(struct net_device *netdev,
                regs_buff[1096 + i] = IXGBE_READ_REG(hw, IXGBE_TIC_DW(i));
        regs_buff[1100] = IXGBE_READ_REG(hw, IXGBE_TDPROBE);
        regs_buff[1101] = IXGBE_READ_REG(hw, IXGBE_TXBUFCTRL);
-       regs_buff[1102] = IXGBE_READ_REG(hw, IXGBE_TXBUFDATA0);
-       regs_buff[1103] = IXGBE_READ_REG(hw, IXGBE_TXBUFDATA1);
-       regs_buff[1104] = IXGBE_READ_REG(hw, IXGBE_TXBUFDATA2);
-       regs_buff[1105] = IXGBE_READ_REG(hw, IXGBE_TXBUFDATA3);
+       for (i = 0; i < 4; i++)
+               regs_buff[1102 + i] = IXGBE_READ_REG(hw, IXGBE_TXBUFDATA(i));
        regs_buff[1106] = IXGBE_READ_REG(hw, IXGBE_RXBUFCTRL);
-       regs_buff[1107] = IXGBE_READ_REG(hw, IXGBE_RXBUFDATA0);
-       regs_buff[1108] = IXGBE_READ_REG(hw, IXGBE_RXBUFDATA1);
-       regs_buff[1109] = IXGBE_READ_REG(hw, IXGBE_RXBUFDATA2);
-       regs_buff[1110] = IXGBE_READ_REG(hw, IXGBE_RXBUFDATA3);
+       for (i = 0; i < 4; i++)
+               regs_buff[1107 + i] = IXGBE_READ_REG(hw, IXGBE_RXBUFDATA(i));
        for (i = 0; i < 8; i++)
                regs_buff[1111 + i] = IXGBE_READ_REG(hw, IXGBE_PCIE_DIAG(i));
        regs_buff[1119] = IXGBE_READ_REG(hw, IXGBE_RFVAL);
@@ -1586,7 +1583,7 @@ static int ixgbe_intr_test(struct ixgbe_adapter *adapter, u64 *data)
        /* Test each interrupt */
        for (; i < 10; i++) {
                /* Interrupt to test */
-               mask = 1 << i;
+               mask = BIT(i);
 
                if (!shared_int) {
                        /*
@@ -3014,14 +3011,14 @@ static int ixgbe_get_ts_info(struct net_device *dev,
                        info->phc_index = -1;
 
                info->tx_types =
-                       (1 << HWTSTAMP_TX_OFF) |
-                       (1 << HWTSTAMP_TX_ON);
+                       BIT(HWTSTAMP_TX_OFF) |
+                       BIT(HWTSTAMP_TX_ON);
 
                info->rx_filters =
-                       (1 << HWTSTAMP_FILTER_NONE) |
-                       (1 << HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
-                       (1 << HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
-                       (1 << HWTSTAMP_FILTER_PTP_V2_EVENT);
+                       BIT(HWTSTAMP_FILTER_NONE) |
+                       BIT(HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
+                       BIT(HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
+                       BIT(HWTSTAMP_FILTER_PTP_V2_EVENT);
                break;
        default:
                return ethtool_op_get_ts_info(dev, info);
index b2f2cf4..d08fbcf 100644
@@ -53,6 +53,7 @@
 #include <net/vxlan.h>
 #include <net/pkt_cls.h>
 #include <net/tc_act/tc_gact.h>
+#include <net/tc_act/tc_mirred.h>
 
 #include "ixgbe.h"
 #include "ixgbe_common.h"
@@ -371,6 +372,27 @@ u32 ixgbe_read_reg(struct ixgbe_hw *hw, u32 reg)
 
        if (ixgbe_removed(reg_addr))
                return IXGBE_FAILED_READ_REG;
+       if (unlikely(hw->phy.nw_mng_if_sel &
+                    IXGBE_NW_MNG_IF_SEL_ENABLE_10_100M)) {
+               struct ixgbe_adapter *adapter;
+               int i;
+
+               for (i = 0; i < 200; ++i) {
+                       value = readl(reg_addr + IXGBE_MAC_SGMII_BUSY);
+                       if (likely(!value))
+                               goto writes_completed;
+                       if (value == IXGBE_FAILED_READ_REG) {
+                               ixgbe_remove_adapter(hw);
+                               return IXGBE_FAILED_READ_REG;
+                       }
+                       udelay(5);
+               }
+
+               adapter = hw->back;
+               e_warn(hw, "register writes incomplete %08x\n", value);
+       }
+
+writes_completed:
        value = readl(reg_addr + reg);
        if (unlikely(value == IXGBE_FAILED_READ_REG))
                ixgbe_check_remove(hw, reg);
@@ -587,7 +609,7 @@ static void ixgbe_dump(struct ixgbe_adapter *adapter)
                pr_info("%-15s %016lX %016lX %016lX\n",
                        netdev->name,
                        netdev->state,
-                       netdev->trans_start,
+                       dev_trans_start(netdev),
                        netdev->last_rx);
        }
 
@@ -2224,7 +2246,7 @@ static void ixgbe_configure_msix(struct ixgbe_adapter *adapter)
 
        /* Populate MSIX to EITR Select */
        if (adapter->num_vfs > 32) {
-               u32 eitrsel = (1 << (adapter->num_vfs - 32)) - 1;
+               u32 eitrsel = BIT(adapter->num_vfs - 32) - 1;
                IXGBE_WRITE_REG(&adapter->hw, IXGBE_EITRSEL, eitrsel);
        }
 
@@ -2863,7 +2885,7 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
        if (adapter->rx_itr_setting & 1)
                ixgbe_set_itr(q_vector);
        if (!test_bit(__IXGBE_DOWN, &adapter->state))
-               ixgbe_irq_enable_queues(adapter, ((u64)1 << q_vector->v_idx));
+               ixgbe_irq_enable_queues(adapter, BIT_ULL(q_vector->v_idx));
 
        return 0;
 }
@@ -3156,15 +3178,15 @@ void ixgbe_configure_tx_ring(struct ixgbe_adapter *adapter,
         * currently 40.
         */
        if (!ring->q_vector || (ring->q_vector->itr < IXGBE_100K_ITR))
-               txdctl |= (1 << 16);    /* WTHRESH = 1 */
+               txdctl |= 1u << 16;     /* WTHRESH = 1 */
        else
-               txdctl |= (8 << 16);    /* WTHRESH = 8 */
+               txdctl |= 8u << 16;     /* WTHRESH = 8 */
 
        /*
         * Setting PTHRESH to 32 both improves performance
         * and avoids a TX hang with DFP enabled
         */
-       txdctl |= (1 << 8) |    /* HTHRESH = 1 */
+       txdctl |= (1u << 8) |   /* HTHRESH = 1 */
                   32;          /* PTHRESH = 32 */
 
        /* reinitialize flowdirector state */
@@ -3716,9 +3738,9 @@ static void ixgbe_setup_psrtype(struct ixgbe_adapter *adapter)
                return;
 
        if (rss_i > 3)
-               psrtype |= 2 << 29;
+               psrtype |= 2u << 29;
        else if (rss_i > 1)
-               psrtype |= 1 << 29;
+               psrtype |= 1u << 29;
 
        for_each_set_bit(pool, &adapter->fwd_bitmask, 32)
                IXGBE_WRITE_REG(hw, IXGBE_PSRTYPE(VMDQ_P(pool)), psrtype);
@@ -3745,9 +3767,9 @@ static void ixgbe_configure_virtualization(struct ixgbe_adapter *adapter)
        reg_offset = (VMDQ_P(0) >= 32) ? 1 : 0;
 
        /* Enable only the PF's pool for Tx/Rx */
-       IXGBE_WRITE_REG(hw, IXGBE_VFRE(reg_offset), (~0) << vf_shift);
+       IXGBE_WRITE_REG(hw, IXGBE_VFRE(reg_offset), GENMASK(31, vf_shift));
        IXGBE_WRITE_REG(hw, IXGBE_VFRE(reg_offset ^ 1), reg_offset - 1);
-       IXGBE_WRITE_REG(hw, IXGBE_VFTE(reg_offset), (~0) << vf_shift);
+       IXGBE_WRITE_REG(hw, IXGBE_VFTE(reg_offset), GENMASK(31, vf_shift));
        IXGBE_WRITE_REG(hw, IXGBE_VFTE(reg_offset ^ 1), reg_offset - 1);
        if (adapter->bridge_mode == BRIDGE_MODE_VEB)
                IXGBE_WRITE_REG(hw, IXGBE_PFDTXGSWC, IXGBE_PFDTXGSWC_VT_LBEN);
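
GENMASK(h, l) takes the high bit first and sets bits h down to l, so the pool-enable masks above are GENMASK(31, vf_shift), equivalent to the old (~0) << vf_shift on a 32-bit register. A small check with a userspace stand-in for the macro:

#include <stdint.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's GENMASK(h, l). */
#define GENMASK(h, l) \
        (((~0UL) << (l)) & (~0UL >> (8 * sizeof(unsigned long) - 1 - (h))))

int main(void)
{
        unsigned int vf_shift = 8;
        uint32_t mask = (uint32_t)GENMASK(31, vf_shift);

        printf("GENMASK(31,%u) = 0x%08x\n", vf_shift, (unsigned int)mask);
        printf("(~0u) << %u    = 0x%08x\n", vf_shift, ~0u << vf_shift);
        return 0;
}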
@@ -3776,34 +3798,10 @@ static void ixgbe_configure_virtualization(struct ixgbe_adapter *adapter)
 
        IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT, gcr_ext);
 
-
-       /* Enable MAC Anti-Spoofing */
-       hw->mac.ops.set_mac_anti_spoofing(hw, (adapter->num_vfs != 0),
-                                         adapter->num_vfs);
-
-       /* Ensure LLDP and FC is set for Ethertype Antispoofing if we will be
-        * calling set_ethertype_anti_spoofing for each VF in loop below
-        */
-       if (hw->mac.ops.set_ethertype_anti_spoofing) {
-               IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_LLDP),
-                               (IXGBE_ETQF_FILTER_EN    |
-                                IXGBE_ETQF_TX_ANTISPOOF |
-                                IXGBE_ETH_P_LLDP));
-
-               IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_FC),
-                               (IXGBE_ETQF_FILTER_EN |
-                                IXGBE_ETQF_TX_ANTISPOOF |
-                                ETH_P_PAUSE));
-       }
-
-       /* For VFs that have spoof checking turned off */
        for (i = 0; i < adapter->num_vfs; i++) {
-               if (!adapter->vfinfo[i].spoofchk_enabled)
-                       ixgbe_ndo_set_vf_spoofchk(adapter->netdev, i, false);
-
-               /* enable ethertype anti spoofing if hw supports it */
-               if (hw->mac.ops.set_ethertype_anti_spoofing)
-                       hw->mac.ops.set_ethertype_anti_spoofing(hw, true, i);
+               /* configure spoof checking */
+               ixgbe_ndo_set_vf_spoofchk(adapter->netdev, i,
+                                         adapter->vfinfo[i].spoofchk_enabled);
 
                /* Enable/Disable RSS query feature  */
                ixgbe_ndo_set_vf_rss_query_en(adapter->netdev, i,
@@ -3997,7 +3995,7 @@ void ixgbe_update_pf_promisc_vlvf(struct ixgbe_adapter *adapter, u32 vid)
         * entry other than the PF.
         */
        word = idx * 2 + (VMDQ_P(0) / 32);
-       bits = ~(1 << (VMDQ_P(0)) % 32);
+       bits = ~BIT(VMDQ_P(0) % 32);
        bits &= IXGBE_READ_REG(hw, IXGBE_VLVFB(word));
 
        /* Disable the filter so this falls into the default pool. */
@@ -4132,7 +4130,7 @@ static void ixgbe_vlan_promisc_enable(struct ixgbe_adapter *adapter)
                u32 reg_offset = IXGBE_VLVFB(i * 2 + VMDQ_P(0) / 32);
                u32 vlvfb = IXGBE_READ_REG(hw, reg_offset);
 
-               vlvfb |= 1 << (VMDQ_P(0) % 32);
+               vlvfb |= BIT(VMDQ_P(0) % 32);
                IXGBE_WRITE_REG(hw, reg_offset, vlvfb);
        }
 
@@ -4162,7 +4160,7 @@ static void ixgbe_scrub_vfta(struct ixgbe_adapter *adapter, u32 vfta_offset)
 
                if (vlvf) {
                        /* record VLAN ID in VFTA */
-                       vfta[(vid - vid_start) / 32] |= 1 << (vid % 32);
+                       vfta[(vid - vid_start) / 32] |= BIT(vid % 32);
 
                        /* if PF is part of this then continue */
                        if (test_bit(vid, adapter->active_vlans))
@@ -4171,7 +4169,7 @@ static void ixgbe_scrub_vfta(struct ixgbe_adapter *adapter, u32 vfta_offset)
 
                /* remove PF from the pool */
                word = i * 2 + VMDQ_P(0) / 32;
-               bits = ~(1 << (VMDQ_P(0) % 32));
+               bits = ~BIT(VMDQ_P(0) % 32);
                bits &= IXGBE_READ_REG(hw, IXGBE_VLVFB(word));
                IXGBE_WRITE_REG(hw, IXGBE_VLVFB(word), bits);
        }
@@ -4865,9 +4863,9 @@ static void ixgbe_fwd_psrtype(struct ixgbe_fwd_adapter *vadapter)
                return;
 
        if (rss_i > 3)
-               psrtype |= 2 << 29;
+               psrtype |= 2u << 29;
        else if (rss_i > 1)
-               psrtype |= 1 << 29;
+               psrtype |= 1u << 29;
 
        IXGBE_WRITE_REG(hw, IXGBE_PSRTYPE(VMDQ_P(pool)), psrtype);
 }
@@ -4931,7 +4929,7 @@ static void ixgbe_disable_fwd_ring(struct ixgbe_fwd_adapter *vadapter,
        /* shutdown specific queue receive and wait for dma to settle */
        ixgbe_disable_rx_queue(adapter, rx_ring);
        usleep_range(10000, 20000);
-       ixgbe_irq_disable_queues(adapter, ((u64)1 << index));
+       ixgbe_irq_disable_queues(adapter, BIT_ULL(index));
        ixgbe_clean_rx_ring(rx_ring);
        rx_ring->l2_accel_priv = NULL;
 }
@@ -5290,7 +5288,7 @@ void ixgbe_reinit_locked(struct ixgbe_adapter *adapter)
 {
        WARN_ON(in_interrupt());
        /* put off any impending NetWatchDogTimeout */
-       adapter->netdev->trans_start = jiffies;
+       netif_trans_update(adapter->netdev);
 
        while (test_and_set_bit(__IXGBE_RESETTING, &adapter->state))
                usleep_range(1000, 2000);
@@ -5561,6 +5559,58 @@ static void ixgbe_tx_timeout(struct net_device *netdev)
        ixgbe_tx_timeout_reset(adapter);
 }
 
+#ifdef CONFIG_IXGBE_DCB
+static void ixgbe_init_dcb(struct ixgbe_adapter *adapter)
+{
+       struct ixgbe_hw *hw = &adapter->hw;
+       struct tc_configuration *tc;
+       int j;
+
+       switch (hw->mac.type) {
+       case ixgbe_mac_82598EB:
+       case ixgbe_mac_82599EB:
+               adapter->dcb_cfg.num_tcs.pg_tcs = MAX_TRAFFIC_CLASS;
+               adapter->dcb_cfg.num_tcs.pfc_tcs = MAX_TRAFFIC_CLASS;
+               break;
+       case ixgbe_mac_X540:
+       case ixgbe_mac_X550:
+               adapter->dcb_cfg.num_tcs.pg_tcs = X540_TRAFFIC_CLASS;
+               adapter->dcb_cfg.num_tcs.pfc_tcs = X540_TRAFFIC_CLASS;
+               break;
+       case ixgbe_mac_X550EM_x:
+       case ixgbe_mac_x550em_a:
+       default:
+               adapter->dcb_cfg.num_tcs.pg_tcs = DEF_TRAFFIC_CLASS;
+               adapter->dcb_cfg.num_tcs.pfc_tcs = DEF_TRAFFIC_CLASS;
+               break;
+       }
+
+       /* Configure DCB traffic classes */
+       for (j = 0; j < MAX_TRAFFIC_CLASS; j++) {
+               tc = &adapter->dcb_cfg.tc_config[j];
+               tc->path[DCB_TX_CONFIG].bwg_id = 0;
+               tc->path[DCB_TX_CONFIG].bwg_percent = 12 + (j & 1);
+               tc->path[DCB_RX_CONFIG].bwg_id = 0;
+               tc->path[DCB_RX_CONFIG].bwg_percent = 12 + (j & 1);
+               tc->dcb_pfc = pfc_disabled;
+       }
+
+       /* Initialize default user to priority mapping, UPx->TC0 */
+       tc = &adapter->dcb_cfg.tc_config[0];
+       tc->path[DCB_TX_CONFIG].up_to_tc_bitmap = 0xFF;
+       tc->path[DCB_RX_CONFIG].up_to_tc_bitmap = 0xFF;
+
+       adapter->dcb_cfg.bw_percentage[DCB_TX_CONFIG][0] = 100;
+       adapter->dcb_cfg.bw_percentage[DCB_RX_CONFIG][0] = 100;
+       adapter->dcb_cfg.pfc_mode_enable = false;
+       adapter->dcb_set_bitmap = 0x00;
+       if (adapter->flags & IXGBE_FLAG_DCB_CAPABLE)
+               adapter->dcbx_cap = DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_CEE;
+       memcpy(&adapter->temp_dcb_cfg, &adapter->dcb_cfg,
+              sizeof(adapter->temp_dcb_cfg));
+}
+#endif
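The alternating 12 + (j & 1) bandwidth-group percentages are chosen so the eight traffic classes sum to exactly 100: four classes at 12% plus four at 13%. A quick standalone check, with MAX_TRAFFIC_CLASS assumed to be 8 as defined later in this patch:

#include <stdio.h>

#define MAX_TRAFFIC_CLASS 8

int main(void)
{
        int j, total = 0;

        /* four classes get 12%, four get 13%: 4*12 + 4*13 = 100 */
        for (j = 0; j < MAX_TRAFFIC_CLASS; j++)
                total += 12 + (j & 1);

        printf("total bwg_percent = %d\n", total); /* prints 100 */
        return 0;
}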
+
 /**
  * ixgbe_sw_init - Initialize general software structures (struct ixgbe_adapter)
  * @adapter: board private structure to initialize
@@ -5575,10 +5625,8 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
        struct pci_dev *pdev = adapter->pdev;
        unsigned int rss, fdir;
        u32 fwsm;
-#ifdef CONFIG_IXGBE_DCB
-       int j;
-       struct tc_configuration *tc;
-#endif
+       u16 device_caps;
+       int i;
 
        /* PCI config space info */
 
@@ -5600,6 +5648,10 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
 #ifdef CONFIG_IXGBE_DCA
        adapter->flags |= IXGBE_FLAG_DCA_CAPABLE;
 #endif
+#ifdef CONFIG_IXGBE_DCB
+       adapter->flags |= IXGBE_FLAG_DCB_CAPABLE;
+       adapter->flags &= ~IXGBE_FLAG_DCB_ENABLED;
+#endif
 #ifdef IXGBE_FCOE
        adapter->flags |= IXGBE_FLAG_FCOE_CAPABLE;
        adapter->flags &= ~IXGBE_FLAG_FCOE_ENABLED;
@@ -5610,7 +5662,14 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
 #endif /* IXGBE_FCOE */
 
        /* initialize static ixgbe jump table entries */
-       adapter->jump_tables[0] = ixgbe_ipv4_fields;
+       adapter->jump_tables[0] = kzalloc(sizeof(*adapter->jump_tables[0]),
+                                         GFP_KERNEL);
+       if (!adapter->jump_tables[0])
+               return -ENOMEM;
+       adapter->jump_tables[0]->mat = ixgbe_ipv4_fields;
+
+       for (i = 1; i < IXGBE_MAX_LINK_HANDLE; i++)
+               adapter->jump_tables[i] = NULL;
 
        adapter->mac_table = kzalloc(sizeof(struct ixgbe_mac_addr) *
                                     hw->mac.num_rar_entries,
@@ -5649,6 +5708,16 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
                break;
        case ixgbe_mac_X550EM_x:
        case ixgbe_mac_x550em_a:
+#ifdef CONFIG_IXGBE_DCB
+               adapter->flags &= ~IXGBE_FLAG_DCB_CAPABLE;
+#endif
+#ifdef IXGBE_FCOE
+               adapter->flags &= ~IXGBE_FLAG_FCOE_CAPABLE;
+#ifdef CONFIG_IXGBE_DCB
+               adapter->fcoe.up = 0;
+#endif /* IXGBE_DCB */
+#endif /* IXGBE_FCOE */
+       /* Fall Through */
        case ixgbe_mac_X550:
 #ifdef CONFIG_IXGBE_DCA
                adapter->flags &= ~IXGBE_FLAG_DCA_CAPABLE;
@@ -5670,43 +5739,7 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
        spin_lock_init(&adapter->fdir_perfect_lock);
 
 #ifdef CONFIG_IXGBE_DCB
-       switch (hw->mac.type) {
-       case ixgbe_mac_X540:
-       case ixgbe_mac_X550:
-       case ixgbe_mac_X550EM_x:
-       case ixgbe_mac_x550em_a:
-               adapter->dcb_cfg.num_tcs.pg_tcs = X540_TRAFFIC_CLASS;
-               adapter->dcb_cfg.num_tcs.pfc_tcs = X540_TRAFFIC_CLASS;
-               break;
-       default:
-               adapter->dcb_cfg.num_tcs.pg_tcs = MAX_TRAFFIC_CLASS;
-               adapter->dcb_cfg.num_tcs.pfc_tcs = MAX_TRAFFIC_CLASS;
-               break;
-       }
-
-       /* Configure DCB traffic classes */
-       for (j = 0; j < MAX_TRAFFIC_CLASS; j++) {
-               tc = &adapter->dcb_cfg.tc_config[j];
-               tc->path[DCB_TX_CONFIG].bwg_id = 0;
-               tc->path[DCB_TX_CONFIG].bwg_percent = 12 + (j & 1);
-               tc->path[DCB_RX_CONFIG].bwg_id = 0;
-               tc->path[DCB_RX_CONFIG].bwg_percent = 12 + (j & 1);
-               tc->dcb_pfc = pfc_disabled;
-       }
-
-       /* Initialize default user to priority mapping, UPx->TC0 */
-       tc = &adapter->dcb_cfg.tc_config[0];
-       tc->path[DCB_TX_CONFIG].up_to_tc_bitmap = 0xFF;
-       tc->path[DCB_RX_CONFIG].up_to_tc_bitmap = 0xFF;
-
-       adapter->dcb_cfg.bw_percentage[DCB_TX_CONFIG][0] = 100;
-       adapter->dcb_cfg.bw_percentage[DCB_RX_CONFIG][0] = 100;
-       adapter->dcb_cfg.pfc_mode_enable = false;
-       adapter->dcb_set_bitmap = 0x00;
-       adapter->dcbx_cap = DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_CEE;
-       memcpy(&adapter->temp_dcb_cfg, &adapter->dcb_cfg,
-              sizeof(adapter->temp_dcb_cfg));
-
+       ixgbe_init_dcb(adapter);
 #endif
 
        /* default flow control settings */
@@ -5740,6 +5773,22 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
        adapter->tx_ring_count = IXGBE_DEFAULT_TXD;
        adapter->rx_ring_count = IXGBE_DEFAULT_RXD;
 
+       /* Cache bit indicating need for crosstalk fix */
+       switch (hw->mac.type) {
+       case ixgbe_mac_82599EB:
+       case ixgbe_mac_X550EM_x:
+       case ixgbe_mac_x550em_a:
+               hw->mac.ops.get_device_caps(hw, &device_caps);
+               if (device_caps & IXGBE_DEVICE_CAPS_NO_CROSSTALK_WR)
+                       adapter->need_crosstalk_fix = false;
+               else
+                       adapter->need_crosstalk_fix = true;
+               break;
+       default:
+               adapter->need_crosstalk_fix = false;
+               break;
+       }
+
        /* set default work limits */
        adapter->tx_work_limit = IXGBE_DEFAULT_TX_WORK;
 
@@ -6631,7 +6680,7 @@ static void ixgbe_check_hang_subtask(struct ixgbe_adapter *adapter)
                for (i = 0; i < adapter->num_q_vectors; i++) {
                        struct ixgbe_q_vector *qv = adapter->q_vector[i];
                        if (qv->rx.ring || qv->tx.ring)
-                               eics |= ((u64)1 << i);
+                               eics |= BIT_ULL(i);
                }
        }
 
@@ -6662,6 +6711,18 @@ static void ixgbe_watchdog_update_link(struct ixgbe_adapter *adapter)
                link_up = true;
        }
 
+       /* If the crosstalk fix is enabled, do the sanity check of making
+        * sure the SFP+ cage is full.
+        */
+       if (adapter->need_crosstalk_fix) {
+               u32 sfp_cage_full;
+
+               sfp_cage_full = IXGBE_READ_REG(hw, IXGBE_ESDP) &
+                               IXGBE_ESDP_SDP2;
+               if (ixgbe_is_sfp(hw) && link_up && !sfp_cage_full)
+                       link_up = false;
+       }
+
        if (adapter->ixgbe_ieee_pfc)
                pfc_en |= !!(adapter->ixgbe_ieee_pfc->pfc_en);
 
@@ -7008,6 +7069,16 @@ static void ixgbe_sfp_detection_subtask(struct ixgbe_adapter *adapter)
        struct ixgbe_hw *hw = &adapter->hw;
        s32 err;
 
+       /* If the crosstalk fix is enabled, verify the SFP+ cage is full */
+       if (adapter->need_crosstalk_fix) {
+               u32 sfp_cage_full;
+
+               sfp_cage_full = IXGBE_READ_REG(hw, IXGBE_ESDP) &
+                               IXGBE_ESDP_SDP2;
+               if (!sfp_cage_full)
+                       return;
+       }
+
        /* not searching for SFP so there is nothing to do here */
        if (!(adapter->flags2 & IXGBE_FLAG2_SEARCH_FOR_SFP) &&
            !(adapter->flags2 & IXGBE_FLAG2_SFP_NEEDS_RESET))
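Both crosstalk hunks gate on the same ESDP read: with the fix enabled, the driver refuses to trust a link-up indication or start SFP detection while the SFP+ cage reads empty on SDP2. A simplified standalone sketch of the gating, with the register read stubbed out, the ixgbe_is_sfp() test omitted, and IXGBE_ESDP_SDP2 assumed to be bit 2 as in ixgbe_type.h:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* assumed value: SDP2 reflects cage presence (bit 2 of ESDP) */
#define IXGBE_ESDP_SDP2 0x00000004

/* stand-in for IXGBE_READ_REG(hw, IXGBE_ESDP) */
static uint32_t read_esdp(bool cage_populated)
{
        return cage_populated ? IXGBE_ESDP_SDP2 : 0;
}

/* mirrors the watchdog check, minus the ixgbe_is_sfp() test */
static bool report_link(bool link_up, bool need_crosstalk_fix,
                        bool cage_populated)
{
        if (need_crosstalk_fix) {
                uint32_t sfp_cage_full = read_esdp(cage_populated) &
                                         IXGBE_ESDP_SDP2;

                /* never report link-up while the cage reads empty */
                if (link_up && !sfp_cage_full)
                        link_up = false;
        }
        return link_up;
}

int main(void)
{
        printf("%d\n", report_link(true, true, false)); /* 0: suppressed */
        printf("%d\n", report_link(true, true, true));  /* 1: cage full */
        return 0;
}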
@@ -7220,9 +7291,18 @@ static int ixgbe_tso(struct ixgbe_ring *tx_ring,
                     struct ixgbe_tx_buffer *first,
                     u8 *hdr_len)
 {
+       u32 vlan_macip_lens, type_tucmd, mss_l4len_idx;
        struct sk_buff *skb = first->skb;
-       u32 vlan_macip_lens, type_tucmd;
-       u32 mss_l4len_idx, l4len;
+       union {
+               struct iphdr *v4;
+               struct ipv6hdr *v6;
+               unsigned char *hdr;
+       } ip;
+       union {
+               struct tcphdr *tcp;
+               unsigned char *hdr;
+       } l4;
+       u32 paylen, l4_offset;
        int err;
 
        if (skb->ip_summed != CHECKSUM_PARTIAL)
@@ -7235,46 +7315,52 @@ static int ixgbe_tso(struct ixgbe_ring *tx_ring,
        if (err < 0)
                return err;
 
+       ip.hdr = skb_network_header(skb);
+       l4.hdr = skb_checksum_start(skb);
+
        /* ADV DTYP TUCMD MKRLOC/ISCSIHEDLEN */
        type_tucmd = IXGBE_ADVTXD_TUCMD_L4T_TCP;
 
-       if (first->protocol == htons(ETH_P_IP)) {
-               struct iphdr *iph = ip_hdr(skb);
-               iph->tot_len = 0;
-               iph->check = 0;
-               tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr,
-                                                        iph->daddr, 0,
-                                                        IPPROTO_TCP,
-                                                        0);
+       /* initialize outer IP header fields */
+       if (ip.v4->version == 4) {
+               /* IP header will have to cancel out any data that
+                * is not a part of the outer IP header
+                */
+               ip.v4->check = csum_fold(csum_add(lco_csum(skb),
+                                                 csum_unfold(l4.tcp->check)));
                type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4;
+
+               ip.v4->tot_len = 0;
                first->tx_flags |= IXGBE_TX_FLAGS_TSO |
                                   IXGBE_TX_FLAGS_CSUM |
                                   IXGBE_TX_FLAGS_IPV4;
-       } else if (skb_is_gso_v6(skb)) {
-               ipv6_hdr(skb)->payload_len = 0;
-               tcp_hdr(skb)->check =
-                   ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
-                                    &ipv6_hdr(skb)->daddr,
-                                    0, IPPROTO_TCP, 0);
+       } else {
+               ip.v6->payload_len = 0;
                first->tx_flags |= IXGBE_TX_FLAGS_TSO |
                                   IXGBE_TX_FLAGS_CSUM;
        }
 
-       /* compute header lengths */
-       l4len = tcp_hdrlen(skb);
-       *hdr_len = skb_transport_offset(skb) + l4len;
+       /* determine offset of inner transport header */
+       l4_offset = l4.hdr - skb->data;
+
+       /* compute length of segmentation header */
+       *hdr_len = (l4.tcp->doff * 4) + l4_offset;
+
+       /* remove payload length from inner checksum */
+       paylen = skb->len - l4_offset;
+       csum_replace_by_diff(&l4.tcp->check, htonl(paylen));
 
        /* update gso size and bytecount with header size */
        first->gso_segs = skb_shinfo(skb)->gso_segs;
        first->bytecount += (first->gso_segs - 1) * *hdr_len;
 
        /* mss_l4len_id: use 0 as index for TSO */
-       mss_l4len_idx = l4len << IXGBE_ADVTXD_L4LEN_SHIFT;
+       mss_l4len_idx = (*hdr_len - l4_offset) << IXGBE_ADVTXD_L4LEN_SHIFT;
        mss_l4len_idx |= skb_shinfo(skb)->gso_size << IXGBE_ADVTXD_MSS_SHIFT;
 
        /* vlan_macip_lens: HEADLEN, MACLEN, VLAN tag */
-       vlan_macip_lens = skb_network_header_len(skb);
-       vlan_macip_lens |= skb_network_offset(skb) << IXGBE_ADVTXD_MACLEN_SHIFT;
+       vlan_macip_lens = l4.hdr - ip.hdr;
+       vlan_macip_lens |= (ip.hdr - skb->data) << IXGBE_ADVTXD_MACLEN_SHIFT;
        vlan_macip_lens |= first->tx_flags & IXGBE_TX_FLAGS_VLAN_MASK;
 
        ixgbe_tx_ctxtdesc(tx_ring, vlan_macip_lens, 0, type_tucmd,
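Rather than recomputing the pseudo-header checksum from scratch, the reworked TSO path keeps the stack-seeded partial checksum and strips the payload length back out of it with csum_replace_by_diff(). A standalone sketch of the ones-complement arithmetic, in host byte order for brevity (the kernel passes htonl(paylen) because __wsum words are big-endian):

#include <stdint.h>
#include <stdio.h>

/* 32->16 bit ones-complement fold plus complement (kernel csum_fold) */
static uint16_t csum_fold(uint32_t sum)
{
        sum = (sum & 0xffff) + (sum >> 16);
        sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
}

/* 32-bit ones-complement add with end-around carry (kernel csum_add) */
static uint32_t csum_add(uint32_t a, uint32_t b)
{
        uint32_t res = a + b;
        return res + (res < a);
}

/* kernel csum_replace_by_diff: fold a diff into a checksum field */
static void csum_replace_by_diff(uint16_t *sum, uint32_t diff)
{
        *sum = csum_fold(csum_add(diff, ~(uint32_t)*sum));
}

int main(void)
{
        uint32_t pseudo = 0x1234;  /* pseudo-header sum minus the length word */
        uint16_t paylen = 0x0400;  /* hypothetical L4 length */

        /* CHECKSUM_PARTIAL seeds check with the *un-complemented* folded
         * sum, length included */
        uint16_t check = (uint16_t)~csum_fold(csum_add(pseudo, paylen));

        /* what ixgbe_tso() does: strip the payload length back out */
        csum_replace_by_diff(&check, paylen);

        /* expect the pseudo-header sum with a zero length field */
        printf("check = 0x%04x, want 0x%04x\n",
               check, (uint16_t)~csum_fold(pseudo));
        return 0;
}

Both values print 0x1234, i.e. the result matches the old ~csum_tcpudp_magic(saddr, daddr, 0, IPPROTO_TCP, 0) seeding without ever touching the IP addresses.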
@@ -8268,6 +8354,134 @@ static int ixgbe_configure_clsu32_del_hnode(struct ixgbe_adapter *adapter,
        return 0;
 }
 
+#ifdef CONFIG_NET_CLS_ACT
+static int handle_redirect_action(struct ixgbe_adapter *adapter, int ifindex,
+                                 u8 *queue, u64 *action)
+{
+       unsigned int num_vfs = adapter->num_vfs, vf;
+       struct net_device *upper;
+       struct list_head *iter;
+
+       /* redirect to an SR-IOV VF */
+       for (vf = 0; vf < num_vfs; ++vf) {
+               upper = pci_get_drvdata(adapter->vfinfo[vf].vfdev);
+               if (upper->ifindex == ifindex) {
+                       if (adapter->num_rx_pools > 1)
+                               *queue = vf * 2;
+                       else
+                               *queue = vf * adapter->num_rx_queues_per_pool;
+
+                       *action = vf + 1;
+                       *action <<= ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF;
+                       return 0;
+               }
+       }
+
+       /* redirect to an offloaded macvlan netdev */
+       netdev_for_each_all_upper_dev_rcu(adapter->netdev, upper, iter) {
+               if (netif_is_macvlan(upper)) {
+                       struct macvlan_dev *dfwd = netdev_priv(upper);
+                       struct ixgbe_fwd_adapter *vadapter = dfwd->fwd_priv;
+
+                       if (vadapter && vadapter->netdev->ifindex == ifindex) {
+                               *queue = adapter->rx_ring[vadapter->rx_base_queue]->reg_idx;
+                               *action = *queue;
+                               return 0;
+                       }
+               }
+       }
+
+       return -EINVAL;
+}
+
+static int parse_tc_actions(struct ixgbe_adapter *adapter,
+                           struct tcf_exts *exts, u64 *action, u8 *queue)
+{
+       const struct tc_action *a;
+       int err;
+
+       if (tc_no_actions(exts))
+               return -EINVAL;
+
+       tc_for_each_action(a, exts) {
+               /* Drop action */
+               if (is_tcf_gact_shot(a)) {
+                       *action = IXGBE_FDIR_DROP_QUEUE;
+                       *queue = IXGBE_FDIR_DROP_QUEUE;
+                       return 0;
+               }
+
+               /* Redirect to a VF or an offloaded macvlan */
+               if (is_tcf_mirred_redirect(a)) {
+                       int ifindex = tcf_mirred_ifindex(a);
+
+                       err = handle_redirect_action(adapter, ifindex, queue,
+                                                    action);
+                       if (err == 0)
+                               return err;
+               }
+       }
+
+       return -EINVAL;
+}
+#else
+static int parse_tc_actions(struct ixgbe_adapter *adapter,
+                           struct tcf_exts *exts, u64 *action, u8 *queue)
+{
+       return -EINVAL;
+}
+#endif /* CONFIG_NET_CLS_ACT */
+
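handle_redirect_action() reuses the ethtool ring-cookie convention: the VF index is stored one-based in bits 39:32 of the action so that zero can still mean "no VF". A quick sketch of the encoding; the pool size below is a made-up example value:

#include <stdint.h>
#include <stdio.h>

/* from uapi/linux/ethtool.h: the VF index lives in bits 39:32 of the cookie */
#define ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF 32

int main(void)
{
        unsigned int vf = 3;
        unsigned int num_rx_queues_per_pool = 4;   /* made-up pool size */

        /* the same encoding handle_redirect_action() emits */
        uint8_t queue = vf * num_rx_queues_per_pool;
        uint64_t action = ((uint64_t)vf + 1) << ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF;

        /* stored one-based so an action of 0 still means "no VF" */
        printf("queue %u, action 0x%016llx -> vf %llu\n", queue,
               (unsigned long long)action,
               (unsigned long long)(action >> ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF) - 1);
        return 0;
}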
+static int ixgbe_clsu32_build_input(struct ixgbe_fdir_filter *input,
+                                   union ixgbe_atr_input *mask,
+                                   struct tc_cls_u32_offload *cls,
+                                   struct ixgbe_mat_field *field_ptr,
+                                   struct ixgbe_nexthdr *nexthdr)
+{
+       int i, j, off;
+       __be32 val, m;
+       bool found_entry = false, found_jump_field = false;
+
+       for (i = 0; i < cls->knode.sel->nkeys; i++) {
+               off = cls->knode.sel->keys[i].off;
+               val = cls->knode.sel->keys[i].val;
+               m = cls->knode.sel->keys[i].mask;
+
+               for (j = 0; field_ptr[j].val; j++) {
+                       if (field_ptr[j].off == off) {
+                               field_ptr[j].val(input, mask, val, m);
+                               input->filter.formatted.flow_type |=
+                                       field_ptr[j].type;
+                               found_entry = true;
+                               break;
+                       }
+               }
+               if (nexthdr) {
+                       if (nexthdr->off == cls->knode.sel->keys[i].off &&
+                           nexthdr->val == cls->knode.sel->keys[i].val &&
+                           nexthdr->mask == cls->knode.sel->keys[i].mask)
+                               found_jump_field = true;
+                       else
+                               continue;
+               }
+       }
+
+       if (nexthdr && !found_jump_field)
+               return -EINVAL;
+
+       if (!found_entry)
+               return 0;
+
+       mask->formatted.flow_type = IXGBE_ATR_L4TYPE_IPV6_MASK |
+                                   IXGBE_ATR_L4TYPE_MASK;
+
+       if (input->filter.formatted.flow_type == IXGBE_ATR_FLOW_TYPE_IPV4)
+               mask->formatted.flow_type &= IXGBE_ATR_L4TYPE_IPV6_MASK;
+
+       return 0;
+}
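ixgbe_clsu32_build_input() is a straightforward dispatch: each u32 key is matched by offset against a table of field-programming callbacks terminated by a NULL entry. A toy standalone version of that walk, with the structures and offsets modeled loosely on ixgbe_model.h:

#include <stdint.h>
#include <stdio.h>

/* toy stand-ins for the u32 key and ixgbe_mat_field dispatch */
struct u32_key {
        int off;
        uint32_t val, mask;
};

struct mat_field {
        int off;
        int (*prgm)(uint32_t val, uint32_t mask);
};

static int prgm_sip(uint32_t val, uint32_t mask)
{
        printf("src ip %08x/%08x\n", val, mask);
        return 0;
}

static int prgm_dip(uint32_t val, uint32_t mask)
{
        printf("dst ip %08x/%08x\n", val, mask);
        return 0;
}

/* offsets 12/16 are the IPv4 saddr/daddr words, as in ixgbe_model.h */
static const struct mat_field ipv4_fields[] = {
        { .off = 12, .prgm = prgm_sip },
        { .off = 16, .prgm = prgm_dip },
        { 0 }   /* NULL callback terminates the walk */
};

int main(void)
{
        /* one key: match dst ip 192.168.0.1 exactly (header offset 16) */
        struct u32_key keys[] = {
                { .off = 16, .val = 0xc0a80001, .mask = 0xffffffff }
        };
        unsigned int i;
        int j;

        for (i = 0; i < sizeof(keys) / sizeof(keys[0]); i++)
                for (j = 0; ipv4_fields[j].prgm; j++)
                        if (ipv4_fields[j].off == keys[i].off) {
                                ipv4_fields[j].prgm(keys[i].val, keys[i].mask);
                                break;
                        }
        return 0;
}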
+
 static int ixgbe_configure_clsu32(struct ixgbe_adapter *adapter,
                                  __be16 protocol,
                                  struct tc_cls_u32_offload *cls)
@@ -8275,16 +8489,13 @@ static int ixgbe_configure_clsu32(struct ixgbe_adapter *adapter,
        u32 loc = cls->knode.handle & 0xfffff;
        struct ixgbe_hw *hw = &adapter->hw;
        struct ixgbe_mat_field *field_ptr;
-       struct ixgbe_fdir_filter *input;
-       union ixgbe_atr_input mask;
-#ifdef CONFIG_NET_CLS_ACT
-       const struct tc_action *a;
-#endif
-       int i, err = 0;
+       struct ixgbe_fdir_filter *input = NULL;
+       union ixgbe_atr_input *mask = NULL;
+       struct ixgbe_jump_table *jump = NULL;
+       int i, err = -EINVAL;
        u8 queue;
        u32 uhtid, link_uhtid;
 
-       memset(&mask, 0, sizeof(union ixgbe_atr_input));
        uhtid = TC_U32_USERHTID(cls->knode.handle);
        link_uhtid = TC_U32_USERHTID(cls->knode.link_handle);
 
@@ -8296,39 +8507,11 @@ static int ixgbe_configure_clsu32(struct ixgbe_adapter *adapter,
         * headers when needed.
         */
        if (protocol != htons(ETH_P_IP))
-               return -EINVAL;
-
-       if (link_uhtid) {
-               struct ixgbe_nexthdr *nexthdr = ixgbe_ipv4_jumps;
-
-               if (link_uhtid >= IXGBE_MAX_LINK_HANDLE)
-                       return -EINVAL;
-
-               if (!test_bit(link_uhtid - 1, &adapter->tables))
-                       return -EINVAL;
-
-               for (i = 0; nexthdr[i].jump; i++) {
-                       if (nexthdr[i].o != cls->knode.sel->offoff ||
-                           nexthdr[i].s != cls->knode.sel->offshift ||
-                           nexthdr[i].m != cls->knode.sel->offmask ||
-                           /* do not support multiple key jumps its just mad */
-                           cls->knode.sel->nkeys > 1)
-                               return -EINVAL;
-
-                       if (nexthdr[i].off == cls->knode.sel->keys[0].off &&
-                           nexthdr[i].val == cls->knode.sel->keys[0].val &&
-                           nexthdr[i].mask == cls->knode.sel->keys[0].mask) {
-                               adapter->jump_tables[link_uhtid] =
-                                                               nexthdr[i].jump;
-                               break;
-                       }
-               }
-               return 0;
-       }
+               return err;
 
        if (loc >= ((1024 << adapter->fdir_pballoc) - 2)) {
                e_err(drv, "Location out of range\n");
-               return -EINVAL;
+               return err;
        }
 
        /* cls u32 is a graph starting at root node 0x800. The driver tracks
@@ -8339,87 +8522,123 @@ static int ixgbe_configure_clsu32(struct ixgbe_adapter *adapter,
         * this function _should_ be generic; try not to hardcode values here.
         */
        if (uhtid == 0x800) {
-               field_ptr = adapter->jump_tables[0];
+               field_ptr = (adapter->jump_tables[0])->mat;
        } else {
                if (uhtid >= IXGBE_MAX_LINK_HANDLE)
-                       return -EINVAL;
-
-               field_ptr = adapter->jump_tables[uhtid];
+                       return err;
+               if (!adapter->jump_tables[uhtid])
+                       return err;
+               field_ptr = (adapter->jump_tables[uhtid])->mat;
        }
 
        if (!field_ptr)
-               return -EINVAL;
+               return err;
 
-       input = kzalloc(sizeof(*input), GFP_KERNEL);
-       if (!input)
-               return -ENOMEM;
+       /* At this point we know the field_ptr is valid and need to either
+        * build a cls_u32 link or attach a filter: adding a link to a
+        * handle that does not exist is invalid, and so is adding rules
+        * to handles that don't exist.
+        */
 
-       for (i = 0; i < cls->knode.sel->nkeys; i++) {
-               int off = cls->knode.sel->keys[i].off;
-               __be32 val = cls->knode.sel->keys[i].val;
-               __be32 m = cls->knode.sel->keys[i].mask;
-               bool found_entry = false;
-               int j;
+       if (link_uhtid) {
+               struct ixgbe_nexthdr *nexthdr = ixgbe_ipv4_jumps;
 
-               for (j = 0; field_ptr[j].val; j++) {
-                       if (field_ptr[j].off == off) {
-                               field_ptr[j].val(input, &mask, val, m);
-                               input->filter.formatted.flow_type |=
-                                       field_ptr[j].type;
-                               found_entry = true;
+               if (link_uhtid >= IXGBE_MAX_LINK_HANDLE)
+                       return err;
+
+               if (!test_bit(link_uhtid - 1, &adapter->tables))
+                       return err;
+
+               for (i = 0; nexthdr[i].jump; i++) {
+                       if (nexthdr[i].o != cls->knode.sel->offoff ||
+                           nexthdr[i].s != cls->knode.sel->offshift ||
+                           nexthdr[i].m != cls->knode.sel->offmask)
+                               return err;
+
+                       jump = kzalloc(sizeof(*jump), GFP_KERNEL);
+                       if (!jump)
+                               return -ENOMEM;
+                       input = kzalloc(sizeof(*input), GFP_KERNEL);
+                       if (!input) {
+                               err = -ENOMEM;
+                               goto free_jump;
+                       }
+                       mask = kzalloc(sizeof(*mask), GFP_KERNEL);
+                       if (!mask) {
+                               err = -ENOMEM;
+                               goto free_input;
+                       }
+                       jump->input = input;
+                       jump->mask = mask;
+                       err = ixgbe_clsu32_build_input(input, mask, cls,
+                                                      field_ptr, &nexthdr[i]);
+                       if (!err) {
+                               jump->mat = nexthdr[i].jump;
+                               adapter->jump_tables[link_uhtid] = jump;
                                break;
                        }
                }
-
-               if (!found_entry)
-                       goto err_out;
+               return 0;
        }
 
-       mask.formatted.flow_type = IXGBE_ATR_L4TYPE_IPV6_MASK |
-                                  IXGBE_ATR_L4TYPE_MASK;
-
-       if (input->filter.formatted.flow_type == IXGBE_ATR_FLOW_TYPE_IPV4)
-               mask.formatted.flow_type &= IXGBE_ATR_L4TYPE_IPV6_MASK;
+       input = kzalloc(sizeof(*input), GFP_KERNEL);
+       if (!input)
+               return -ENOMEM;
+       mask = kzalloc(sizeof(*mask), GFP_KERNEL);
+       if (!mask) {
+               err = -ENOMEM;
+               goto free_input;
+       }
 
-#ifdef CONFIG_NET_CLS_ACT
-       if (list_empty(&cls->knode.exts->actions))
+       if ((uhtid != 0x800) && (adapter->jump_tables[uhtid])) {
+               if ((adapter->jump_tables[uhtid])->input)
+                       memcpy(input, (adapter->jump_tables[uhtid])->input,
+                              sizeof(*input));
+               if ((adapter->jump_tables[uhtid])->mask)
+                       memcpy(mask, (adapter->jump_tables[uhtid])->mask,
+                              sizeof(*mask));
+       }
+       err = ixgbe_clsu32_build_input(input, mask, cls, field_ptr, NULL);
+       if (err)
                goto err_out;
 
-       list_for_each_entry(a, &cls->knode.exts->actions, list) {
-               if (!is_tcf_gact_shot(a))
-                       goto err_out;
-       }
-#endif
+       err = parse_tc_actions(adapter, cls->knode.exts, &input->action,
+                              &queue);
+       if (err < 0)
+               goto err_out;
 
-       input->action = IXGBE_FDIR_DROP_QUEUE;
-       queue = IXGBE_FDIR_DROP_QUEUE;
        input->sw_idx = loc;
 
        spin_lock(&adapter->fdir_perfect_lock);
 
        if (hlist_empty(&adapter->fdir_filter_list)) {
-               memcpy(&adapter->fdir_mask, &mask, sizeof(mask));
-               err = ixgbe_fdir_set_input_mask_82599(hw, &mask);
+               memcpy(&adapter->fdir_mask, mask, sizeof(*mask));
+               err = ixgbe_fdir_set_input_mask_82599(hw, mask);
                if (err)
                        goto err_out_w_lock;
-       } else if (memcmp(&adapter->fdir_mask, &mask, sizeof(mask))) {
+       } else if (memcmp(&adapter->fdir_mask, mask, sizeof(*mask))) {
                err = -EINVAL;
                goto err_out_w_lock;
        }
 
-       ixgbe_atr_compute_perfect_hash_82599(&input->filter, &mask);
+       ixgbe_atr_compute_perfect_hash_82599(&input->filter, mask);
        err = ixgbe_fdir_write_perfect_filter_82599(hw, &input->filter,
                                                    input->sw_idx, queue);
        if (!err)
                ixgbe_update_ethtool_fdir_entry(adapter, input, input->sw_idx);
        spin_unlock(&adapter->fdir_perfect_lock);
 
+       kfree(mask);
        return err;
 err_out_w_lock:
        spin_unlock(&adapter->fdir_perfect_lock);
 err_out:
+       kfree(mask);
+free_input:
        kfree(input);
-       return -EINVAL;
+free_jump:
+       kfree(jump);
+       return err;
 }
 
 static int __ixgbe_setup_tc(struct net_device *dev, u32 handle, __be16 proto,
@@ -8862,17 +9081,36 @@ static void ixgbe_fwd_del(struct net_device *pdev, void *priv)
        kfree(fwd_adapter);
 }
 
-#define IXGBE_MAX_TUNNEL_HDR_LEN 80
+#define IXGBE_MAX_MAC_HDR_LEN          127
+#define IXGBE_MAX_NETWORK_HDR_LEN      511
+
 static netdev_features_t
 ixgbe_features_check(struct sk_buff *skb, struct net_device *dev,
                     netdev_features_t features)
 {
-       if (!skb->encapsulation)
-               return features;
-
-       if (unlikely(skb_inner_mac_header(skb) - skb_transport_header(skb) >
-                    IXGBE_MAX_TUNNEL_HDR_LEN))
-               return features & ~NETIF_F_CSUM_MASK;
+       unsigned int network_hdr_len, mac_hdr_len;
+
+       /* Make certain the headers can be described by a context descriptor */
+       mac_hdr_len = skb_network_header(skb) - skb->data;
+       if (unlikely(mac_hdr_len > IXGBE_MAX_MAC_HDR_LEN))
+               return features & ~(NETIF_F_HW_CSUM |
+                                   NETIF_F_SCTP_CRC |
+                                   NETIF_F_HW_VLAN_CTAG_TX |
+                                   NETIF_F_TSO |
+                                   NETIF_F_TSO6);
+
+       network_hdr_len = skb_checksum_start(skb) - skb_network_header(skb);
+       if (unlikely(network_hdr_len > IXGBE_MAX_NETWORK_HDR_LEN))
+               return features & ~(NETIF_F_HW_CSUM |
+                                   NETIF_F_SCTP_CRC |
+                                   NETIF_F_TSO |
+                                   NETIF_F_TSO6);
+
+       /* We can only support IPV4 TSO in tunnels if we can mangle the
+        * inner IP ID field, so strip TSO if MANGLEID is not supported.
+        */
+       if (skb->encapsulation && !(features & NETIF_F_TSO_MANGLEID))
+               features &= ~NETIF_F_TSO;
 
        return features;
 }
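The new limits follow from the advanced Tx context descriptor layout: MACLEN is a 7-bit field (max 127) and IPLEN a 9-bit field (max 511), so headers longer than that simply cannot be described to the hardware and the offloads are dropped. A trivial sketch of the two length computations for a hypothetical VLAN-tagged IPv6 frame:

#include <stdio.h>

#define IXGBE_MAX_MAC_HDR_LEN          127  /* 7-bit MACLEN descriptor field */
#define IXGBE_MAX_NETWORK_HDR_LEN      511  /* 9-bit IPLEN descriptor field */

int main(void)
{
        /* hypothetical VLAN-tagged IPv6 frame */
        unsigned int eth = 14, vlan = 4, ipv6 = 40;

        /* skb_network_header(skb) - skb->data */
        unsigned int mac_hdr_len = eth + vlan;
        /* skb_checksum_start(skb) - skb_network_header(skb) */
        unsigned int network_hdr_len = ipv6;

        printf("mac hdr %u (ok=%d), network hdr %u (ok=%d)\n",
               mac_hdr_len, mac_hdr_len <= IXGBE_MAX_MAC_HDR_LEN,
               network_hdr_len, network_hdr_len <= IXGBE_MAX_NETWORK_HDR_LEN);
        return 0;
}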
@@ -8973,7 +9211,7 @@ static inline int ixgbe_enumerate_functions(struct ixgbe_adapter *adapter)
 
 /**
  * ixgbe_wol_supported - Check whether device supports WoL
- * @hw: hw specific details
+ * @adapter: the adapter private structure
  * @device_id: the device ID
  * @subdev_id: the subsystem device ID
  *
@@ -8981,19 +9219,33 @@ static inline int ixgbe_enumerate_functions(struct ixgbe_adapter *adapter)
  * which devices have WoL support
  *
  **/
-int ixgbe_wol_supported(struct ixgbe_adapter *adapter, u16 device_id,
-                       u16 subdevice_id)
+bool ixgbe_wol_supported(struct ixgbe_adapter *adapter, u16 device_id,
+                        u16 subdevice_id)
 {
        struct ixgbe_hw *hw = &adapter->hw;
        u16 wol_cap = adapter->eeprom_cap & IXGBE_DEVICE_CAPS_WOL_MASK;
-       int is_wol_supported = 0;
 
+       /* WOL not supported on 82598 */
+       if (hw->mac.type == ixgbe_mac_82598EB)
+               return false;
+
+       /* check eeprom to see if WOL is enabled for X540 and newer */
+       if (hw->mac.type >= ixgbe_mac_X540) {
+               if ((wol_cap == IXGBE_DEVICE_CAPS_WOL_PORT0_1) ||
+                   ((wol_cap == IXGBE_DEVICE_CAPS_WOL_PORT0) &&
+                    (hw->bus.func == 0)))
+                       return true;
+       }
+
+       /* WOL is determined based on device IDs for 82599 MACs */
        switch (device_id) {
        case IXGBE_DEV_ID_82599_SFP:
               /* Only these subdevices could support WOL */
                switch (subdevice_id) {
-               case IXGBE_SUBDEV_ID_82599_SFP_WOL0:
                case IXGBE_SUBDEV_ID_82599_560FLR:
+               case IXGBE_SUBDEV_ID_82599_LOM_SNAP6:
+               case IXGBE_SUBDEV_ID_82599_SFP_WOL0:
+               case IXGBE_SUBDEV_ID_82599_SFP_2OCP:
                        /* only support first port */
                        if (hw->bus.func != 0)
                                break;
@@ -9001,44 +9253,31 @@ int ixgbe_wol_supported(struct ixgbe_adapter *adapter, u16 device_id,
                case IXGBE_SUBDEV_ID_82599_SFP:
                case IXGBE_SUBDEV_ID_82599_RNDC:
                case IXGBE_SUBDEV_ID_82599_ECNA_DP:
-               case IXGBE_SUBDEV_ID_82599_LOM_SFP:
-                       is_wol_supported = 1;
-                       break;
+               case IXGBE_SUBDEV_ID_82599_SFP_1OCP:
+               case IXGBE_SUBDEV_ID_82599_SFP_LOM_OEM1:
+               case IXGBE_SUBDEV_ID_82599_SFP_LOM_OEM2:
+                       return true;
                }
                break;
        case IXGBE_DEV_ID_82599EN_SFP:
-               /* Only this subdevice supports WOL */
+               /* Only these subdevices support WOL */
                switch (subdevice_id) {
                case IXGBE_SUBDEV_ID_82599EN_SFP_OCP1:
-                       is_wol_supported = 1;
-                       break;
+                       return true;
                }
                break;
        case IXGBE_DEV_ID_82599_COMBO_BACKPLANE:
                /* All except this subdevice support WOL */
                if (subdevice_id != IXGBE_SUBDEV_ID_82599_KX4_KR_MEZZ)
-                       is_wol_supported = 1;
+                       return true;
                break;
        case IXGBE_DEV_ID_82599_KX4:
-               is_wol_supported = 1;
-               break;
-       case IXGBE_DEV_ID_X540T:
-       case IXGBE_DEV_ID_X540T1:
-       case IXGBE_DEV_ID_X550T:
-       case IXGBE_DEV_ID_X550T1:
-       case IXGBE_DEV_ID_X550EM_X_KX4:
-       case IXGBE_DEV_ID_X550EM_X_KR:
-       case IXGBE_DEV_ID_X550EM_X_10G_T:
-               /* check eeprom to see if enabled wol */
-               if ((wol_cap == IXGBE_DEVICE_CAPS_WOL_PORT0_1) ||
-                   ((wol_cap == IXGBE_DEVICE_CAPS_WOL_PORT0) &&
-                    (hw->bus.func == 0))) {
-                       is_wol_supported = 1;
-               }
+               return true;
+       default:
                break;
        }
 
-       return is_wol_supported;
+       return false;
 }
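The eeprom branch replaces the long per-device switch cases for X540 and newer parts: wol_cap either advertises WoL on both ports, or on port 0 only, in which case only PCI function 0 qualifies. A hedged sketch of just that branch; the capability encodings here are illustrative stand-ins, not the real ixgbe_type.h values:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* illustrative stand-ins for the ixgbe_type.h capability encodings */
#define IXGBE_DEVICE_CAPS_WOL_PORT0_1 0x4 /* WoL on ports 0 and 1 */
#define IXGBE_DEVICE_CAPS_WOL_PORT0   0x8 /* WoL on port 0 only */

/* mirrors the X540-and-newer branch of ixgbe_wol_supported() */
static bool wol_from_eeprom(uint16_t wol_cap, unsigned int func)
{
        return wol_cap == IXGBE_DEVICE_CAPS_WOL_PORT0_1 ||
               (wol_cap == IXGBE_DEVICE_CAPS_WOL_PORT0 && func == 0);
}

int main(void)
{
        printf("%d\n", wol_from_eeprom(IXGBE_DEVICE_CAPS_WOL_PORT0, 0)); /* 1 */
        printf("%d\n", wol_from_eeprom(IXGBE_DEVICE_CAPS_WOL_PORT0, 1)); /* 0 */
        return 0;
}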
 
 /**
@@ -9156,7 +9395,7 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
                goto err_ioremap;
        }
        /* If EEPROM is valid (bit 8 = 1), use default otherwise use bit bang */
-       if (!(eec & (1 << 8)))
+       if (!(eec & BIT(8)))
                hw->eeprom.ops.read = &ixgbe_read_eeprom_bit_bang_generic;
 
        /* PHY */
@@ -9239,37 +9478,51 @@ skip_sriov:
                           NETIF_F_TSO6 |
                           NETIF_F_RXHASH |
                           NETIF_F_RXCSUM |
-                          NETIF_F_HW_CSUM |
-                          NETIF_F_HW_VLAN_CTAG_TX |
-                          NETIF_F_HW_VLAN_CTAG_RX |
-                          NETIF_F_HW_VLAN_CTAG_FILTER;
+                          NETIF_F_HW_CSUM;
+
+#define IXGBE_GSO_PARTIAL_FEATURES (NETIF_F_GSO_GRE | \
+                                   NETIF_F_GSO_GRE_CSUM | \
+                                   NETIF_F_GSO_IPIP | \
+                                   NETIF_F_GSO_SIT | \
+                                   NETIF_F_GSO_UDP_TUNNEL | \
+                                   NETIF_F_GSO_UDP_TUNNEL_CSUM)
+
+       netdev->gso_partial_features = IXGBE_GSO_PARTIAL_FEATURES;
+       netdev->features |= NETIF_F_GSO_PARTIAL |
+                           IXGBE_GSO_PARTIAL_FEATURES;
 
        if (hw->mac.type >= ixgbe_mac_82599EB)
                netdev->features |= NETIF_F_SCTP_CRC;
 
        /* copy netdev features into list of user selectable features */
-       netdev->hw_features |= netdev->features;
-       netdev->hw_features |= NETIF_F_RXALL |
+       netdev->hw_features |= netdev->features |
+                              NETIF_F_HW_VLAN_CTAG_RX |
+                              NETIF_F_HW_VLAN_CTAG_TX |
+                              NETIF_F_RXALL |
                               NETIF_F_HW_L2FW_DOFFLOAD;
 
        if (hw->mac.type >= ixgbe_mac_82599EB)
                netdev->hw_features |= NETIF_F_NTUPLE |
                                       NETIF_F_HW_TC;
 
-       netdev->vlan_features |= NETIF_F_SG |
-                                NETIF_F_TSO |
-                                NETIF_F_TSO6 |
-                                NETIF_F_HW_CSUM |
-                                NETIF_F_SCTP_CRC;
+       if (pci_using_dac)
+               netdev->features |= NETIF_F_HIGHDMA;
+
+       /* set this bit last since it cannot be part of vlan_features */
+       netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER |
+                           NETIF_F_HW_VLAN_CTAG_RX |
+                           NETIF_F_HW_VLAN_CTAG_TX;
 
+       netdev->vlan_features |= netdev->features | NETIF_F_TSO_MANGLEID;
+       netdev->hw_enc_features |= netdev->vlan_features;
        netdev->mpls_features |= NETIF_F_HW_CSUM;
-       netdev->hw_enc_features |= NETIF_F_HW_CSUM;
 
        netdev->priv_flags |= IFF_UNICAST_FLT;
        netdev->priv_flags |= IFF_SUPP_NOFCS;
 
 #ifdef CONFIG_IXGBE_DCB
-       netdev->dcbnl_ops = &dcbnl_ops;
+       if (adapter->flags & IXGBE_FLAG_DCB_CAPABLE)
+               netdev->dcbnl_ops = &dcbnl_ops;
 #endif
 
 #ifdef IXGBE_FCOE
@@ -9294,10 +9547,6 @@ skip_sriov:
                                         NETIF_F_FCOE_MTU;
        }
 #endif /* IXGBE_FCOE */
-       if (pci_using_dac) {
-               netdev->features |= NETIF_F_HIGHDMA;
-               netdev->vlan_features |= NETIF_F_HIGHDMA;
-       }
 
        if (adapter->flags2 & IXGBE_FLAG2_RSC_CAPABLE)
                netdev->hw_features |= NETIF_F_LRO;
@@ -9463,6 +9712,7 @@ err_sw_init:
        ixgbe_disable_sriov(adapter);
        adapter->flags2 &= ~IXGBE_FLAG2_SEARCH_FOR_SFP;
        iounmap(adapter->io_addr);
+       kfree(adapter->jump_tables[0]);
        kfree(adapter->mac_table);
 err_ioremap:
        disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state);
@@ -9491,6 +9741,7 @@ static void ixgbe_remove(struct pci_dev *pdev)
        struct ixgbe_adapter *adapter = pci_get_drvdata(pdev);
        struct net_device *netdev;
        bool disable_dev;
+       int i;
 
        /* if !adapter then we already cleaned up in probe */
        if (!adapter)
@@ -9540,6 +9791,14 @@ static void ixgbe_remove(struct pci_dev *pdev)
 
        e_dev_info("complete\n");
 
+       for (i = 0; i < IXGBE_MAX_LINK_HANDLE; i++) {
+               if (adapter->jump_tables[i]) {
+                       kfree(adapter->jump_tables[i]->input);
+                       kfree(adapter->jump_tables[i]->mask);
+               }
+               kfree(adapter->jump_tables[i]);
+       }
+
        kfree(adapter->mac_table);
        disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state);
        free_netdev(netdev);
index b2125e3..a0cb843 100644
@@ -314,8 +314,8 @@ static s32 ixgbe_check_for_rst_pf(struct ixgbe_hw *hw, u16 vf_number)
                break;
        }
 
-       if (vflre & (1 << vf_shift)) {
-               IXGBE_WRITE_REG(hw, IXGBE_VFLREC(reg_offset), (1 << vf_shift));
+       if (vflre & BIT(vf_shift)) {
+               IXGBE_WRITE_REG(hw, IXGBE_VFLREC(reg_offset), BIT(vf_shift));
                hw->mbx.stats.rsts++;
                return 0;
        }
index 60adde5..a8bed3d 100644
@@ -38,6 +38,12 @@ struct ixgbe_mat_field {
        unsigned int type;
 };
 
+struct ixgbe_jump_table {
+       struct ixgbe_mat_field *mat;
+       struct ixgbe_fdir_filter *input;
+       union ixgbe_atr_input *mask;
+};
+
 static inline int ixgbe_mat_prgm_sip(struct ixgbe_fdir_filter *input,
                                     union ixgbe_atr_input *mask,
                                     u32 val, u32 m)
index cdf4c38..cc735ec 100644
 #define IXGBE_PE                               0xE0    /* Port expander addr */
 #define IXGBE_PE_OUTPUT                                1       /* Output reg offset */
 #define IXGBE_PE_CONFIG                                3       /* Config reg offset */
-#define IXGBE_PE_BIT1                          (1 << 1)
+#define IXGBE_PE_BIT1                          BIT(1)
 
 /* Flow control defines */
 #define IXGBE_TAF_SYM_PAUSE                  0x400
index bdc8fdc..e5431bf 100644
@@ -396,7 +396,7 @@ static int ixgbe_ptp_adjfreq_82599(struct ptp_clock_info *ptp, s32 ppb)
                if (incval > 0x00FFFFFFULL)
                        e_dev_warn("PTP ppb adjusted SYSTIME rate overflowed!\n");
                IXGBE_WRITE_REG(hw, IXGBE_TIMINCA,
-                               (1 << IXGBE_INCPER_SHIFT_82599) |
+                               BIT(IXGBE_INCPER_SHIFT_82599) |
                                ((u32)incval & 0x00FFFFFFUL));
                break;
        default:
@@ -1114,7 +1114,7 @@ void ixgbe_ptp_start_cyclecounter(struct ixgbe_adapter *adapter)
                incval >>= IXGBE_INCVAL_SHIFT_82599;
                cc.shift -= IXGBE_INCVAL_SHIFT_82599;
                IXGBE_WRITE_REG(hw, IXGBE_TIMINCA,
-                               (1 << IXGBE_INCPER_SHIFT_82599) | incval);
+                               BIT(IXGBE_INCPER_SHIFT_82599) | incval);
                break;
        default:
                /* other devices aren't supported */
index adcf000..c5caacd 100644
@@ -406,7 +406,7 @@ static int ixgbe_set_vf_multicasts(struct ixgbe_adapter *adapter,
                vector_reg = (vfinfo->vf_mc_hashes[i] >> 5) & 0x7F;
                vector_bit = vfinfo->vf_mc_hashes[i] & 0x1F;
                mta_reg = IXGBE_READ_REG(hw, IXGBE_MTA(vector_reg));
-               mta_reg |= (1 << vector_bit);
+               mta_reg |= BIT(vector_bit);
                IXGBE_WRITE_REG(hw, IXGBE_MTA(vector_reg), mta_reg);
        }
        vmolr |= IXGBE_VMOLR_ROMPE;
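The 4096-entry multicast table address decomposes the 12-bit hash the same way in both multicast hunks: the upper seven bits select one of 128 MTA registers, the lower five select a bit within it. A standalone sketch:

#include <stdint.h>
#include <stdio.h>

#define BIT(nr) (1u << (nr))

int main(void)
{
        uint32_t mta[128] = { 0 };  /* 128 regs x 32 bits = 4096 hash buckets */
        uint16_t hash = 0x0abc;     /* hypothetical 12-bit multicast hash */

        /* upper 7 bits pick the MTA register, lower 5 bits pick the bit */
        uint32_t vector_reg = (hash >> 5) & 0x7f;
        uint32_t vector_bit = hash & 0x1f;

        mta[vector_reg] |= BIT(vector_bit);

        printf("hash 0x%03x -> MTA[%u] bit %u\n", hash, vector_reg, vector_bit);
        return 0;
}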
@@ -433,7 +433,7 @@ void ixgbe_restore_vf_multicasts(struct ixgbe_adapter *adapter)
                        vector_reg = (vfinfo->vf_mc_hashes[j] >> 5) & 0x7F;
                        vector_bit = vfinfo->vf_mc_hashes[j] & 0x1F;
                        mta_reg = IXGBE_READ_REG(hw, IXGBE_MTA(vector_reg));
-                       mta_reg |= (1 << vector_bit);
+                       mta_reg |= BIT(vector_bit);
                        IXGBE_WRITE_REG(hw, IXGBE_MTA(vector_reg), mta_reg);
                }
 
@@ -536,9 +536,9 @@ static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
                /* enable or disable receive depending on error */
                vfre = IXGBE_READ_REG(hw, IXGBE_VFRE(reg_offset));
                if (err)
-                       vfre &= ~(1 << vf_shift);
+                       vfre &= ~BIT(vf_shift);
                else
-                       vfre |= 1 << vf_shift;
+                       vfre |= BIT(vf_shift);
                IXGBE_WRITE_REG(hw, IXGBE_VFRE(reg_offset), vfre);
 
                if (err) {
@@ -592,8 +592,8 @@ static void ixgbe_clear_vf_vlans(struct ixgbe_adapter *adapter, u32 vf)
        u32 vlvfb_mask, pool_mask, i;
 
        /* create mask for VF and other pools */
-       pool_mask = ~(1 << (VMDQ_P(0) % 32));
-       vlvfb_mask = 1 << (vf % 32);
+       pool_mask = ~BIT(VMDQ_P(0) % 32);
+       vlvfb_mask = BIT(vf % 32);
 
        /* post increment loop, covers VLVF_ENTRIES - 1 to 0 */
        for (i = IXGBE_VLVF_ENTRIES; i--;) {
@@ -629,7 +629,7 @@ static void ixgbe_clear_vf_vlans(struct ixgbe_adapter *adapter, u32 vf)
                        goto update_vlvfb;
 
                vid = vlvf & VLAN_VID_MASK;
-               mask = 1 << (vid % 32);
+               mask = BIT(vid % 32);
 
                /* clear bit from VFTA */
                vfta = IXGBE_READ_REG(hw, IXGBE_VFTA(vid / 32));
@@ -813,7 +813,7 @@ static int ixgbe_vf_reset_msg(struct ixgbe_adapter *adapter, u32 vf)
 
        /* enable transmit for vf */
        reg = IXGBE_READ_REG(hw, IXGBE_VFTE(reg_offset));
-       reg |= 1 << vf_shift;
+       reg |= BIT(vf_shift);
        IXGBE_WRITE_REG(hw, IXGBE_VFTE(reg_offset), reg);
 
        /* force drop enable for all VF Rx queues */
@@ -821,7 +821,7 @@ static int ixgbe_vf_reset_msg(struct ixgbe_adapter *adapter, u32 vf)
 
        /* enable receive for vf */
        reg = IXGBE_READ_REG(hw, IXGBE_VFRE(reg_offset));
-       reg |= 1 << vf_shift;
+       reg |= BIT(vf_shift);
        /*
         * The 82599 cannot support a mix of jumbo and non-jumbo PF/VFs.
         * For more info take a look at ixgbe_set_vf_lpe
@@ -837,7 +837,7 @@ static int ixgbe_vf_reset_msg(struct ixgbe_adapter *adapter, u32 vf)
 
 #endif /* CONFIG_FCOE */
                if (pf_max_frame > ETH_FRAME_LEN)
-                       reg &= ~(1 << vf_shift);
+                       reg &= ~BIT(vf_shift);
        }
        IXGBE_WRITE_REG(hw, IXGBE_VFRE(reg_offset), reg);
 
@@ -846,7 +846,7 @@ static int ixgbe_vf_reset_msg(struct ixgbe_adapter *adapter, u32 vf)
 
        /* Enable counting of spoofed packets in the SSVPC register */
        reg = IXGBE_READ_REG(hw, IXGBE_VMECM(reg_offset));
-       reg |= (1 << vf_shift);
+       reg |= BIT(vf_shift);
        IXGBE_WRITE_REG(hw, IXGBE_VMECM(reg_offset), reg);
 
        /*
@@ -908,8 +908,6 @@ static int ixgbe_set_vf_vlan_msg(struct ixgbe_adapter *adapter,
        u32 add = (msgbuf[0] & IXGBE_VT_MSGINFO_MASK) >> IXGBE_VT_MSGINFO_SHIFT;
        u32 vid = (msgbuf[1] & IXGBE_VLVF_VLANID_MASK);
        u8 tcs = netdev_get_num_tc(adapter->netdev);
-       struct ixgbe_hw *hw = &adapter->hw;
-       int err;
 
        if (adapter->vfinfo[vf].pf_vlan || tcs) {
                e_warn(drv,
@@ -923,19 +921,7 @@ static int ixgbe_set_vf_vlan_msg(struct ixgbe_adapter *adapter,
        if (!vid && !add)
                return 0;
 
-       err = ixgbe_set_vf_vlan(adapter, add, vid, vf);
-       if (err)
-               return err;
-
-       if (adapter->vfinfo[vf].spoofchk_enabled)
-               hw->mac.ops.set_vlan_anti_spoofing(hw, true, vf);
-
-       if (add)
-               adapter->vfinfo[vf].vlan_count++;
-       else if (adapter->vfinfo[vf].vlan_count)
-               adapter->vfinfo[vf].vlan_count--;
-
-       return 0;
+       return ixgbe_set_vf_vlan(adapter, add, vid, vf);
 }
 
 static int ixgbe_set_vf_macvlan_msg(struct ixgbe_adapter *adapter,
@@ -964,8 +950,11 @@ static int ixgbe_set_vf_macvlan_msg(struct ixgbe_adapter *adapter,
                 * If the VF is allowed to set MAC filters then turn off
                 * anti-spoofing to avoid false positives.
                 */
-               if (adapter->vfinfo[vf].spoofchk_enabled)
-                       ixgbe_ndo_set_vf_spoofchk(adapter->netdev, vf, false);
+               if (adapter->vfinfo[vf].spoofchk_enabled) {
+                       struct ixgbe_hw *hw = &adapter->hw;
+
+                       hw->mac.ops.set_mac_anti_spoofing(hw, false, vf);
+               }
        }
 
        err = ixgbe_set_vf_macvlan(adapter, vf, index, new_mac);
@@ -1321,9 +1310,6 @@ static int ixgbe_enable_port_vlan(struct ixgbe_adapter *adapter, int vf,
 
        ixgbe_set_vmvir(adapter, vlan, qos, vf);
        ixgbe_set_vmolr(hw, vf, false);
-       if (adapter->vfinfo[vf].spoofchk_enabled)
-               hw->mac.ops.set_vlan_anti_spoofing(hw, true, vf);
-       adapter->vfinfo[vf].vlan_count++;
 
        /* enable hide vlan on X550 */
        if (hw->mac.type >= ixgbe_mac_X550)
@@ -1356,9 +1342,6 @@ static int ixgbe_disable_port_vlan(struct ixgbe_adapter *adapter, int vf)
        ixgbe_set_vf_vlan(adapter, true, 0, vf);
        ixgbe_clear_vmvir(adapter, vf);
        ixgbe_set_vmolr(hw, vf, true);
-       hw->mac.ops.set_vlan_anti_spoofing(hw, false, vf);
-       if (adapter->vfinfo[vf].vlan_count)
-               adapter->vfinfo[vf].vlan_count--;
 
        /* disable hide VLAN on X550 */
        if (hw->mac.type >= ixgbe_mac_X550)
@@ -1525,27 +1508,34 @@ int ixgbe_ndo_set_vf_bw(struct net_device *netdev, int vf, int min_tx_rate,
 int ixgbe_ndo_set_vf_spoofchk(struct net_device *netdev, int vf, bool setting)
 {
        struct ixgbe_adapter *adapter = netdev_priv(netdev);
-       int vf_target_reg = vf >> 3;
-       int vf_target_shift = vf % 8;
        struct ixgbe_hw *hw = &adapter->hw;
-       u32 regval;
 
        if (vf >= adapter->num_vfs)
                return -EINVAL;
 
        adapter->vfinfo[vf].spoofchk_enabled = setting;
 
-       regval = IXGBE_READ_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg));
-       regval &= ~(1 << vf_target_shift);
-       regval |= (setting << vf_target_shift);
-       IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg), regval);
-
-       if (adapter->vfinfo[vf].vlan_count) {
-               vf_target_shift += IXGBE_SPOOF_VLANAS_SHIFT;
-               regval = IXGBE_READ_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg));
-               regval &= ~(1 << vf_target_shift);
-               regval |= (setting << vf_target_shift);
-               IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg), regval);
+       /* configure MAC spoofing */
+       hw->mac.ops.set_mac_anti_spoofing(hw, setting, vf);
+
+       /* configure VLAN spoofing */
+       hw->mac.ops.set_vlan_anti_spoofing(hw, setting, vf);
+
+       /* Ensure LLDP and FC are set for Ethertype Antispoofing if we will
+        * be calling set_ethertype_anti_spoofing for this VF below
+        */
+       if (hw->mac.ops.set_ethertype_anti_spoofing) {
+               IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_LLDP),
+                               (IXGBE_ETQF_FILTER_EN    |
+                                IXGBE_ETQF_TX_ANTISPOOF |
+                                IXGBE_ETH_P_LLDP));
+
+               IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_FC),
+                               (IXGBE_ETQF_FILTER_EN |
+                                IXGBE_ETQF_TX_ANTISPOOF |
+                                ETH_P_PAUSE));
+
+               hw->mac.ops.set_ethertype_anti_spoofing(hw, setting, vf);
        }
 
        return 0;
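The two ETQF writes compose an ethertype filter value from bits defined later in this patch: filter enable (bit 31), Tx antispoof (bit 29), and the ethertype in the low 16 bits. A quick standalone check of the resulting register values, with IXGBE_ETH_P_LLDP assumed to be the standard LLDP ethertype 0x88CC:

#include <stdint.h>
#include <stdio.h>

/* bit values as defined in the ixgbe_type.h hunk below */
#define IXGBE_ETQF_FILTER_EN    0x80000000 /* bit 31 */
#define IXGBE_ETQF_TX_ANTISPOOF 0x20000000 /* bit 29 */
#define IXGBE_ETH_P_LLDP        0x88cc     /* LLDP ethertype (assumed) */
#define ETH_P_PAUSE             0x8808     /* 802.3 pause frames */

int main(void)
{
        /* the two ethertype filter values the ndo now programs */
        uint32_t lldp = IXGBE_ETQF_FILTER_EN | IXGBE_ETQF_TX_ANTISPOOF |
                        IXGBE_ETH_P_LLDP;
        uint32_t fc   = IXGBE_ETQF_FILTER_EN | IXGBE_ETQF_TX_ANTISPOOF |
                        ETH_P_PAUSE;

        /* low 16 bits carry the ethertype, high bits enable + antispoof */
        printf("ETQF[LLDP] = 0x%08x\n", lldp); /* 0xa00088cc */
        printf("ETQF[FC]   = 0x%08x\n", fc);   /* 0xa0008808 */
        return 0;
}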
index ba3b837..da3d835 100644
 #define IXGBE_SUBDEV_ID_82599_RNDC       0x1F72
 #define IXGBE_SUBDEV_ID_82599_560FLR     0x17D0
 #define IXGBE_SUBDEV_ID_82599_SP_560FLR  0x211B
+#define IXGBE_SUBDEV_ID_82599_LOM_SNAP6                0x2159
+#define IXGBE_SUBDEV_ID_82599_SFP_1OCP         0x000D
+#define IXGBE_SUBDEV_ID_82599_SFP_2OCP         0x0008
+#define IXGBE_SUBDEV_ID_82599_SFP_LOM_OEM1     0x8976
+#define IXGBE_SUBDEV_ID_82599_SFP_LOM_OEM2     0x06EE
 #define IXGBE_SUBDEV_ID_82599_ECNA_DP    0x0470
-#define IXGBE_SUBDEV_ID_82599_LOM_SFP    0x8976
 #define IXGBE_DEV_ID_82599_SFP_EM        0x1507
 #define IXGBE_DEV_ID_82599_SFP_SF2       0x154D
 #define IXGBE_DEV_ID_82599EN_SFP         0x1557
 #define IXGBE_DEV_ID_X550EM_A_SFP      0x15CE
 
 /* VF Device IDs */
-#define IXGBE_DEV_ID_X550_VF_HV        0x1564
-#define IXGBE_DEV_ID_X550_VF           0x1565
-#define IXGBE_DEV_ID_X550EM_X_VF       0x15A8
-#define IXGBE_DEV_ID_X550EM_X_VF_HV    0x15A9
-#define IXGBE_DEV_ID_82599_VF           0x10ED
-#define IXGBE_DEV_ID_X540_VF            0x1515
+#define IXGBE_DEV_ID_82599_VF          0x10ED
+#define IXGBE_DEV_ID_X540_VF           0x1515
 #define IXGBE_DEV_ID_X550_VF           0x1565
 #define IXGBE_DEV_ID_X550EM_X_VF       0x15A8
 #define IXGBE_DEV_ID_X550EM_A_VF       0x15C5
@@ -548,6 +548,7 @@ struct ixgbe_thermal_sensor_data {
 /* DCB registers */
 #define MAX_TRAFFIC_CLASS        8
 #define X540_TRAFFIC_CLASS       4
+#define DEF_TRAFFIC_CLASS        1
 #define IXGBE_RMCS      0x03D00
 #define IXGBE_DPMCS     0x07F40
 #define IXGBE_PDPMCS    0x0CD00
@@ -697,16 +698,16 @@ struct ixgbe_thermal_sensor_data {
 #define IXGBE_FCDMARW   0x02420 /* FC Receive DMA RW */
 #define IXGBE_FCINVST0  0x03FC0 /* FC Invalid DMA Context Status Reg 0 */
 #define IXGBE_FCINVST(_i)       (IXGBE_FCINVST0 + ((_i) * 4))
-#define IXGBE_FCBUFF_VALID      (1 << 0)   /* DMA Context Valid */
-#define IXGBE_FCBUFF_BUFFSIZE   (3 << 3)   /* User Buffer Size */
-#define IXGBE_FCBUFF_WRCONTX    (1 << 7)   /* 0: Initiator, 1: Target */
+#define IXGBE_FCBUFF_VALID      BIT(0)    /* DMA Context Valid */
+#define IXGBE_FCBUFF_BUFFSIZE   (3u << 3) /* User Buffer Size */
+#define IXGBE_FCBUFF_WRCONTX    BIT(7)    /* 0: Initiator, 1: Target */
 #define IXGBE_FCBUFF_BUFFCNT    0x0000ff00 /* Number of User Buffers */
 #define IXGBE_FCBUFF_OFFSET     0xffff0000 /* User Buffer Offset */
 #define IXGBE_FCBUFF_BUFFSIZE_SHIFT  3
 #define IXGBE_FCBUFF_BUFFCNT_SHIFT   8
 #define IXGBE_FCBUFF_OFFSET_SHIFT    16
-#define IXGBE_FCDMARW_WE        (1 << 14)   /* Write enable */
-#define IXGBE_FCDMARW_RE        (1 << 15)   /* Read enable */
+#define IXGBE_FCDMARW_WE        BIT(14)   /* Write enable */
+#define IXGBE_FCDMARW_RE        BIT(15)   /* Read enable */
 #define IXGBE_FCDMARW_FCOESEL   0x000001ff  /* FC X_ID: 11 bits */
 #define IXGBE_FCDMARW_LASTSIZE  0xffff0000  /* Last User Buffer Size */
 #define IXGBE_FCDMARW_LASTSIZE_SHIFT 16
@@ -723,23 +724,23 @@ struct ixgbe_thermal_sensor_data {
 #define IXGBE_FCFLT     0x05108 /* FC FLT Context */
 #define IXGBE_FCFLTRW   0x05110 /* FC Filter RW Control */
 #define IXGBE_FCPARAM   0x051d8 /* FC Offset Parameter */
-#define IXGBE_FCFLT_VALID       (1 << 0)   /* Filter Context Valid */
-#define IXGBE_FCFLT_FIRST       (1 << 1)   /* Filter First */
+#define IXGBE_FCFLT_VALID       BIT(0)   /* Filter Context Valid */
+#define IXGBE_FCFLT_FIRST       BIT(1)   /* Filter First */
 #define IXGBE_FCFLT_SEQID       0x00ff0000 /* Sequence ID */
 #define IXGBE_FCFLT_SEQCNT      0xff000000 /* Sequence Count */
-#define IXGBE_FCFLTRW_RVALDT    (1 << 13)  /* Fast Re-Validation */
-#define IXGBE_FCFLTRW_WE        (1 << 14)  /* Write Enable */
-#define IXGBE_FCFLTRW_RE        (1 << 15)  /* Read Enable */
+#define IXGBE_FCFLTRW_RVALDT    BIT(13)  /* Fast Re-Validation */
+#define IXGBE_FCFLTRW_WE        BIT(14)  /* Write Enable */
+#define IXGBE_FCFLTRW_RE        BIT(15)  /* Read Enable */
 /* FCoE Receive Control */
 #define IXGBE_FCRXCTRL  0x05100 /* FC Receive Control */
-#define IXGBE_FCRXCTRL_FCOELLI  (1 << 0)   /* Low latency interrupt */
-#define IXGBE_FCRXCTRL_SAVBAD   (1 << 1)   /* Save Bad Frames */
-#define IXGBE_FCRXCTRL_FRSTRDH  (1 << 2)   /* EN 1st Read Header */
-#define IXGBE_FCRXCTRL_LASTSEQH (1 << 3)   /* EN Last Header in Seq */
-#define IXGBE_FCRXCTRL_ALLH     (1 << 4)   /* EN All Headers */
-#define IXGBE_FCRXCTRL_FRSTSEQH (1 << 5)   /* EN 1st Seq. Header */
-#define IXGBE_FCRXCTRL_ICRC     (1 << 6)   /* Ignore Bad FC CRC */
-#define IXGBE_FCRXCTRL_FCCRCBO  (1 << 7)   /* FC CRC Byte Ordering */
+#define IXGBE_FCRXCTRL_FCOELLI  BIT(0)   /* Low latency interrupt */
+#define IXGBE_FCRXCTRL_SAVBAD   BIT(1)   /* Save Bad Frames */
+#define IXGBE_FCRXCTRL_FRSTRDH  BIT(2)   /* EN 1st Read Header */
+#define IXGBE_FCRXCTRL_LASTSEQH BIT(3)   /* EN Last Header in Seq */
+#define IXGBE_FCRXCTRL_ALLH     BIT(4)   /* EN All Headers */
+#define IXGBE_FCRXCTRL_FRSTSEQH BIT(5)   /* EN 1st Seq. Header */
+#define IXGBE_FCRXCTRL_ICRC     BIT(6)   /* Ignore Bad FC CRC */
+#define IXGBE_FCRXCTRL_FCCRCBO  BIT(7)   /* FC CRC Byte Ordering */
 #define IXGBE_FCRXCTRL_FCOEVER  0x00000f00 /* FCoE Version: 4 bits */
 #define IXGBE_FCRXCTRL_FCOEVER_SHIFT 8
 /* FCoE Redirection */
@@ -1060,15 +1061,9 @@ struct ixgbe_thermal_sensor_data {
 #define IXGBE_TIC_DW2(_i) (0x082B0 + ((_i) * 4))
 #define IXGBE_TDPROBE     0x07F20
 #define IXGBE_TXBUFCTRL   0x0C600
-#define IXGBE_TXBUFDATA0  0x0C610
-#define IXGBE_TXBUFDATA1  0x0C614
-#define IXGBE_TXBUFDATA2  0x0C618
-#define IXGBE_TXBUFDATA3  0x0C61C
+#define IXGBE_TXBUFDATA(_i) (0x0C610 + ((_i) * 4)) /* 4 of these (0-3) */
 #define IXGBE_RXBUFCTRL   0x03600
-#define IXGBE_RXBUFDATA0  0x03610
-#define IXGBE_RXBUFDATA1  0x03614
-#define IXGBE_RXBUFDATA2  0x03618
-#define IXGBE_RXBUFDATA3  0x0361C
+#define IXGBE_RXBUFDATA(_i) (0x03610 + ((_i) * 4)) /* 4 of these (0-3) */
 #define IXGBE_PCIE_DIAG(_i)     (0x11090 + ((_i) * 4)) /* 8 of these */
 #define IXGBE_RFVAL     0x050A4
 #define IXGBE_MDFTC1    0x042B8
@@ -1131,6 +1126,7 @@ struct ixgbe_thermal_sensor_data {
 #define IXGBE_XPCSS     0x04290
 #define IXGBE_MFLCN     0x04294
 #define IXGBE_SERDESC   0x04298
+#define IXGBE_MAC_SGMII_BUSY 0x04298
 #define IXGBE_MACS      0x0429C
 #define IXGBE_AUTOC     0x042A0
 #define IXGBE_LINKS     0x042A4
@@ -1255,20 +1251,20 @@ struct ixgbe_thermal_sensor_data {
 #define IXGBE_DCA_RXCTRL_CPUID_MASK 0x0000001F /* Rx CPUID Mask */
 #define IXGBE_DCA_RXCTRL_CPUID_MASK_82599  0xFF000000 /* Rx CPUID Mask */
 #define IXGBE_DCA_RXCTRL_CPUID_SHIFT_82599 24 /* Rx CPUID Shift */
-#define IXGBE_DCA_RXCTRL_DESC_DCA_EN (1 << 5) /* DCA Rx Desc enable */
-#define IXGBE_DCA_RXCTRL_HEAD_DCA_EN (1 << 6) /* DCA Rx Desc header enable */
-#define IXGBE_DCA_RXCTRL_DATA_DCA_EN (1 << 7) /* DCA Rx Desc payload enable */
-#define IXGBE_DCA_RXCTRL_DESC_RRO_EN (1 << 9) /* DCA Rx rd Desc Relax Order */
-#define IXGBE_DCA_RXCTRL_DATA_WRO_EN (1 << 13) /* Rx wr data Relax Order */
-#define IXGBE_DCA_RXCTRL_HEAD_WRO_EN (1 << 15) /* Rx wr header RO */
+#define IXGBE_DCA_RXCTRL_DESC_DCA_EN BIT(5) /* DCA Rx Desc enable */
+#define IXGBE_DCA_RXCTRL_HEAD_DCA_EN BIT(6) /* DCA Rx Desc header enable */
+#define IXGBE_DCA_RXCTRL_DATA_DCA_EN BIT(7) /* DCA Rx Desc payload enable */
+#define IXGBE_DCA_RXCTRL_DESC_RRO_EN BIT(9) /* DCA Rx rd Desc Relax Order */
+#define IXGBE_DCA_RXCTRL_DATA_WRO_EN BIT(13) /* Rx wr data Relax Order */
+#define IXGBE_DCA_RXCTRL_HEAD_WRO_EN BIT(15) /* Rx wr header RO */
 
 #define IXGBE_DCA_TXCTRL_CPUID_MASK 0x0000001F /* Tx CPUID Mask */
 #define IXGBE_DCA_TXCTRL_CPUID_MASK_82599  0xFF000000 /* Tx CPUID Mask */
 #define IXGBE_DCA_TXCTRL_CPUID_SHIFT_82599 24 /* Tx CPUID Shift */
-#define IXGBE_DCA_TXCTRL_DESC_DCA_EN (1 << 5) /* DCA Tx Desc enable */
-#define IXGBE_DCA_TXCTRL_DESC_RRO_EN (1 << 9) /* Tx rd Desc Relax Order */
-#define IXGBE_DCA_TXCTRL_DESC_WRO_EN (1 << 11) /* Tx Desc writeback RO bit */
-#define IXGBE_DCA_TXCTRL_DATA_RRO_EN (1 << 13) /* Tx rd data Relax Order */
+#define IXGBE_DCA_TXCTRL_DESC_DCA_EN BIT(5) /* DCA Tx Desc enable */
+#define IXGBE_DCA_TXCTRL_DESC_RRO_EN BIT(9) /* Tx rd Desc Relax Order */
+#define IXGBE_DCA_TXCTRL_DESC_WRO_EN BIT(11) /* Tx Desc writeback RO bit */
+#define IXGBE_DCA_TXCTRL_DATA_RRO_EN BIT(13) /* Tx rd data Relax Order */
 #define IXGBE_DCA_MAX_QUEUES_82598   16 /* DCA regs only on 16 queues */
 
 /* MSCA Bit Masks */
@@ -1747,7 +1743,7 @@ enum {
 #define IXGBE_ETQF_TX_ANTISPOOF        0x20000000 /* bit 29 */
 #define IXGBE_ETQF_1588         0x40000000 /* bit 30 */
 #define IXGBE_ETQF_FILTER_EN    0x80000000 /* bit 31 */
-#define IXGBE_ETQF_POOL_ENABLE   (1 << 26) /* bit 26 */
+#define IXGBE_ETQF_POOL_ENABLE   BIT(26) /* bit 26 */
 #define IXGBE_ETQF_POOL_SHIFT          20
 
 #define IXGBE_ETQS_RX_QUEUE     0x007F0000 /* bits 22:16 */
@@ -1873,20 +1869,20 @@ enum {
 #define IXGBE_AUTOC_1G_PMA_PMD_SHIFT   9
 #define IXGBE_AUTOC_10G_PMA_PMD_MASK   0x00000180
 #define IXGBE_AUTOC_10G_PMA_PMD_SHIFT  7
-#define IXGBE_AUTOC_10G_XAUI   (0x0 << IXGBE_AUTOC_10G_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC_10G_KX4    (0x1 << IXGBE_AUTOC_10G_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC_10G_CX4    (0x2 << IXGBE_AUTOC_10G_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC_1G_BX      (0x0 << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC_1G_KX      (0x1 << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC_1G_SFI     (0x0 << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC_1G_KX_BX   (0x1 << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC_10G_XAUI   (0u << IXGBE_AUTOC_10G_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC_10G_KX4    (1u << IXGBE_AUTOC_10G_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC_10G_CX4    (2u << IXGBE_AUTOC_10G_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC_1G_BX      (0u << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC_1G_KX      (1u << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC_1G_SFI     (0u << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC_1G_KX_BX   (1u << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
 
 #define IXGBE_AUTOC2_UPPER_MASK  0xFFFF0000
 #define IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_MASK  0x00030000
 #define IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT 16
-#define IXGBE_AUTOC2_10G_KR  (0x0 << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC2_10G_XFI (0x1 << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC2_10G_SFI (0x2 << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC2_10G_KR  (0u << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC2_10G_XFI (1u << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC2_10G_SFI (2u << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT)
 #define IXGBE_AUTOC2_LINK_DISABLE_ON_D3_MASK  0x50000000
 #define IXGBE_AUTOC2_LINK_DISABLE_MASK        0x70000000
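
The AUTOC/AUTOC2 field values switch from 0x0/0x1/0x2 literals to 0u/1u/2u so every shifted constant in the file is unsigned; with a signed int left operand, a value shifted into bit 31 is undefined behaviour in C, and using unsigned literals uniformly keeps that from creeping in as shift amounts change. A standalone illustration:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* With a 32-bit int, (1 << 31) is undefined behaviour;
	 * the unsigned form below is fully defined.
	 */
	uint32_t mask = 1u << 31;

	printf("mask = 0x%08" PRIx32 "\n", mask);
	return 0;
}
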
 
@@ -2123,6 +2119,7 @@ enum {
 #define IXGBE_SAN_MAC_ADDR_PORT1_OFFSET  0x3
 #define IXGBE_DEVICE_CAPS_ALLOW_ANY_SFP  0x1
 #define IXGBE_DEVICE_CAPS_FCOE_OFFLOADS  0x2
+#define IXGBE_DEVICE_CAPS_NO_CROSSTALK_WR      BIT(7)
 #define IXGBE_FW_LESM_PARAMETERS_PTR     0x2
 #define IXGBE_FW_LESM_STATE_1            0x1
 #define IXGBE_FW_LESM_STATE_ENABLED      0x8000 /* LESM Enable bit */
@@ -2838,15 +2835,15 @@ struct ixgbe_adv_tx_context_desc {
 #define IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP 0x00002000 /* IPSec Type ESP */
 #define IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN 0x00004000/* ESP Encrypt Enable */
 #define IXGBE_ADVTXT_TUCMD_FCOE      0x00008000       /* FCoE Frame Type */
-#define IXGBE_ADVTXD_FCOEF_EOF_MASK  (0x3 << 10)      /* FC EOF index */
-#define IXGBE_ADVTXD_FCOEF_SOF       ((1 << 2) << 10) /* FC SOF index */
-#define IXGBE_ADVTXD_FCOEF_PARINC    ((1 << 3) << 10) /* Rel_Off in F_CTL */
-#define IXGBE_ADVTXD_FCOEF_ORIE      ((1 << 4) << 10) /* Orientation: End */
-#define IXGBE_ADVTXD_FCOEF_ORIS      ((1 << 5) << 10) /* Orientation: Start */
-#define IXGBE_ADVTXD_FCOEF_EOF_N     (0x0 << 10)      /* 00: EOFn */
-#define IXGBE_ADVTXD_FCOEF_EOF_T     (0x1 << 10)      /* 01: EOFt */
-#define IXGBE_ADVTXD_FCOEF_EOF_NI    (0x2 << 10)      /* 10: EOFni */
-#define IXGBE_ADVTXD_FCOEF_EOF_A     (0x3 << 10)      /* 11: EOFa */
+#define IXGBE_ADVTXD_FCOEF_SOF       (BIT(2) << 10) /* FC SOF index */
+#define IXGBE_ADVTXD_FCOEF_PARINC    (BIT(3) << 10) /* Rel_Off in F_CTL */
+#define IXGBE_ADVTXD_FCOEF_ORIE      (BIT(4) << 10) /* Orientation: End */
+#define IXGBE_ADVTXD_FCOEF_ORIS      (BIT(5) << 10) /* Orientation: Start */
+#define IXGBE_ADVTXD_FCOEF_EOF_N     (0u << 10)  /* 00: EOFn */
+#define IXGBE_ADVTXD_FCOEF_EOF_T     (1u << 10)  /* 01: EOFt */
+#define IXGBE_ADVTXD_FCOEF_EOF_NI    (2u << 10)  /* 10: EOFni */
+#define IXGBE_ADVTXD_FCOEF_EOF_A     (3u << 10)  /* 11: EOFa */
+#define IXGBE_ADVTXD_FCOEF_EOF_MASK  (3u << 10)  /* FC EOF index */
 #define IXGBE_ADVTXD_L4LEN_SHIFT     8  /* Adv ctxt L4LEN shift */
 #define IXGBE_ADVTXD_MSS_SHIFT       16  /* Adv ctxt MSS shift */
 
@@ -3581,7 +3578,7 @@ struct ixgbe_info {
 
 #define IXGBE_FUSES0_GROUP(_i)         (0x11158 + ((_i) * 4))
 #define IXGBE_FUSES0_300MHZ            BIT(5)
-#define IXGBE_FUSES0_REV_MASK          (3 << 6)
+#define IXGBE_FUSES0_REV_MASK          (3u << 6)
 
 #define IXGBE_KRM_PORT_CAR_GEN_CTRL(P) ((P) ? 0x8010 : 0x4010)
 #define IXGBE_KRM_LINK_CTRL_1(P)       ((P) ? 0x820C : 0x420C)
@@ -3595,25 +3592,25 @@ struct ixgbe_info {
 #define IXGBE_KRM_TX_COEFF_CTRL_1(P)   ((P) ? 0x9520 : 0x5520)
 #define IXGBE_KRM_RX_ANA_CTL(P)                ((P) ? 0x9A00 : 0x5A00)
 
-#define IXGBE_KRM_PORT_CAR_GEN_CTRL_NELB_32B           (1 << 9)
-#define IXGBE_KRM_PORT_CAR_GEN_CTRL_NELB_KRPCS         (1 << 11)
+#define IXGBE_KRM_PORT_CAR_GEN_CTRL_NELB_32B           BIT(9)
+#define IXGBE_KRM_PORT_CAR_GEN_CTRL_NELB_KRPCS         BIT(11)
 
-#define IXGBE_KRM_LINK_CTRL_1_TETH_FORCE_SPEED_MASK    (0x7 << 8)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_FORCE_SPEED_1G      (2 << 8)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_FORCE_SPEED_10G     (4 << 8)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_FORCE_SPEED_MASK    (7u << 8)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_FORCE_SPEED_1G      (2u << 8)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_FORCE_SPEED_10G     (4u << 8)
 #define IXGBE_KRM_LINK_CTRL_1_TETH_AN_SGMII_EN         BIT(12)
 #define IXGBE_KRM_LINK_CTRL_1_TETH_AN_CLAUSE_37_EN     BIT(13)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_FEC_REQ          (1 << 14)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_FEC          (1 << 15)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_KX           (1 << 16)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_KR           (1 << 18)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_EEE_CAP_KX          (1 << 24)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_EEE_CAP_KR          (1 << 26)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_ENABLE           (1 << 29)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_RESTART          (1 << 31)
-
-#define IXGBE_KRM_AN_CNTL_1_SYM_PAUSE                  (1 << 28)
-#define IXGBE_KRM_AN_CNTL_1_ASM_PAUSE                  (1 << 29)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_FEC_REQ          BIT(14)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_FEC          BIT(15)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_KX           BIT(16)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_KR           BIT(18)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_EEE_CAP_KX          BIT(24)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_EEE_CAP_KR          BIT(26)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_ENABLE           BIT(29)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_RESTART          BIT(31)
+
+#define IXGBE_KRM_AN_CNTL_1_SYM_PAUSE                  BIT(28)
+#define IXGBE_KRM_AN_CNTL_1_ASM_PAUSE                  BIT(29)
 
 #define IXGBE_KRM_AN_CNTL_8_LINEAR                     BIT(0)
 #define IXGBE_KRM_AN_CNTL_8_LIMITING                   BIT(1)
@@ -3621,28 +3618,28 @@ struct ixgbe_info {
 #define IXGBE_KRM_SGMII_CTRL_MAC_TAR_FORCE_100_D       BIT(12)
 #define IXGBE_KRM_SGMII_CTRL_MAC_TAR_FORCE_10_D                BIT(19)
 
-#define IXGBE_KRM_DSP_TXFFE_STATE_C0_EN                        (1 << 6)
-#define IXGBE_KRM_DSP_TXFFE_STATE_CP1_CN1_EN           (1 << 15)
-#define IXGBE_KRM_DSP_TXFFE_STATE_CO_ADAPT_EN          (1 << 16)
+#define IXGBE_KRM_DSP_TXFFE_STATE_C0_EN                        BIT(6)
+#define IXGBE_KRM_DSP_TXFFE_STATE_CP1_CN1_EN           BIT(15)
+#define IXGBE_KRM_DSP_TXFFE_STATE_CO_ADAPT_EN          BIT(16)
 
-#define IXGBE_KRM_RX_TRN_LINKUP_CTRL_CONV_WO_PROTOCOL  (1 << 4)
-#define IXGBE_KRM_RX_TRN_LINKUP_CTRL_PROTOCOL_BYPASS   (1 << 2)
+#define IXGBE_KRM_RX_TRN_LINKUP_CTRL_CONV_WO_PROTOCOL  BIT(4)
+#define IXGBE_KRM_RX_TRN_LINKUP_CTRL_PROTOCOL_BYPASS   BIT(2)
 
-#define IXGBE_KRM_PMD_DFX_BURNIN_TX_RX_KR_LB_MASK      (0x3 << 16)
+#define IXGBE_KRM_PMD_DFX_BURNIN_TX_RX_KR_LB_MASK      (3u << 16)
 
-#define IXGBE_KRM_TX_COEFF_CTRL_1_CMINUS1_OVRRD_EN     (1 << 1)
-#define IXGBE_KRM_TX_COEFF_CTRL_1_CPLUS1_OVRRD_EN      (1 << 2)
-#define IXGBE_KRM_TX_COEFF_CTRL_1_CZERO_EN             (1 << 3)
-#define IXGBE_KRM_TX_COEFF_CTRL_1_OVRRD_EN             (1 << 31)
+#define IXGBE_KRM_TX_COEFF_CTRL_1_CMINUS1_OVRRD_EN     BIT(1)
+#define IXGBE_KRM_TX_COEFF_CTRL_1_CPLUS1_OVRRD_EN      BIT(2)
+#define IXGBE_KRM_TX_COEFF_CTRL_1_CZERO_EN             BIT(3)
+#define IXGBE_KRM_TX_COEFF_CTRL_1_OVRRD_EN             BIT(31)
 
 #define IXGBE_KX4_LINK_CNTL_1                          0x4C
-#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_CAP_KX           (1 << 16)
-#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_CAP_KX4          (1 << 17)
-#define IXGBE_KX4_LINK_CNTL_1_TETH_EEE_CAP_KX          (1 << 24)
-#define IXGBE_KX4_LINK_CNTL_1_TETH_EEE_CAP_KX4         (1 << 25)
-#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_ENABLE           (1 << 29)
-#define IXGBE_KX4_LINK_CNTL_1_TETH_FORCE_LINK_UP       (1 << 30)
-#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_RESTART          (1 << 31)
+#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_CAP_KX           BIT(16)
+#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_CAP_KX4          BIT(17)
+#define IXGBE_KX4_LINK_CNTL_1_TETH_EEE_CAP_KX          BIT(24)
+#define IXGBE_KX4_LINK_CNTL_1_TETH_EEE_CAP_KX4         BIT(25)
+#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_ENABLE           BIT(29)
+#define IXGBE_KX4_LINK_CNTL_1_TETH_FORCE_LINK_UP       BIT(30)
+#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_RESTART          BIT(31)
 
 #define IXGBE_SB_IOSF_INDIRECT_CTRL            0x00011144
 #define IXGBE_SB_IOSF_INDIRECT_DATA            0x00011148
@@ -3658,7 +3655,7 @@ struct ixgbe_info {
 #define IXGBE_SB_IOSF_CTRL_TARGET_SELECT_SHIFT 28
 #define IXGBE_SB_IOSF_CTRL_TARGET_SELECT_MASK  0x7
 #define IXGBE_SB_IOSF_CTRL_BUSY_SHIFT          31
-#define IXGBE_SB_IOSF_CTRL_BUSY                (1 << IXGBE_SB_IOSF_CTRL_BUSY_SHIFT)
+#define IXGBE_SB_IOSF_CTRL_BUSY                BIT(IXGBE_SB_IOSF_CTRL_BUSY_SHIFT)
 #define IXGBE_SB_IOSF_TARGET_KR_PHY    0
 #define IXGBE_SB_IOSF_TARGET_KX4_UNIPHY        1
 #define IXGBE_SB_IOSF_TARGET_KX4_PCS0  2
index 40824d8..f2b1d48 100644
@@ -214,8 +214,8 @@ s32 ixgbe_init_eeprom_params_X540(struct ixgbe_hw *hw)
                eec = IXGBE_READ_REG(hw, IXGBE_EEC(hw));
                eeprom_size = (u16)((eec & IXGBE_EEC_SIZE) >>
                                    IXGBE_EEC_SIZE_SHIFT);
-               eeprom->word_size = 1 << (eeprom_size +
-                                         IXGBE_EEPROM_WORD_SIZE_SHIFT);
+               eeprom->word_size = BIT(eeprom_size +
+                                       IXGBE_EEPROM_WORD_SIZE_SHIFT);
 
                hw_dbg(hw, "Eeprom params: type = %d, size = %d\n",
                       eeprom->type, eeprom->word_size);
index c71e93e..19b75cd 100644
@@ -335,8 +335,8 @@ static s32 ixgbe_init_eeprom_params_X550(struct ixgbe_hw *hw)
                eec = IXGBE_READ_REG(hw, IXGBE_EEC(hw));
                eeprom_size = (u16)((eec & IXGBE_EEC_SIZE) >>
                                    IXGBE_EEC_SIZE_SHIFT);
-               eeprom->word_size = 1 << (eeprom_size +
-                                         IXGBE_EEPROM_WORD_SIZE_SHIFT);
+               eeprom->word_size = BIT(eeprom_size +
+                                       IXGBE_EEPROM_WORD_SIZE_SHIFT);
 
                hw_dbg(hw, "Eeprom params: type = %d, size = %d\n",
                       eeprom->type, eeprom->word_size);
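
In both the X540 and X550 hunks above, word_size is a power of two derived from the EEC size field, so BIT(eeprom_size + IXGBE_EEPROM_WORD_SIZE_SHIFT) computes exactly what the open-coded shift did. A user-space sketch of the arithmetic, assuming the driver's shift constant of 6 (the value in ixgbe_type.h):

#include <stdio.h>

#define BIT(nr) (1UL << (nr))
#define IXGBE_EEPROM_WORD_SIZE_SHIFT 6	/* assumed from ixgbe_type.h */

int main(void)
{
	unsigned int size_field;

	/* EEC size field 0..3 -> 64, 128, 256, 512 words */
	for (size_field = 0; size_field < 4; size_field++)
		printf("size field %u -> %lu words\n", size_field,
		       BIT(size_field + IXGBE_EEPROM_WORD_SIZE_SHIFT));
	return 0;
}
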
@@ -2646,9 +2646,9 @@ static void ixgbe_set_ethertype_anti_spoofing_X550(struct ixgbe_hw *hw,
 
        pfvfspoof = IXGBE_READ_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg));
        if (enable)
-               pfvfspoof |= (1 << vf_target_shift);
+               pfvfspoof |= BIT(vf_target_shift);
        else
-               pfvfspoof &= ~(1 << vf_target_shift);
+               pfvfspoof &= ~BIT(vf_target_shift);
 
        IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg), pfvfspoof);
 }
@@ -2765,7 +2765,7 @@ static s32 ixgbe_acquire_swfw_sync_x550em_a(struct ixgbe_hw *hw, u32 mask)
                        ixgbe_release_swfw_sync_X540(hw, hmask);
                if (status != IXGBE_ERR_TOKEN_RETRY)
                        return status;
-               udelay(FW_PHY_TOKEN_DELAY * 1000);
+               msleep(FW_PHY_TOKEN_DELAY);
        }
 
        return status;
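
The retry delay here is measured in milliseconds, and udelay() busy-spins for its whole argument, so udelay(FW_PHY_TOKEN_DELAY * 1000) burned CPU for the entire wait. In process context, msleep() yields the CPU instead; the general rule from Documentation/timers/timers-howto.txt is to spin only for microsecond-scale, atomic-context waits. A hedged kernel-style sketch:

#include <linux/delay.h>

/* Sketch: sleep rather than spin for millisecond-scale waits.
 * msleep() may sleep a little longer than asked, which is fine
 * for a retry backoff like this one.
 */
static void token_retry_backoff(unsigned int delay_ms)
{
	msleep(delay_ms);	/* was: udelay(delay_ms * 1000) */
}
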
@@ -2908,7 +2908,7 @@ static const struct ixgbe_mac_operations mac_ops_X550EM_x = {
        .get_media_type         = &ixgbe_get_media_type_X550em,
        .get_san_mac_addr       = NULL,
        .get_wwn_prefix         = NULL,
-       .setup_link             = NULL, /* defined later */
+       .setup_link             = &ixgbe_setup_mac_link_X540,
        .get_link_capabilities  = &ixgbe_get_link_capabilities_X550em,
        .get_bus_info           = &ixgbe_get_bus_info_X550em,
        .setup_sfp              = ixgbe_setup_sfp_modules_X550em,
@@ -2932,7 +2932,7 @@ static struct ixgbe_mac_operations mac_ops_x550em_a = {
        .setup_sfp              = ixgbe_setup_sfp_modules_X550em,
        .acquire_swfw_sync      = ixgbe_acquire_swfw_sync_x550em_a,
        .release_swfw_sync      = ixgbe_release_swfw_sync_x550em_a,
-       .setup_fc               = ixgbe_setup_fc_generic,
+       .setup_fc               = ixgbe_setup_fc_x550em,
        .read_iosf_sb_reg       = ixgbe_read_iosf_sb_reg_x550a,
        .write_iosf_sb_reg      = ixgbe_write_iosf_sb_reg_x550a,
 };
index 5843458..ae09d60 100644
 #define IXGBE_DEV_ID_X550_VF           0x1565
 #define IXGBE_DEV_ID_X550EM_X_VF       0x15A8
 
+#define IXGBE_DEV_ID_82599_VF_HV       0x152E
+#define IXGBE_DEV_ID_X540_VF_HV                0x1530
+#define IXGBE_DEV_ID_X550_VF_HV                0x1564
+#define IXGBE_DEV_ID_X550EM_X_VF_HV    0x15A9
+
 #define IXGBE_VF_IRQ_CLEAR_MASK                7
 #define IXGBE_VF_MAX_TX_QUEUES         8
 #define IXGBE_VF_MAX_RX_QUEUES         8
@@ -74,7 +79,7 @@ typedef u32 ixgbe_link_speed;
 #define IXGBE_RXDCTL_RLPML_EN  0x00008000
 
 /* DCA Control */
-#define IXGBE_DCA_TXCTRL_TX_WB_RO_EN (1 << 11) /* Tx Desc writeback RO bit */
+#define IXGBE_DCA_TXCTRL_TX_WB_RO_EN BIT(11) /* Tx Desc writeback RO bit */
 
 /* PSRTYPE bit definitions */
 #define IXGBE_PSRTYPE_TCPHDR   0x00000010
@@ -296,16 +301,16 @@ struct ixgbe_adv_tx_context_desc {
 #define IXGBE_TXDCTL_SWFLSH            0x04000000 /* Tx Desc. wr-bk flushing */
 #define IXGBE_TXDCTL_WTHRESH_SHIFT     16         /* shift to WTHRESH bits */
 
-#define IXGBE_DCA_RXCTRL_DESC_DCA_EN   (1 << 5)  /* Rx Desc enable */
-#define IXGBE_DCA_RXCTRL_HEAD_DCA_EN   (1 << 6)  /* Rx Desc header ena */
-#define IXGBE_DCA_RXCTRL_DATA_DCA_EN   (1 << 7)  /* Rx Desc payload ena */
-#define IXGBE_DCA_RXCTRL_DESC_RRO_EN   (1 << 9)  /* Rx rd Desc Relax Order */
-#define IXGBE_DCA_RXCTRL_DATA_WRO_EN   (1 << 13) /* Rx wr data Relax Order */
-#define IXGBE_DCA_RXCTRL_HEAD_WRO_EN   (1 << 15) /* Rx wr header RO */
-
-#define IXGBE_DCA_TXCTRL_DESC_DCA_EN   (1 << 5)  /* DCA Tx Desc enable */
-#define IXGBE_DCA_TXCTRL_DESC_RRO_EN   (1 << 9)  /* Tx rd Desc Relax Order */
-#define IXGBE_DCA_TXCTRL_DESC_WRO_EN   (1 << 11) /* Tx Desc writeback RO bit */
-#define IXGBE_DCA_TXCTRL_DATA_RRO_EN   (1 << 13) /* Tx rd data Relax Order */
+#define IXGBE_DCA_RXCTRL_DESC_DCA_EN   BIT(5)  /* Rx Desc enable */
+#define IXGBE_DCA_RXCTRL_HEAD_DCA_EN   BIT(6)  /* Rx Desc header ena */
+#define IXGBE_DCA_RXCTRL_DATA_DCA_EN   BIT(7)  /* Rx Desc payload ena */
+#define IXGBE_DCA_RXCTRL_DESC_RRO_EN   BIT(9)  /* Rx rd Desc Relax Order */
+#define IXGBE_DCA_RXCTRL_DATA_WRO_EN   BIT(13) /* Rx wr data Relax Order */
+#define IXGBE_DCA_RXCTRL_HEAD_WRO_EN   BIT(15) /* Rx wr header RO */
+
+#define IXGBE_DCA_TXCTRL_DESC_DCA_EN   BIT(5)  /* DCA Tx Desc enable */
+#define IXGBE_DCA_TXCTRL_DESC_RRO_EN   BIT(9)  /* Tx rd Desc Relax Order */
+#define IXGBE_DCA_TXCTRL_DESC_WRO_EN   BIT(11) /* Tx Desc writeback RO bit */
+#define IXGBE_DCA_TXCTRL_DATA_RRO_EN   BIT(13) /* Tx rd data Relax Order */
 
 #endif /* _IXGBEVF_DEFINES_H_ */
index d7aa4b2..508e72c 100644
 
 #define IXGBE_ALL_RAR_ENTRIES 16
 
+enum {NETDEV_STATS, IXGBEVF_STATS};
+
 struct ixgbe_stats {
        char stat_string[ETH_GSTRING_LEN];
-       struct {
-               int sizeof_stat;
-               int stat_offset;
-               int base_stat_offset;
-               int saved_reset_offset;
-       };
+       int type;
+       int sizeof_stat;
+       int stat_offset;
 };
 
-#define IXGBEVF_STAT(m, b, r) { \
-       .sizeof_stat = FIELD_SIZEOF(struct ixgbevf_adapter, m), \
-       .stat_offset = offsetof(struct ixgbevf_adapter, m), \
-       .base_stat_offset = offsetof(struct ixgbevf_adapter, b), \
-       .saved_reset_offset = offsetof(struct ixgbevf_adapter, r) \
+#define IXGBEVF_STAT(_name, _stat) { \
+       .stat_string = _name, \
+       .type = IXGBEVF_STATS, \
+       .sizeof_stat = FIELD_SIZEOF(struct ixgbevf_adapter, _stat), \
+       .stat_offset = offsetof(struct ixgbevf_adapter, _stat) \
 }
 
-#define IXGBEVF_ZSTAT(m) { \
-       .sizeof_stat = FIELD_SIZEOF(struct ixgbevf_adapter, m), \
-       .stat_offset = offsetof(struct ixgbevf_adapter, m), \
-       .base_stat_offset = -1, \
-       .saved_reset_offset = -1 \
+#define IXGBEVF_NETDEV_STAT(_net_stat) { \
+       .stat_string = #_net_stat, \
+       .type = NETDEV_STATS, \
+       .sizeof_stat = FIELD_SIZEOF(struct net_device_stats, _net_stat), \
+       .stat_offset = offsetof(struct net_device_stats, _net_stat) \
 }
 
-static const struct ixgbe_stats ixgbe_gstrings_stats[] = {
-       {"rx_packets", IXGBEVF_STAT(stats.vfgprc, stats.base_vfgprc,
-                                   stats.saved_reset_vfgprc)},
-       {"tx_packets", IXGBEVF_STAT(stats.vfgptc, stats.base_vfgptc,
-                                   stats.saved_reset_vfgptc)},
-       {"rx_bytes", IXGBEVF_STAT(stats.vfgorc, stats.base_vfgorc,
-                                 stats.saved_reset_vfgorc)},
-       {"tx_bytes", IXGBEVF_STAT(stats.vfgotc, stats.base_vfgotc,
-                                 stats.saved_reset_vfgotc)},
-       {"tx_busy", IXGBEVF_ZSTAT(tx_busy)},
-       {"tx_restart_queue", IXGBEVF_ZSTAT(restart_queue)},
-       {"tx_timeout_count", IXGBEVF_ZSTAT(tx_timeout_count)},
-       {"multicast", IXGBEVF_STAT(stats.vfmprc, stats.base_vfmprc,
-                                  stats.saved_reset_vfmprc)},
-       {"rx_csum_offload_errors", IXGBEVF_ZSTAT(hw_csum_rx_error)},
-#ifdef BP_EXTENDED_STATS
-       {"rx_bp_poll_yield", IXGBEVF_ZSTAT(bp_rx_yields)},
-       {"rx_bp_cleaned", IXGBEVF_ZSTAT(bp_rx_cleaned)},
-       {"rx_bp_misses", IXGBEVF_ZSTAT(bp_rx_missed)},
-       {"tx_bp_napi_yield", IXGBEVF_ZSTAT(bp_tx_yields)},
-       {"tx_bp_cleaned", IXGBEVF_ZSTAT(bp_tx_cleaned)},
-       {"tx_bp_misses", IXGBEVF_ZSTAT(bp_tx_missed)},
-#endif
+static struct ixgbe_stats ixgbevf_gstrings_stats[] = {
+       IXGBEVF_NETDEV_STAT(rx_packets),
+       IXGBEVF_NETDEV_STAT(tx_packets),
+       IXGBEVF_NETDEV_STAT(rx_bytes),
+       IXGBEVF_NETDEV_STAT(tx_bytes),
+       IXGBEVF_STAT("tx_busy", tx_busy),
+       IXGBEVF_STAT("tx_restart_queue", restart_queue),
+       IXGBEVF_STAT("tx_timeout_count", tx_timeout_count),
+       IXGBEVF_NETDEV_STAT(multicast),
+       IXGBEVF_STAT("rx_csum_offload_errors", hw_csum_rx_error),
 };
 
-#define IXGBE_QUEUE_STATS_LEN 0
-#define IXGBE_GLOBAL_STATS_LEN ARRAY_SIZE(ixgbe_gstrings_stats)
+#define IXGBEVF_QUEUE_STATS_LEN ( \
+       (((struct ixgbevf_adapter *)netdev_priv(netdev))->num_tx_queues + \
+        ((struct ixgbevf_adapter *)netdev_priv(netdev))->num_rx_queues) * \
+        (sizeof(struct ixgbe_stats) / sizeof(u64)))
+#define IXGBEVF_GLOBAL_STATS_LEN ARRAY_SIZE(ixgbevf_gstrings_stats)
 
-#define IXGBEVF_STATS_LEN (IXGBE_GLOBAL_STATS_LEN + IXGBE_QUEUE_STATS_LEN)
+#define IXGBEVF_STATS_LEN (IXGBEVF_GLOBAL_STATS_LEN + IXGBEVF_QUEUE_STATS_LEN)
 static const char ixgbe_gstrings_test[][ETH_GSTRING_LEN] = {
        "Register test  (offline)",
        "Link test   (on/offline)"
 };
 
-#define IXGBE_TEST_LEN (sizeof(ixgbe_gstrings_test) / ETH_GSTRING_LEN)
+#define IXGBEVF_TEST_LEN (sizeof(ixgbe_gstrings_test) / ETH_GSTRING_LEN)
 
 static int ixgbevf_get_settings(struct net_device *netdev,
                                struct ethtool_cmd *ecmd)
@@ -177,7 +166,8 @@ static void ixgbevf_get_regs(struct net_device *netdev,
 
        memset(p, 0, regs_len);
 
-       regs->version = (1 << 24) | hw->revision_id << 16 | hw->device_id;
+       /* generate a number suitable for ethtool's register version */
+       regs->version = (1u << 24) | (hw->revision_id << 16) | hw->device_id;
 
        /* General Registers */
        regs_buff[0] = IXGBE_READ_REG(hw, IXGBE_VFCTRL);
@@ -392,13 +382,13 @@ clear_reset:
        return err;
 }
 
-static int ixgbevf_get_sset_count(struct net_device *dev, int stringset)
+static int ixgbevf_get_sset_count(struct net_device *netdev, int stringset)
 {
        switch (stringset) {
        case ETH_SS_TEST:
-               return IXGBE_TEST_LEN;
+               return IXGBEVF_TEST_LEN;
        case ETH_SS_STATS:
-               return IXGBE_GLOBAL_STATS_LEN;
+               return IXGBEVF_STATS_LEN;
        default:
                return -EINVAL;
        }
@@ -408,70 +398,138 @@ static void ixgbevf_get_ethtool_stats(struct net_device *netdev,
                                      struct ethtool_stats *stats, u64 *data)
 {
        struct ixgbevf_adapter *adapter = netdev_priv(netdev);
-       char *base = (char *)adapter;
-       int i;
-#ifdef BP_EXTENDED_STATS
-       u64 rx_yields = 0, rx_cleaned = 0, rx_missed = 0,
-           tx_yields = 0, tx_cleaned = 0, tx_missed = 0;
+       struct rtnl_link_stats64 temp;
+       const struct rtnl_link_stats64 *net_stats;
+       unsigned int start;
+       struct ixgbevf_ring *ring;
+       int i, j;
+       char *p;
 
-       for (i = 0; i < adapter->num_rx_queues; i++) {
-               rx_yields += adapter->rx_ring[i]->stats.yields;
-               rx_cleaned += adapter->rx_ring[i]->stats.cleaned;
-               rx_yields += adapter->rx_ring[i]->stats.yields;
-       }
+       ixgbevf_update_stats(adapter);
+       net_stats = dev_get_stats(netdev, &temp);
+       for (i = 0; i < IXGBEVF_GLOBAL_STATS_LEN; i++) {
+               switch (ixgbevf_gstrings_stats[i].type) {
+               case NETDEV_STATS:
+                       p = (char *)net_stats +
+                                       ixgbevf_gstrings_stats[i].stat_offset;
+                       break;
+               case IXGBEVF_STATS:
+                       p = (char *)adapter +
+                                       ixgbevf_gstrings_stats[i].stat_offset;
+                       break;
+               default:
+                       data[i] = 0;
+                       continue;
+               }
 
-       for (i = 0; i < adapter->num_tx_queues; i++) {
-               tx_yields += adapter->tx_ring[i]->stats.yields;
-               tx_cleaned += adapter->tx_ring[i]->stats.cleaned;
-               tx_yields += adapter->tx_ring[i]->stats.yields;
+               data[i] = (ixgbevf_gstrings_stats[i].sizeof_stat ==
+                          sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
        }
 
-       adapter->bp_rx_yields = rx_yields;
-       adapter->bp_rx_cleaned = rx_cleaned;
-       adapter->bp_rx_missed = rx_missed;
+       /* populate Tx queue data */
+       for (j = 0; j < adapter->num_tx_queues; j++) {
+               ring = adapter->tx_ring[j];
+               if (!ring) {
+                       data[i++] = 0;
+                       data[i++] = 0;
+#ifdef BP_EXTENDED_STATS
+                       data[i++] = 0;
+                       data[i++] = 0;
+                       data[i++] = 0;
+#endif
+                       continue;
+               }
 
-       adapter->bp_tx_yields = tx_yields;
-       adapter->bp_tx_cleaned = tx_cleaned;
-       adapter->bp_tx_missed = tx_missed;
+               do {
+                       start = u64_stats_fetch_begin_irq(&ring->syncp);
+                       data[i]   = ring->stats.packets;
+                       data[i + 1] = ring->stats.bytes;
+               } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+               i += 2;
+#ifdef BP_EXTENDED_STATS
+               data[i] = ring->stats.yields;
+               data[i + 1] = ring->stats.misses;
+               data[i + 2] = ring->stats.cleaned;
+               i += 3;
 #endif
+       }
 
-       ixgbevf_update_stats(adapter);
-       for (i = 0; i < IXGBE_GLOBAL_STATS_LEN; i++) {
-               char *p = base + ixgbe_gstrings_stats[i].stat_offset;
-               char *b = base + ixgbe_gstrings_stats[i].base_stat_offset;
-               char *r = base + ixgbe_gstrings_stats[i].saved_reset_offset;
-
-               if (ixgbe_gstrings_stats[i].sizeof_stat == sizeof(u64)) {
-                       if (ixgbe_gstrings_stats[i].base_stat_offset >= 0)
-                               data[i] = *(u64 *)p - *(u64 *)b + *(u64 *)r;
-                       else
-                               data[i] = *(u64 *)p;
-               } else {
-                       if (ixgbe_gstrings_stats[i].base_stat_offset >= 0)
-                               data[i] = *(u32 *)p - *(u32 *)b + *(u32 *)r;
-                       else
-                               data[i] = *(u32 *)p;
+       /* populate Rx queue data */
+       for (j = 0; j < adapter->num_rx_queues; j++) {
+               ring = adapter->rx_ring[j];
+               if (!ring) {
+                       data[i++] = 0;
+                       data[i++] = 0;
+#ifdef BP_EXTENDED_STATS
+                       data[i++] = 0;
+                       data[i++] = 0;
+                       data[i++] = 0;
+#endif
+                       continue;
                }
+
+               do {
+                       start = u64_stats_fetch_begin_irq(&ring->syncp);
+                       data[i]   = ring->stats.packets;
+                       data[i + 1] = ring->stats.bytes;
+               } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+               i += 2;
+#ifdef BP_EXTENDED_STATS
+               data[i] = ring->stats.yields;
+               data[i + 1] = ring->stats.misses;
+               data[i + 2] = ring->stats.cleaned;
+               i += 3;
+#endif
        }
 }
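
The per-ring packet/byte counters are u64s updated without locks, so readers use the u64_stats seqcount: fetch_begin/fetch_retry loop until a consistent snapshot is read. The pattern used above, in isolation:

#include <linux/u64_stats_sync.h>

/* Sketch of the reader side: loop until the writer did not update
 * the counters mid-read (needed on 32-bit hosts, where a u64 store
 * is not atomic).
 */
static void snapshot_ring(struct ixgbevf_ring *ring,
			  u64 *packets, u64 *bytes)
{
	unsigned int start;

	do {
		start = u64_stats_fetch_begin_irq(&ring->syncp);
		*packets = ring->stats.packets;
		*bytes = ring->stats.bytes;
	} while (u64_stats_fetch_retry_irq(&ring->syncp, start));
}
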
 
 static void ixgbevf_get_strings(struct net_device *netdev, u32 stringset,
                                u8 *data)
 {
+       struct ixgbevf_adapter *adapter = netdev_priv(netdev);
        char *p = (char *)data;
        int i;
 
        switch (stringset) {
        case ETH_SS_TEST:
                memcpy(data, *ixgbe_gstrings_test,
-                      IXGBE_TEST_LEN * ETH_GSTRING_LEN);
+                      IXGBEVF_TEST_LEN * ETH_GSTRING_LEN);
                break;
        case ETH_SS_STATS:
-               for (i = 0; i < IXGBE_GLOBAL_STATS_LEN; i++) {
-                       memcpy(p, ixgbe_gstrings_stats[i].stat_string,
+               for (i = 0; i < IXGBEVF_GLOBAL_STATS_LEN; i++) {
+                       memcpy(p, ixgbevf_gstrings_stats[i].stat_string,
                               ETH_GSTRING_LEN);
                        p += ETH_GSTRING_LEN;
                }
+
+               for (i = 0; i < adapter->num_tx_queues; i++) {
+                       sprintf(p, "tx_queue_%u_packets", i);
+                       p += ETH_GSTRING_LEN;
+                       sprintf(p, "tx_queue_%u_bytes", i);
+                       p += ETH_GSTRING_LEN;
+#ifdef BP_EXTENDED_STATS
+                       sprintf(p, "tx_queue_%u_bp_napi_yield", i);
+                       p += ETH_GSTRING_LEN;
+                       sprintf(p, "tx_queue_%u_bp_misses", i);
+                       p += ETH_GSTRING_LEN;
+                       sprintf(p, "tx_queue_%u_bp_cleaned", i);
+                       p += ETH_GSTRING_LEN;
+#endif /* BP_EXTENDED_STATS */
+               }
+               for (i = 0; i < adapter->num_rx_queues; i++) {
+                       sprintf(p, "rx_queue_%u_packets", i);
+                       p += ETH_GSTRING_LEN;
+                       sprintf(p, "rx_queue_%u_bytes", i);
+                       p += ETH_GSTRING_LEN;
+#ifdef BP_EXTENDED_STATS
+                       sprintf(p, "rx_queue_%u_bp_poll_yield", i);
+                       p += ETH_GSTRING_LEN;
+                       sprintf(p, "rx_queue_%u_bp_misses", i);
+                       p += ETH_GSTRING_LEN;
+                       sprintf(p, "rx_queue_%u_bp_cleaned", i);
+                       p += ETH_GSTRING_LEN;
+#endif /* BP_EXTENDED_STATS */
+               }
                break;
        }
 }
index 5ac60ee..d5944c3 100644
@@ -166,10 +166,10 @@ struct ixgbevf_ring {
 
 #define MAXIMUM_ETHERNET_VLAN_SIZE (VLAN_ETH_FRAME_LEN + ETH_FCS_LEN)
 
-#define IXGBE_TX_FLAGS_CSUM            (u32)(1)
-#define IXGBE_TX_FLAGS_VLAN            (u32)(1 << 1)
-#define IXGBE_TX_FLAGS_TSO             (u32)(1 << 2)
-#define IXGBE_TX_FLAGS_IPV4            (u32)(1 << 3)
+#define IXGBE_TX_FLAGS_CSUM            BIT(0)
+#define IXGBE_TX_FLAGS_VLAN            BIT(1)
+#define IXGBE_TX_FLAGS_TSO             BIT(2)
+#define IXGBE_TX_FLAGS_IPV4            BIT(3)
 #define IXGBE_TX_FLAGS_VLAN_MASK       0xffff0000
 #define IXGBE_TX_FLAGS_VLAN_PRIO_MASK  0x0000e000
 #define IXGBE_TX_FLAGS_VLAN_SHIFT      16
@@ -422,16 +422,6 @@ struct ixgbevf_adapter {
        unsigned int tx_ring_count;
        unsigned int rx_ring_count;
 
-#ifdef BP_EXTENDED_STATS
-       u64 bp_rx_yields;
-       u64 bp_rx_cleaned;
-       u64 bp_rx_missed;
-
-       u64 bp_tx_yields;
-       u64 bp_tx_cleaned;
-       u64 bp_tx_missed;
-#endif
-
        u8 __iomem *io_addr; /* Mainly for iounmap use */
        u32 link_speed;
        bool link_up;
@@ -460,9 +450,13 @@ enum ixbgevf_state_t {
 
 enum ixgbevf_boards {
        board_82599_vf,
+       board_82599_vf_hv,
        board_X540_vf,
+       board_X540_vf_hv,
        board_X550_vf,
+       board_X550_vf_hv,
        board_X550EM_x_vf,
+       board_X550EM_x_vf_hv,
 };
 
 enum ixgbevf_xcast_modes {
@@ -477,6 +471,12 @@ extern const struct ixgbevf_info ixgbevf_X550_vf_info;
 extern const struct ixgbevf_info ixgbevf_X550EM_x_vf_info;
 extern const struct ixgbe_mbx_operations ixgbevf_mbx_ops;
 
+extern const struct ixgbevf_info ixgbevf_82599_vf_hv_info;
+extern const struct ixgbevf_info ixgbevf_X540_vf_hv_info;
+extern const struct ixgbevf_info ixgbevf_X550_vf_hv_info;
+extern const struct ixgbevf_info ixgbevf_X550EM_x_vf_hv_info;
+extern const struct ixgbe_mbx_operations ixgbevf_hv_mbx_ops;
+
 /* needed by ethtool.c */
 extern const char ixgbevf_driver_name[];
 extern const char ixgbevf_driver_version[];
index 007cbe0..5e348b1 100644
@@ -62,10 +62,14 @@ static char ixgbevf_copyright[] =
        "Copyright (c) 2009 - 2015 Intel Corporation.";
 
 static const struct ixgbevf_info *ixgbevf_info_tbl[] = {
-       [board_82599_vf] = &ixgbevf_82599_vf_info,
-       [board_X540_vf]  = &ixgbevf_X540_vf_info,
-       [board_X550_vf]  = &ixgbevf_X550_vf_info,
-       [board_X550EM_x_vf] = &ixgbevf_X550EM_x_vf_info,
+       [board_82599_vf]        = &ixgbevf_82599_vf_info,
+       [board_82599_vf_hv]     = &ixgbevf_82599_vf_hv_info,
+       [board_X540_vf]         = &ixgbevf_X540_vf_info,
+       [board_X540_vf_hv]      = &ixgbevf_X540_vf_hv_info,
+       [board_X550_vf]         = &ixgbevf_X550_vf_info,
+       [board_X550_vf_hv]      = &ixgbevf_X550_vf_hv_info,
+       [board_X550EM_x_vf]     = &ixgbevf_X550EM_x_vf_info,
+       [board_X550EM_x_vf_hv]  = &ixgbevf_X550EM_x_vf_hv_info,
 };
 
 /* ixgbevf_pci_tbl - PCI Device ID Table
@@ -78,9 +82,13 @@ static const struct ixgbevf_info *ixgbevf_info_tbl[] = {
  */
 static const struct pci_device_id ixgbevf_pci_tbl[] = {
        {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_82599_VF), board_82599_vf },
+       {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_82599_VF_HV), board_82599_vf_hv },
        {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X540_VF), board_X540_vf },
+       {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X540_VF_HV), board_X540_vf_hv },
        {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550_VF), board_X550_vf },
+       {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550_VF_HV), board_X550_vf_hv },
        {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550EM_X_VF), board_X550EM_x_vf },
+       {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550EM_X_VF_HV), board_X550EM_x_vf_hv},
        /* required last entry */
        {0, }
 };
@@ -1056,7 +1064,7 @@ static int ixgbevf_poll(struct napi_struct *napi, int budget)
        if (!test_bit(__IXGBEVF_DOWN, &adapter->state) &&
            !test_bit(__IXGBEVF_REMOVING, &adapter->state))
                ixgbevf_irq_enable_queues(adapter,
-                                         1 << q_vector->v_idx);
+                                         BIT(q_vector->v_idx));
 
        return 0;
 }
@@ -1158,14 +1166,14 @@ static void ixgbevf_configure_msix(struct ixgbevf_adapter *adapter)
                }
 
                /* add q_vector eims value to global eims_enable_mask */
-               adapter->eims_enable_mask |= 1 << v_idx;
+               adapter->eims_enable_mask |= BIT(v_idx);
 
                ixgbevf_write_eitr(q_vector);
        }
 
        ixgbevf_set_ivar(adapter, -1, 1, v_idx);
        /* setup eims_other and add value to global eims_enable_mask */
-       adapter->eims_other = 1 << v_idx;
+       adapter->eims_other = BIT(v_idx);
        adapter->eims_enable_mask |= adapter->eims_other;
 }
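
EIMS/EICS are per-vector bitmaps, one bit per MSI-X vector, so arming vector v_idx is just BIT(v_idx), and eims_enable_mask is the union of those bits plus the mailbox/link ("other") vector. A sketch of the mask construction (num_q_vectors is an illustrative name):

	u32 eims_enable_mask = 0;
	int v_idx;

	for (v_idx = 0; v_idx < num_q_vectors; v_idx++)
		eims_enable_mask |= BIT(v_idx);	/* one queue vector each */

	eims_enable_mask |= BIT(num_q_vectors);	/* eims_other: mailbox/link */
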
 
@@ -1589,8 +1597,8 @@ static void ixgbevf_configure_tx_ring(struct ixgbevf_adapter *adapter,
        txdctl |= (8 << 16);    /* WTHRESH = 8 */
 
        /* Setting PTHRESH to 32 both improves performance */
-       txdctl |= (1 << 8) |    /* HTHRESH = 1 */
-                 32;          /* PTHRESH = 32 */
+       txdctl |= (1u << 8) |    /* HTHRESH = 1 */
+                  32;           /* PTHRESH = 32 */
 
        clear_bit(__IXGBEVF_HANG_CHECK_ARMED, &ring->state);
 
@@ -1646,7 +1654,7 @@ static void ixgbevf_setup_psrtype(struct ixgbevf_adapter *adapter)
                      IXGBE_PSRTYPE_L2HDR;
 
        if (adapter->num_rx_queues > 1)
-               psrtype |= 1 << 29;
+               psrtype |= BIT(29);
 
        IXGBE_WRITE_REG(hw, IXGBE_VFPSRTYPE, psrtype);
 }
@@ -1752,9 +1760,15 @@ static void ixgbevf_configure_rx_ring(struct ixgbevf_adapter *adapter,
        IXGBE_WRITE_REG(hw, IXGBE_VFRDLEN(reg_idx),
                        ring->count * sizeof(union ixgbe_adv_rx_desc));
 
+#ifndef CONFIG_SPARC
        /* enable relaxed ordering */
        IXGBE_WRITE_REG(hw, IXGBE_VFDCA_RXCTRL(reg_idx),
                        IXGBE_DCA_RXCTRL_DESC_RRO_EN);
+#else
+       IXGBE_WRITE_REG(hw, IXGBE_VFDCA_RXCTRL(reg_idx),
+                       IXGBE_DCA_RXCTRL_DESC_RRO_EN |
+                       IXGBE_DCA_RXCTRL_DATA_WRO_EN);
+#endif
 
        /* reset head and tail pointers */
        IXGBE_WRITE_REG(hw, IXGBE_VFRDH(reg_idx), 0);
@@ -1795,7 +1809,7 @@ static void ixgbevf_configure_rx(struct ixgbevf_adapter *adapter)
                ixgbevf_setup_vfmrqc(adapter);
 
        /* notify the PF of our intent to use this size of frame */
-       ixgbevf_rlpml_set_vf(hw, netdev->mtu + ETH_HLEN + ETH_FCS_LEN);
+       hw->mac.ops.set_rlpml(hw, netdev->mtu + ETH_HLEN + ETH_FCS_LEN);
 
        /* Setup the HW Rx Head and Tail Descriptor Pointers and
         * the Base and Length of the Rx Descriptor Ring
@@ -1908,7 +1922,7 @@ static void ixgbevf_set_rx_mode(struct net_device *netdev)
 
        spin_lock_bh(&adapter->mbx_lock);
 
-       hw->mac.ops.update_xcast_mode(hw, netdev, xcast_mode);
+       hw->mac.ops.update_xcast_mode(hw, xcast_mode);
 
        /* reprogram multicast list */
        hw->mac.ops.update_mc_addr_list(hw, netdev);
@@ -2056,7 +2070,7 @@ static void ixgbevf_negotiate_api(struct ixgbevf_adapter *adapter)
        spin_lock_bh(&adapter->mbx_lock);
 
        while (api[idx] != ixgbe_mbox_api_unknown) {
-               err = ixgbevf_negotiate_api_version(hw, api[idx]);
+               err = hw->mac.ops.negotiate_api_version(hw, api[idx]);
                if (!err)
                        break;
                idx++;
@@ -2797,7 +2811,7 @@ static void ixgbevf_check_hang_subtask(struct ixgbevf_adapter *adapter)
                struct ixgbevf_q_vector *qv = adapter->q_vector[i];
 
                if (qv->rx.ring || qv->tx.ring)
-                       eics |= 1 << i;
+                       eics |= BIT(i);
        }
 
        /* Cause software interrupt to ensure rings are cleaned */
@@ -3272,9 +3286,18 @@ static int ixgbevf_tso(struct ixgbevf_ring *tx_ring,
                       struct ixgbevf_tx_buffer *first,
                       u8 *hdr_len)
 {
+       u32 vlan_macip_lens, type_tucmd, mss_l4len_idx;
        struct sk_buff *skb = first->skb;
-       u32 vlan_macip_lens, type_tucmd;
-       u32 mss_l4len_idx, l4len;
+       union {
+               struct iphdr *v4;
+               struct ipv6hdr *v6;
+               unsigned char *hdr;
+       } ip;
+       union {
+               struct tcphdr *tcp;
+               unsigned char *hdr;
+       } l4;
+       u32 paylen, l4_offset;
        int err;
 
        if (skb->ip_summed != CHECKSUM_PARTIAL)
@@ -3287,49 +3310,53 @@ static int ixgbevf_tso(struct ixgbevf_ring *tx_ring,
        if (err < 0)
                return err;
 
+       ip.hdr = skb_network_header(skb);
+       l4.hdr = skb_checksum_start(skb);
+
        /* ADV DTYP TUCMD MKRLOC/ISCSIHEDLEN */
        type_tucmd = IXGBE_ADVTXD_TUCMD_L4T_TCP;
 
-       if (first->protocol == htons(ETH_P_IP)) {
-               struct iphdr *iph = ip_hdr(skb);
-
-               iph->tot_len = 0;
-               iph->check = 0;
-               tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr,
-                                                        iph->daddr, 0,
-                                                        IPPROTO_TCP,
-                                                        0);
+       /* initialize outer IP header fields */
+       if (ip.v4->version == 4) {
+               /* IP header will have to cancel out any data that
+                * is not a part of the outer IP header
+                */
+               ip.v4->check = csum_fold(csum_add(lco_csum(skb),
+                                                 csum_unfold(l4.tcp->check)));
                type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4;
+
+               ip.v4->tot_len = 0;
                first->tx_flags |= IXGBE_TX_FLAGS_TSO |
                                   IXGBE_TX_FLAGS_CSUM |
                                   IXGBE_TX_FLAGS_IPV4;
-       } else if (skb_is_gso_v6(skb)) {
-               ipv6_hdr(skb)->payload_len = 0;
-               tcp_hdr(skb)->check =
-                   ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
-                                    &ipv6_hdr(skb)->daddr,
-                                    0, IPPROTO_TCP, 0);
+       } else {
+               ip.v6->payload_len = 0;
                first->tx_flags |= IXGBE_TX_FLAGS_TSO |
                                   IXGBE_TX_FLAGS_CSUM;
        }
 
-       /* compute header lengths */
-       l4len = tcp_hdrlen(skb);
-       *hdr_len += l4len;
-       *hdr_len = skb_transport_offset(skb) + l4len;
+       /* determine offset of inner transport header */
+       l4_offset = l4.hdr - skb->data;
+
+       /* compute length of segmentation header */
+       *hdr_len = (l4.tcp->doff * 4) + l4_offset;
 
-       /* update GSO size and bytecount with header size */
+       /* remove payload length from inner checksum */
+       paylen = skb->len - l4_offset;
+       csum_replace_by_diff(&l4.tcp->check, htonl(paylen));
+
+       /* update gso size and bytecount with header size */
        first->gso_segs = skb_shinfo(skb)->gso_segs;
        first->bytecount += (first->gso_segs - 1) * *hdr_len;
 
        /* mss_l4len_id: use 1 as index for TSO */
-       mss_l4len_idx = l4len << IXGBE_ADVTXD_L4LEN_SHIFT;
+       mss_l4len_idx = (*hdr_len - l4_offset) << IXGBE_ADVTXD_L4LEN_SHIFT;
        mss_l4len_idx |= skb_shinfo(skb)->gso_size << IXGBE_ADVTXD_MSS_SHIFT;
-       mss_l4len_idx |= 1 << IXGBE_ADVTXD_IDX_SHIFT;
+       mss_l4len_idx |= (1u << IXGBE_ADVTXD_IDX_SHIFT);
 
        /* vlan_macip_lens: HEADLEN, MACLEN, VLAN tag */
-       vlan_macip_lens = skb_network_header_len(skb);
-       vlan_macip_lens |= skb_network_offset(skb) << IXGBE_ADVTXD_MACLEN_SHIFT;
+       vlan_macip_lens = l4.hdr - ip.hdr;
+       vlan_macip_lens |= (ip.hdr - skb->data) << IXGBE_ADVTXD_MACLEN_SHIFT;
        vlan_macip_lens |= first->tx_flags & IXGBE_TX_FLAGS_VLAN_MASK;
 
        ixgbevf_tx_ctxtdesc(tx_ring, vlan_macip_lens,
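
The rewritten TSO path avoids recomputing the pseudo-header checksum from scratch: the IPv4 header check is rebuilt via the local-checksum-offload helper lco_csum(), and the TCP length (header plus payload, which the stack folded into the pseudo-header sum) is subtracted back out with csum_replace_by_diff(), since the hardware re-adds per-segment lengths. The length bookkeeping reduces to this sketch, restating the code above:

	/* l4_offset: start of the TCP header within the frame;
	 * doff:      TCP header length in 32-bit words.
	 */
	u32 l4_offset = l4.hdr - skb->data;
	u32 hdr_len = l4_offset + l4.tcp->doff * 4;	/* MAC + IP + TCP */
	u32 paylen = skb->len - l4_offset;	/* TCP length in pseudo-header */

	csum_replace_by_diff(&l4.tcp->check, htonl(paylen));
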
@@ -3422,7 +3449,7 @@ static void ixgbevf_tx_olinfo_status(union ixgbe_adv_tx_desc *tx_desc,
 
        /* use index 1 context for TSO/FSO/FCOE */
        if (tx_flags & IXGBE_TX_FLAGS_TSO)
-               olinfo_status |= cpu_to_le32(1 << IXGBE_ADVTXD_IDX_SHIFT);
+               olinfo_status |= cpu_to_le32(1u << IXGBE_ADVTXD_IDX_SHIFT);
 
        /* Check Context must be set if Tx switch is enabled, which it
         * always is for case where virtual functions are running
@@ -3727,7 +3754,7 @@ static int ixgbevf_change_mtu(struct net_device *netdev, int new_mtu)
        netdev->mtu = new_mtu;
 
        /* notify the PF of our intent to use this size of frame */
-       ixgbevf_rlpml_set_vf(hw, max_frame);
+       hw->mac.ops.set_rlpml(hw, max_frame);
 
        return 0;
 }
@@ -3870,6 +3897,40 @@ static struct rtnl_link_stats64 *ixgbevf_get_stats(struct net_device *netdev,
        return stats;
 }
 
+#define IXGBEVF_MAX_MAC_HDR_LEN                127
+#define IXGBEVF_MAX_NETWORK_HDR_LEN    511
+
+static netdev_features_t
+ixgbevf_features_check(struct sk_buff *skb, struct net_device *dev,
+                      netdev_features_t features)
+{
+       unsigned int network_hdr_len, mac_hdr_len;
+
+       /* Make certain the headers can be described by a context descriptor */
+       mac_hdr_len = skb_network_header(skb) - skb->data;
+       if (unlikely(mac_hdr_len > IXGBEVF_MAX_MAC_HDR_LEN))
+               return features & ~(NETIF_F_HW_CSUM |
+                                   NETIF_F_SCTP_CRC |
+                                   NETIF_F_HW_VLAN_CTAG_TX |
+                                   NETIF_F_TSO |
+                                   NETIF_F_TSO6);
+
+       network_hdr_len = skb_checksum_start(skb) - skb_network_header(skb);
+       if (unlikely(network_hdr_len >  IXGBEVF_MAX_NETWORK_HDR_LEN))
+               return features & ~(NETIF_F_HW_CSUM |
+                                   NETIF_F_SCTP_CRC |
+                                   NETIF_F_TSO |
+                                   NETIF_F_TSO6);
+
+       /* We can only support IPV4 TSO in tunnels if we can mangle the
+        * inner IP ID field, so strip TSO if MANGLEID is not supported.
+        */
+       if (skb->encapsulation && !(features & NETIF_F_TSO_MANGLEID))
+               features &= ~NETIF_F_TSO;
+
+       return features;
+}
+
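
Switching from passthru_features_check to a driver hook means the stack asks ixgbevf, per packet, which offloads are safe for that packet's header layout; the core consults the hook from netif_skb_features() on the transmit path, roughly as below (paraphrased from net/core/dev.c, not verbatim):

	/* Sketch: how the core applies the per-packet hook at xmit time. */
	if (dev->netdev_ops->ndo_features_check)
		features &= dev->netdev_ops->ndo_features_check(skb, dev,
								features);
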
 static const struct net_device_ops ixgbevf_netdev_ops = {
        .ndo_open               = ixgbevf_open,
        .ndo_stop               = ixgbevf_close,
@@ -3888,7 +3949,7 @@ static const struct net_device_ops ixgbevf_netdev_ops = {
 #ifdef CONFIG_NET_POLL_CONTROLLER
        .ndo_poll_controller    = ixgbevf_netpoll,
 #endif
-       .ndo_features_check     = passthru_features_check,
+       .ndo_features_check     = ixgbevf_features_check,
 };
 
 static void ixgbevf_assign_netdev_ops(struct net_device *dev)
@@ -3999,23 +4060,31 @@ static int ixgbevf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
                              NETIF_F_HW_CSUM |
                              NETIF_F_SCTP_CRC;
 
-       netdev->features = netdev->hw_features |
-                          NETIF_F_HW_VLAN_CTAG_TX |
-                          NETIF_F_HW_VLAN_CTAG_RX |
-                          NETIF_F_HW_VLAN_CTAG_FILTER;
+#define IXGBEVF_GSO_PARTIAL_FEATURES (NETIF_F_GSO_GRE | \
+                                     NETIF_F_GSO_GRE_CSUM | \
+                                     NETIF_F_GSO_IPIP | \
+                                     NETIF_F_GSO_SIT | \
+                                     NETIF_F_GSO_UDP_TUNNEL | \
+                                     NETIF_F_GSO_UDP_TUNNEL_CSUM)
 
-       netdev->vlan_features |= NETIF_F_SG |
-                                NETIF_F_TSO |
-                                NETIF_F_TSO6 |
-                                NETIF_F_HW_CSUM |
-                                NETIF_F_SCTP_CRC;
+       netdev->gso_partial_features = IXGBEVF_GSO_PARTIAL_FEATURES;
+       netdev->hw_features |= NETIF_F_GSO_PARTIAL |
+                              IXGBEVF_GSO_PARTIAL_FEATURES;
 
-       netdev->mpls_features |= NETIF_F_HW_CSUM;
-       netdev->hw_enc_features |= NETIF_F_HW_CSUM;
+       netdev->features = netdev->hw_features;
 
        if (pci_using_dac)
                netdev->features |= NETIF_F_HIGHDMA;
 
+       netdev->vlan_features |= netdev->features | NETIF_F_TSO_MANGLEID;
+       netdev->mpls_features |= NETIF_F_HW_CSUM;
+       netdev->hw_enc_features |= netdev->vlan_features;
+
+       /* set this bit last since it cannot be part of vlan_features */
+       netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER |
+                           NETIF_F_HW_VLAN_CTAG_RX |
+                           NETIF_F_HW_VLAN_CTAG_TX;
+
        netdev->priv_flags |= IFF_UNICAST_FLT;
 
        if (IXGBE_REMOVED(hw->hw_addr)) {
index dc68fea..61a80da 100644
@@ -346,3 +346,14 @@ const struct ixgbe_mbx_operations ixgbevf_mbx_ops = {
        .check_for_rst  = ixgbevf_check_for_rst_vf,
 };
 
+/* Mailbox operations when running on Hyper-V.
+ * On Hyper-V, PF/VF communication does not go through the
+ * hardware mailbox; it uses a software-mediated path instead,
+ * so most mailbox operations are no-ops while running on
+ * Hyper-V.
+ */
+const struct ixgbe_mbx_operations ixgbevf_hv_mbx_ops = {
+       .init_params    = ixgbevf_init_mbx_params_vf,
+       .check_for_rst  = ixgbevf_check_for_rst_vf,
+};
index 4d613a4..e670d3b 100644
 #include "vf.h"
 #include "ixgbevf.h"
 
+/* On Hyper-V, resetting requires reading from this offset
+ * in the PCI config space; this is the mechanism used on
+ * Hyper-V to support PF/VF communication.
+ */
+#define IXGBE_HV_RESET_OFFSET           0x201
+
 /**
  *  ixgbevf_start_hw_vf - Prepare hardware for Tx/Rx
  *  @hw: pointer to hardware structure
@@ -125,6 +131,27 @@ static s32 ixgbevf_reset_hw_vf(struct ixgbe_hw *hw)
        return 0;
 }
 
+/**
+ * Hyper-V variant; the VF/PF communication is through the PCI
+ * config space.
+ */
+static s32 ixgbevf_hv_reset_hw_vf(struct ixgbe_hw *hw)
+{
+#if IS_ENABLED(CONFIG_PCI_MMCONFIG)
+       struct ixgbevf_adapter *adapter = hw->back;
+       int i;
+
+       for (i = 0; i < 6; i++)
+               pci_read_config_byte(adapter->pdev,
+                                    (i + IXGBE_HV_RESET_OFFSET),
+                                    &hw->mac.perm_addr[i]);
+       return 0;
+#else
+       pr_err("PCI_MMCONFIG needs to be enabled for Hyper-V\n");
+       return -EOPNOTSUPP;
+#endif
+}
+
 /**
  *  ixgbevf_stop_hw_vf - Generic stop Tx/Rx units
  *  @hw: pointer to hardware structure
@@ -258,6 +285,11 @@ static s32 ixgbevf_set_uc_addr_vf(struct ixgbe_hw *hw, u32 index, u8 *addr)
        return ret_val;
 }
 
+static s32 ixgbevf_hv_set_uc_addr_vf(struct ixgbe_hw *hw, u32 index, u8 *addr)
+{
+       return -EOPNOTSUPP;
+}
+
 /**
  * ixgbevf_get_reta_locked - get the RSS redirection table (RETA) contents.
  * @adapter: pointer to the port handle
@@ -416,6 +448,26 @@ static s32 ixgbevf_set_rar_vf(struct ixgbe_hw *hw, u32 index, u8 *addr,
        return ret_val;
 }
 
+/**
+ *  ixgbevf_hv_set_rar_vf - set device MAC address Hyper-V variant
+ *  @hw: pointer to hardware structure
+ *  @index: Receive address register to write
+ *  @addr: Address to put into receive address register
+ *  @vmdq: Unused in this implementation
+ *
+ * We don't really allow setting the device MAC address. However,
+ * if the address being set is the permanent MAC address we will
+ * permit that.
+ **/
+static s32 ixgbevf_hv_set_rar_vf(struct ixgbe_hw *hw, u32 index, u8 *addr,
+                                u32 vmdq)
+{
+       if (ether_addr_equal(addr, hw->mac.perm_addr))
+               return 0;
+
+       return -EOPNOTSUPP;
+}
+
 static void ixgbevf_write_msg_read_ack(struct ixgbe_hw *hw,
                                       u32 *msg, u16 size)
 {
@@ -472,16 +524,23 @@ static s32 ixgbevf_update_mc_addr_list_vf(struct ixgbe_hw *hw,
        return 0;
 }
 
+/**
+ * Hyper-V variant - just a stub.
+ */
+static s32 ixgbevf_hv_update_mc_addr_list_vf(struct ixgbe_hw *hw,
+                                            struct net_device *netdev)
+{
+       return -EOPNOTSUPP;
+}
+
 /**
  *  ixgbevf_update_xcast_mode - Update Multicast mode
  *  @hw: pointer to the HW structure
- *  @netdev: pointer to net device structure
  *  @xcast_mode: new multicast mode
  *
  *  Updates the Multicast Mode of VF.
  **/
-static s32 ixgbevf_update_xcast_mode(struct ixgbe_hw *hw,
-                                    struct net_device *netdev, int xcast_mode)
+static s32 ixgbevf_update_xcast_mode(struct ixgbe_hw *hw, int xcast_mode)
 {
        struct ixgbe_mbx_info *mbx = &hw->mbx;
        u32 msgbuf[2];
@@ -512,6 +571,14 @@ static s32 ixgbevf_update_xcast_mode(struct ixgbe_hw *hw,
        return 0;
 }
 
+/**
+ * Hyper-V variant - just a stub.
+ */
+static s32 ixgbevf_hv_update_xcast_mode(struct ixgbe_hw *hw, int xcast_mode)
+{
+       return -EOPNOTSUPP;
+}
+
 /**
  *  ixgbevf_set_vfta_vf - Set/Unset VLAN filter table address
  *  @hw: pointer to the HW structure
@@ -550,6 +617,15 @@ mbx_err:
        return err;
 }
 
+/**
+ * Hyper-V variant - just a stub.
+ */
+static s32 ixgbevf_hv_set_vfta_vf(struct ixgbe_hw *hw, u32 vlan, u32 vind,
+                                 bool vlan_on)
+{
+       return -EOPNOTSUPP;
+}
+
 /**
  *  ixgbevf_setup_mac_link_vf - Setup MAC link settings
  *  @hw: pointer to hardware structure
@@ -656,11 +732,72 @@ out:
 }
 
 /**
- *  ixgbevf_rlpml_set_vf - Set the maximum receive packet length
+ * Hyper-V variant; there is no mailbox communication.
+ */
+static s32 ixgbevf_hv_check_mac_link_vf(struct ixgbe_hw *hw,
+                                       ixgbe_link_speed *speed,
+                                       bool *link_up,
+                                       bool autoneg_wait_to_complete)
+{
+       struct ixgbe_mbx_info *mbx = &hw->mbx;
+       struct ixgbe_mac_info *mac = &hw->mac;
+       u32 links_reg;
+
+       /* If we were hit with a reset drop the link */
+       if (!mbx->ops.check_for_rst(hw) || !mbx->timeout)
+               mac->get_link_status = true;
+
+       if (!mac->get_link_status)
+               goto out;
+
+       /* if link status is down no point in checking to see if pf is up */
+       links_reg = IXGBE_READ_REG(hw, IXGBE_VFLINKS);
+       if (!(links_reg & IXGBE_LINKS_UP))
+               goto out;
+
+       /* for SFP+ modules and DA cables on 82599 it can take up to 500usecs
+        * before the link status is correct
+        */
+       if (mac->type == ixgbe_mac_82599_vf) {
+               int i;
+
+               for (i = 0; i < 5; i++) {
+                       udelay(100);
+                       links_reg = IXGBE_READ_REG(hw, IXGBE_VFLINKS);
+
+                       if (!(links_reg & IXGBE_LINKS_UP))
+                               goto out;
+               }
+       }
+
+       switch (links_reg & IXGBE_LINKS_SPEED_82599) {
+       case IXGBE_LINKS_SPEED_10G_82599:
+               *speed = IXGBE_LINK_SPEED_10GB_FULL;
+               break;
+       case IXGBE_LINKS_SPEED_1G_82599:
+               *speed = IXGBE_LINK_SPEED_1GB_FULL;
+               break;
+       case IXGBE_LINKS_SPEED_100_82599:
+               *speed = IXGBE_LINK_SPEED_100_FULL;
+               break;
+       }
+
+       /* if we passed all the tests above then the link is up and we no
+        * longer need to check for link
+        */
+       mac->get_link_status = false;
+
+out:
+       *link_up = !mac->get_link_status;
+       return 0;
+}
+
+/**
+ *  ixgbevf_set_rlpml_vf - Set the maximum receive packet length
  *  @hw: pointer to the HW structure
  *  @max_size: value to assign to max frame size
  **/
-void ixgbevf_rlpml_set_vf(struct ixgbe_hw *hw, u16 max_size)
+static void ixgbevf_set_rlpml_vf(struct ixgbe_hw *hw, u16 max_size)
 {
        u32 msgbuf[2];
 
@@ -670,11 +807,30 @@ void ixgbevf_rlpml_set_vf(struct ixgbe_hw *hw, u16 max_size)
 }
 
 /**
- *  ixgbevf_negotiate_api_version - Negotiate supported API version
+ * ixgbevf_hv_set_rlpml_vf - Set the maximum receive packet length
+ * @hw: pointer to the HW structure
+ * @max_size: value to assign to max frame size
+ * Hyper-V variant.
+ **/
+static void ixgbevf_hv_set_rlpml_vf(struct ixgbe_hw *hw, u16 max_size)
+{
+       u32 reg;
+
+       /* If we are on Hyper-V, we implement this functionality
+        * differently.
+        */
+       reg =  IXGBE_READ_REG(hw, IXGBE_VFRXDCTL(0));
+       /* CRC == 4 */
+       reg |= ((max_size + 4) | IXGBE_RXDCTL_RLPML_EN);
+       IXGBE_WRITE_REG(hw, IXGBE_VFRXDCTL(0), reg);
+}
+
+/**
+ *  ixgbevf_negotiate_api_version_vf - Negotiate supported API version
  *  @hw: pointer to the HW structure
  *  @api: integer containing requested API version
  **/
-int ixgbevf_negotiate_api_version(struct ixgbe_hw *hw, int api)
+static int ixgbevf_negotiate_api_version_vf(struct ixgbe_hw *hw, int api)
 {
        int err;
        u32 msg[3];
@@ -703,6 +859,21 @@ int ixgbevf_negotiate_api_version(struct ixgbe_hw *hw, int api)
        return err;
 }
 
+/**
+ *  ixgbevf_hv_negotiate_api_version_vf - Negotiate supported API version
+ *  @hw: pointer to the HW structure
+ *  @api: integer containing requested API version
+ *  Hyper-V version - only ixgbe_mbox_api_10 supported.
+ **/
+static int ixgbevf_hv_negotiate_api_version_vf(struct ixgbe_hw *hw, int api)
+{
+       /* Hyper-V only supports api version ixgbe_mbox_api_10 */
+       if (api != ixgbe_mbox_api_10)
+               return IXGBE_ERR_INVALID_ARGUMENT;
+
+       return 0;
+}
+
 int ixgbevf_get_queues(struct ixgbe_hw *hw, unsigned int *num_tcs,
                       unsigned int *default_tc)
 {
@@ -769,11 +940,30 @@ static const struct ixgbe_mac_operations ixgbevf_mac_ops = {
        .stop_adapter           = ixgbevf_stop_hw_vf,
        .setup_link             = ixgbevf_setup_mac_link_vf,
        .check_link             = ixgbevf_check_mac_link_vf,
+       .negotiate_api_version  = ixgbevf_negotiate_api_version_vf,
        .set_rar                = ixgbevf_set_rar_vf,
        .update_mc_addr_list    = ixgbevf_update_mc_addr_list_vf,
        .update_xcast_mode      = ixgbevf_update_xcast_mode,
        .set_uc_addr            = ixgbevf_set_uc_addr_vf,
        .set_vfta               = ixgbevf_set_vfta_vf,
+       .set_rlpml              = ixgbevf_set_rlpml_vf,
+};
+
+static const struct ixgbe_mac_operations ixgbevf_hv_mac_ops = {
+       .init_hw                = ixgbevf_init_hw_vf,
+       .reset_hw               = ixgbevf_hv_reset_hw_vf,
+       .start_hw               = ixgbevf_start_hw_vf,
+       .get_mac_addr           = ixgbevf_get_mac_addr_vf,
+       .stop_adapter           = ixgbevf_stop_hw_vf,
+       .setup_link             = ixgbevf_setup_mac_link_vf,
+       .check_link             = ixgbevf_hv_check_mac_link_vf,
+       .negotiate_api_version  = ixgbevf_hv_negotiate_api_version_vf,
+       .set_rar                = ixgbevf_hv_set_rar_vf,
+       .update_mc_addr_list    = ixgbevf_hv_update_mc_addr_list_vf,
+       .update_xcast_mode      = ixgbevf_hv_update_xcast_mode,
+       .set_uc_addr            = ixgbevf_hv_set_uc_addr_vf,
+       .set_vfta               = ixgbevf_hv_set_vfta_vf,
+       .set_rlpml              = ixgbevf_hv_set_rlpml_vf,
 };
 
 const struct ixgbevf_info ixgbevf_82599_vf_info = {
@@ -781,17 +971,37 @@ const struct ixgbevf_info ixgbevf_82599_vf_info = {
        .mac_ops = &ixgbevf_mac_ops,
 };
 
+const struct ixgbevf_info ixgbevf_82599_vf_hv_info = {
+       .mac = ixgbe_mac_82599_vf,
+       .mac_ops = &ixgbevf_hv_mac_ops,
+};
+
 const struct ixgbevf_info ixgbevf_X540_vf_info = {
        .mac = ixgbe_mac_X540_vf,
        .mac_ops = &ixgbevf_mac_ops,
 };
 
+const struct ixgbevf_info ixgbevf_X540_vf_hv_info = {
+       .mac = ixgbe_mac_X540_vf,
+       .mac_ops = &ixgbevf_hv_mac_ops,
+};
+
 const struct ixgbevf_info ixgbevf_X550_vf_info = {
        .mac = ixgbe_mac_X550_vf,
        .mac_ops = &ixgbevf_mac_ops,
 };
 
+const struct ixgbevf_info ixgbevf_X550_vf_hv_info = {
+       .mac = ixgbe_mac_X550_vf,
+       .mac_ops = &ixgbevf_hv_mac_ops,
+};
+
 const struct ixgbevf_info ixgbevf_X550EM_x_vf_info = {
        .mac = ixgbe_mac_X550EM_x_vf,
        .mac_ops = &ixgbevf_mac_ops,
 };
+
+const struct ixgbevf_info ixgbevf_X550EM_x_vf_hv_info = {
+       .mac = ixgbe_mac_X550EM_x_vf,
+       .mac_ops = &ixgbevf_hv_mac_ops,
+};
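
At probe time each *_vf_hv_info entry pairs the same MAC type with the Hyper-V ops table; a minimal selection sketch (running_on_hyperv() is a placeholder predicate — the real driver keys this off its PCI driver data):

    const struct ixgbevf_info *info = running_on_hyperv() ?
            &ixgbevf_82599_vf_hv_info : &ixgbevf_82599_vf_info;

    hw->mac.type = info->mac;
    hw->mac.ops = *info->mac_ops;   /* every later HW call dispatches here */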
index ef9f773..2cac610 100644 (file)
@@ -51,6 +51,7 @@ struct ixgbe_mac_operations {
        s32 (*get_mac_addr)(struct ixgbe_hw *, u8 *);
        s32 (*stop_adapter)(struct ixgbe_hw *);
        s32 (*get_bus_info)(struct ixgbe_hw *);
+       s32 (*negotiate_api_version)(struct ixgbe_hw *hw, int api);
 
        /* Link */
        s32 (*setup_link)(struct ixgbe_hw *, ixgbe_link_speed, bool, bool);
@@ -63,11 +64,12 @@ struct ixgbe_mac_operations {
        s32 (*set_uc_addr)(struct ixgbe_hw *, u32, u8 *);
        s32 (*init_rx_addrs)(struct ixgbe_hw *);
        s32 (*update_mc_addr_list)(struct ixgbe_hw *, struct net_device *);
-       s32 (*update_xcast_mode)(struct ixgbe_hw *, struct net_device *, int);
+       s32 (*update_xcast_mode)(struct ixgbe_hw *, int);
        s32 (*enable_mc)(struct ixgbe_hw *);
        s32 (*disable_mc)(struct ixgbe_hw *);
        s32 (*clear_vfta)(struct ixgbe_hw *);
        s32 (*set_vfta)(struct ixgbe_hw *, u32, u32, bool);
+       void (*set_rlpml)(struct ixgbe_hw *, u16);
 };
 
 enum ixgbe_mac_type {
@@ -207,8 +209,6 @@ static inline u32 ixgbe_read_reg_array(struct ixgbe_hw *hw, u32 reg,
 
 #define IXGBE_READ_REG_ARRAY(h, r, o) ixgbe_read_reg_array(h, r, o)
 
-void ixgbevf_rlpml_set_vf(struct ixgbe_hw *hw, u16 max_size);
-int ixgbevf_negotiate_api_version(struct ixgbe_hw *hw, int api);
 int ixgbevf_get_queues(struct ixgbe_hw *hw, unsigned int *num_tcs,
                       unsigned int *default_tc);
 int ixgbevf_get_reta_locked(struct ixgbe_hw *hw, u32 *reta, int num_rx_queues);
index d74f5f4..1799fe1 100644 (file)
@@ -152,7 +152,7 @@ static inline void korina_abort_dma(struct net_device *dev,
               writel(0x10, &ch->dmac);
 
               while (!(readl(&ch->dmas) & DMA_STAT_HALT))
-                      dev->trans_start = jiffies;
+                      netif_trans_update(dev);
 
               writel(0, &ch->dmas);
        }
@@ -283,7 +283,7 @@ static int korina_send_packet(struct sk_buff *skb, struct net_device *dev)
        }
        dma_cache_wback((u32) td, sizeof(*td));
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        spin_unlock_irqrestore(&lp->lock, flags);
 
        return NETDEV_TX_OK;
@@ -622,7 +622,7 @@ korina_tx_dma_interrupt(int irq, void *dev_id)
                                &(lp->tx_dma_regs->dmandptr));
                        lp->tx_chain_status = desc_empty;
                        lp->tx_chain_head = lp->tx_chain_tail;
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                }
                if (dmas & DMA_STAT_ERR)
                        printk(KERN_ERR "%s: DMA error\n", dev->name);
@@ -811,7 +811,7 @@ static int korina_init(struct net_device *dev)
        /* reset ethernet logic */
        writel(0, &lp->eth_regs->ethintfc);
        while ((readl(&lp->eth_regs->ethintfc) & ETH_INT_FC_RIP))
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
 
        /* Enable Ethernet Interface */
        writel(ETH_INT_FC_EN, &lp->eth_regs->ethintfc);
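
The dev->trans_start = jiffies writes in this and the following hunks all become netif_trans_update(); for reference, a sketch of that helper as defined in this kernel generation (paraphrased from include/linux/netdevice.h; treat as illustrative):

    static inline void netif_trans_update(struct net_device *dev)
    {
            struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

            if (txq->trans_start != jiffies)
                    txq->trans_start = jiffies;  /* skip redundant cacheline dirtying */
    }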
index b630ef1..dc82b1b 100644 (file)
@@ -519,7 +519,7 @@ ltq_etop_tx(struct sk_buff *skb, struct net_device *dev)
        byte_offset = CPHYSADDR(skb->data) % 16;
        ch->skb[ch->dma.desc] = skb;
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        spin_lock_irqsave(&priv->lock, flags);
        desc->addr = ((unsigned int) dma_map_single(NULL, skb->data, len,
@@ -657,7 +657,7 @@ ltq_etop_tx_timeout(struct net_device *dev)
        err = ltq_etop_hw_init(dev);
        if (err)
                goto err_hw;
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        netif_wake_queue(dev);
        return;
 
index 7fc4902..a6d26d3 100644 (file)
@@ -3354,8 +3354,7 @@ static int mvneta_percpu_notifier(struct notifier_block *nfb,
                /* Enable per-CPU interrupts on the CPU that is
                 * brought up.
                 */
-               smp_call_function_single(cpu, mvneta_percpu_enable,
-                                        pp, true);
+               mvneta_percpu_enable(pp);
 
                /* Enable per-CPU interrupt on the one CPU we care
                 * about.
@@ -3387,8 +3386,7 @@ static int mvneta_percpu_notifier(struct notifier_block *nfb,
                /* Disable per-CPU interrupts on the CPU that is
                 * brought down.
                 */
-               smp_call_function_single(cpu, mvneta_percpu_disable,
-                                        pp, true);
+               mvneta_percpu_disable(pp);
 
                break;
        case CPU_DEAD:
index 7ace07d..89d0d83 100644 (file)
@@ -979,8 +979,8 @@ static int pxa168_init_phy(struct net_device *dev)
                return 0;
 
        pep->phy = mdiobus_scan(pep->smi_bus, pep->phy_addr);
-       if (!pep->phy)
-               return -ENODEV;
+       if (IS_ERR(pep->phy))
+               return PTR_ERR(pep->phy);
 
        err = phy_connect_direct(dev, pep->phy, pxa168_eth_adjust_link,
                                 pep->phy_intf);
@@ -1295,7 +1295,7 @@ static int pxa168_eth_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
        stats->tx_bytes += length;
        stats->tx_packets++;
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        if (pep->tx_ring_size - pep->tx_desc_count <= 1) {
                /* We handled the current skb, but now we are out of space.*/
                netif_stop_queue(dev);
index ec0a221..467138b 100644 (file)
@@ -2418,7 +2418,7 @@ static int sky2_change_mtu(struct net_device *dev, int new_mtu)
        sky2_write32(hw, B0_IMSK, 0);
        sky2_read32(hw, B0_IMSK);
 
-       dev->trans_start = jiffies;     /* prevent tx timeout */
+       netif_trans_update(dev);        /* prevent tx timeout */
        napi_disable(&hw->napi);
        netif_tx_disable(dev);
 
index 0c51c69..249a458 100644 (file)
@@ -576,41 +576,48 @@ out:
 
        return res;
 }
-/*
- * Handling for queue buffers -- we allocate a bunch of memory and
- * register it in a memory region at HCA virtual address 0.  If the
- * requested size is > max_direct, we split the allocation into
- * multiple pages, so we don't require too much contiguous memory.
- */
 
-int mlx4_buf_alloc(struct mlx4_dev *dev, int size, int max_direct,
-                  struct mlx4_buf *buf, gfp_t gfp)
+static int mlx4_buf_direct_alloc(struct mlx4_dev *dev, int size,
+                                struct mlx4_buf *buf, gfp_t gfp)
 {
        dma_addr_t t;
 
-       if (size <= max_direct) {
-               buf->nbufs        = 1;
-               buf->npages       = 1;
-               buf->page_shift   = get_order(size) + PAGE_SHIFT;
-               buf->direct.buf   = dma_alloc_coherent(&dev->persist->pdev->dev,
-                                                      size, &t, gfp);
-               if (!buf->direct.buf)
-                       return -ENOMEM;
+       buf->nbufs        = 1;
+       buf->npages       = 1;
+       buf->page_shift   = get_order(size) + PAGE_SHIFT;
+       buf->direct.buf   =
+               dma_zalloc_coherent(&dev->persist->pdev->dev,
+                                   size, &t, gfp);
+       if (!buf->direct.buf)
+               return -ENOMEM;
 
-               buf->direct.map = t;
+       buf->direct.map = t;
 
-               while (t & ((1 << buf->page_shift) - 1)) {
-                       --buf->page_shift;
-                       buf->npages *= 2;
-               }
+       while (t & ((1 << buf->page_shift) - 1)) {
+               --buf->page_shift;
+               buf->npages *= 2;
+       }
 
-               memset(buf->direct.buf, 0, size);
+       return 0;
+}
+
+/* Handling for queue buffers -- we allocate a bunch of memory and
+ * register it in a memory region at HCA virtual address 0. If the
+ * requested size is > max_direct, we split the allocation into
+ * multiple pages, so we don't require too much contiguous memory.
+ */
+int mlx4_buf_alloc(struct mlx4_dev *dev, int size, int max_direct,
+                  struct mlx4_buf *buf, gfp_t gfp)
+{
+       if (size <= max_direct) {
+               return mlx4_buf_direct_alloc(dev, size, buf, gfp);
        } else {
+               dma_addr_t t;
                int i;
 
-               buf->direct.buf  = NULL;
-               buf->nbufs       = (size + PAGE_SIZE - 1) / PAGE_SIZE;
-               buf->npages      = buf->nbufs;
+               buf->direct.buf = NULL;
+               buf->nbufs      = (size + PAGE_SIZE - 1) / PAGE_SIZE;
+               buf->npages     = buf->nbufs;
                buf->page_shift  = PAGE_SHIFT;
                buf->page_list   = kcalloc(buf->nbufs, sizeof(*buf->page_list),
                                           gfp);
@@ -619,28 +626,12 @@ int mlx4_buf_alloc(struct mlx4_dev *dev, int size, int max_direct,
 
                for (i = 0; i < buf->nbufs; ++i) {
                        buf->page_list[i].buf =
-                               dma_alloc_coherent(&dev->persist->pdev->dev,
-                                                  PAGE_SIZE,
-                                                  &t, gfp);
+                               dma_zalloc_coherent(&dev->persist->pdev->dev,
+                                                   PAGE_SIZE, &t, gfp);
                        if (!buf->page_list[i].buf)
                                goto err_free;
 
                        buf->page_list[i].map = t;
-
-                       memset(buf->page_list[i].buf, 0, PAGE_SIZE);
-               }
-
-               if (BITS_PER_LONG == 64) {
-                       struct page **pages;
-                       pages = kmalloc(sizeof *pages * buf->nbufs, gfp);
-                       if (!pages)
-                               goto err_free;
-                       for (i = 0; i < buf->nbufs; ++i)
-                               pages[i] = virt_to_page(buf->page_list[i].buf);
-                       buf->direct.buf = vmap(pages, buf->nbufs, VM_MAP, PAGE_KERNEL);
-                       kfree(pages);
-                       if (!buf->direct.buf)
-                               goto err_free;
                }
        }
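
The dma_alloc_coherent()+memset() pairs removed above collapse into dma_zalloc_coherent(), which is behavior-preserving; its definition at the time was essentially the following (paraphrased from linux/dma-mapping.h):

    static inline void *dma_zalloc_coherent(struct device *dev, size_t size,
                                            dma_addr_t *dma_handle, gfp_t flag)
    {
            return dma_alloc_coherent(dev, size, dma_handle,
                                      flag | __GFP_ZERO);
    }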
 
@@ -655,15 +646,11 @@ EXPORT_SYMBOL_GPL(mlx4_buf_alloc);
 
 void mlx4_buf_free(struct mlx4_dev *dev, int size, struct mlx4_buf *buf)
 {
-       int i;
-
-       if (buf->nbufs == 1)
+       if (buf->nbufs == 1) {
                dma_free_coherent(&dev->persist->pdev->dev, size,
-                                 buf->direct.buf,
-                                 buf->direct.map);
-       else {
-               if (BITS_PER_LONG == 64)
-                       vunmap(buf->direct.buf);
+                                 buf->direct.buf, buf->direct.map);
+       } else {
+               int i;
 
                for (i = 0; i < buf->nbufs; ++i)
                        if (buf->page_list[i].buf)
@@ -789,7 +776,7 @@ void mlx4_db_free(struct mlx4_dev *dev, struct mlx4_db *db)
 EXPORT_SYMBOL_GPL(mlx4_db_free);
 
 int mlx4_alloc_hwq_res(struct mlx4_dev *dev, struct mlx4_hwq_resources *wqres,
-                      int size, int max_direct)
+                      int size)
 {
        int err;
 
@@ -799,7 +786,7 @@ int mlx4_alloc_hwq_res(struct mlx4_dev *dev, struct mlx4_hwq_resources *wqres,
 
        *wqres->db.db = 0;
 
-       err = mlx4_buf_alloc(dev, size, max_direct, &wqres->buf, GFP_KERNEL);
+       err = mlx4_buf_direct_alloc(dev, size, &wqres->buf, GFP_KERNEL);
        if (err)
                goto err_db;
 
index af975a2..132cea6 100644 (file)
@@ -73,22 +73,16 @@ int mlx4_en_create_cq(struct mlx4_en_priv *priv,
         */
        set_dev_node(&mdev->dev->persist->pdev->dev, node);
        err = mlx4_alloc_hwq_res(mdev->dev, &cq->wqres,
-                               cq->buf_size, 2 * PAGE_SIZE);
+                               cq->buf_size);
        set_dev_node(&mdev->dev->persist->pdev->dev, mdev->dev->numa_node);
        if (err)
                goto err_cq;
 
-       err = mlx4_en_map_buffer(&cq->wqres.buf);
-       if (err)
-               goto err_res;
-
        cq->buf = (struct mlx4_cqe *)cq->wqres.buf.direct.buf;
        *pcq = cq;
 
        return 0;
 
-err_res:
-       mlx4_free_hwq_res(mdev->dev, &cq->wqres, cq->buf_size);
 err_cq:
        kfree(cq);
        *pcq = NULL;
@@ -177,7 +171,6 @@ void mlx4_en_destroy_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq **pcq)
        struct mlx4_en_dev *mdev = priv->mdev;
        struct mlx4_en_cq *cq = *pcq;
 
-       mlx4_en_unmap_buffer(&cq->wqres.buf);
        mlx4_free_hwq_res(mdev->dev, &cq->wqres, cq->buf_size);
        if (mlx4_is_eq_vector_valid(mdev->dev, priv->port, cq->vector) &&
            cq->is_tx == RX)
index 8bd143d..92e0624 100644 (file)
@@ -2357,8 +2357,12 @@ out:
        }
 
        /* set offloads */
-       priv->dev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
-                                     NETIF_F_TSO | NETIF_F_GSO_UDP_TUNNEL;
+       priv->dev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+                                     NETIF_F_RXCSUM |
+                                     NETIF_F_TSO | NETIF_F_TSO6 |
+                                     NETIF_F_GSO_UDP_TUNNEL |
+                                     NETIF_F_GSO_UDP_TUNNEL_CSUM |
+                                     NETIF_F_GSO_PARTIAL;
 }
 
 static void mlx4_en_del_vxlan_offloads(struct work_struct *work)
@@ -2367,8 +2371,12 @@ static void mlx4_en_del_vxlan_offloads(struct work_struct *work)
        struct mlx4_en_priv *priv = container_of(work, struct mlx4_en_priv,
                                                 vxlan_del_task);
        /* unset offloads */
-       priv->dev->hw_enc_features &= ~(NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
-                                     NETIF_F_TSO | NETIF_F_GSO_UDP_TUNNEL);
+       priv->dev->hw_enc_features &= ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+                                       NETIF_F_RXCSUM |
+                                       NETIF_F_TSO | NETIF_F_TSO6 |
+                                       NETIF_F_GSO_UDP_TUNNEL |
+                                       NETIF_F_GSO_UDP_TUNNEL_CSUM |
+                                       NETIF_F_GSO_PARTIAL);
 
        ret = mlx4_SET_PORT_VXLAN(priv->mdev->dev, priv->port,
                                  VXLAN_STEER_BY_OUTER_MAC, 0);
@@ -2427,7 +2435,18 @@ static netdev_features_t mlx4_en_features_check(struct sk_buff *skb,
                                                netdev_features_t features)
 {
        features = vlan_features_check(skb, features);
-       return vxlan_features_check(skb, features);
+       features = vxlan_features_check(skb, features);
+
+       /* The ConnectX-3 doesn't support outer IPv6 checksums but it does
+        * support inner IPv6 checksums and segmentation, so we need to
+        * strip that feature if this is an IPv6 encapsulated frame.
+        */
+       if (skb->encapsulation &&
+           (skb->ip_summed == CHECKSUM_PARTIAL) &&
+           (ip_hdr(skb)->version != 4))
+               features &= ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
+
+       return features;
 }
 #endif
 
@@ -2909,7 +2928,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
 
        /* Allocate page for receive rings */
        err = mlx4_alloc_hwq_res(mdev->dev, &priv->res,
-                               MLX4_EN_PAGE_SIZE, MLX4_EN_PAGE_SIZE);
+                               MLX4_EN_PAGE_SIZE);
        if (err) {
                en_err(priv, "Failed to allocate page for rx qps\n");
                goto out;
@@ -2992,8 +3011,13 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
        }
 
        if (mdev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) {
-               dev->hw_features |= NETIF_F_GSO_UDP_TUNNEL;
-               dev->features    |= NETIF_F_GSO_UDP_TUNNEL;
+               dev->hw_features |= NETIF_F_GSO_UDP_TUNNEL |
+                                   NETIF_F_GSO_UDP_TUNNEL_CSUM |
+                                   NETIF_F_GSO_PARTIAL;
+               dev->features    |= NETIF_F_GSO_UDP_TUNNEL |
+                                   NETIF_F_GSO_UDP_TUNNEL_CSUM |
+                                   NETIF_F_GSO_PARTIAL;
+               dev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM;
        }
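
A note on the new bits: features named in gso_partial_features are only usable via the "GSO partial" path, where the stack segments and finalizes everything except what the NIC advertises it can handle per-segment. The assignments above are equivalent to this slightly factored form (illustrative restatement, not from the patch):

    netdev_features_t tunnel = NETIF_F_GSO_UDP_TUNNEL |
                               NETIF_F_GSO_UDP_TUNNEL_CSUM |
                               NETIF_F_GSO_PARTIAL;

    dev->hw_features |= tunnel;
    dev->features |= tunnel;
    /* only checksummed UDP tunnels need the partial path on ConnectX-3 */
    dev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM;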
 
        mdev->pndev[port] = dev;
index 02e925d..a6b0db0 100644 (file)
@@ -107,37 +107,6 @@ int mlx4_en_change_mcast_lb(struct mlx4_en_priv *priv, struct mlx4_qp *qp,
        return ret;
 }
 
-int mlx4_en_map_buffer(struct mlx4_buf *buf)
-{
-       struct page **pages;
-       int i;
-
-       if (BITS_PER_LONG == 64 || buf->nbufs == 1)
-               return 0;
-
-       pages = kmalloc(sizeof *pages * buf->nbufs, GFP_KERNEL);
-       if (!pages)
-               return -ENOMEM;
-
-       for (i = 0; i < buf->nbufs; ++i)
-               pages[i] = virt_to_page(buf->page_list[i].buf);
-
-       buf->direct.buf = vmap(pages, buf->nbufs, VM_MAP, PAGE_KERNEL);
-       kfree(pages);
-       if (!buf->direct.buf)
-               return -ENOMEM;
-
-       return 0;
-}
-
-void mlx4_en_unmap_buffer(struct mlx4_buf *buf)
-{
-       if (BITS_PER_LONG == 64 || buf->nbufs == 1)
-               return;
-
-       vunmap(buf->direct.buf);
-}
-
 void mlx4_en_sqp_event(struct mlx4_qp *qp, enum mlx4_event event)
 {
     return;
index b723e3b..8ef6875 100644 (file)
@@ -394,17 +394,11 @@ int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv,
 
        /* Allocate HW buffers on provided NUMA node */
        set_dev_node(&mdev->dev->persist->pdev->dev, node);
-       err = mlx4_alloc_hwq_res(mdev->dev, &ring->wqres,
-                                ring->buf_size, 2 * PAGE_SIZE);
+       err = mlx4_alloc_hwq_res(mdev->dev, &ring->wqres, ring->buf_size);
        set_dev_node(&mdev->dev->persist->pdev->dev, mdev->dev->numa_node);
        if (err)
                goto err_info;
 
-       err = mlx4_en_map_buffer(&ring->wqres.buf);
-       if (err) {
-               en_err(priv, "Failed to map RX buffer\n");
-               goto err_hwq;
-       }
        ring->buf = ring->wqres.buf.direct.buf;
 
        ring->hwtstamp_rx_filter = priv->hwtstamp_config.rx_filter;
@@ -412,8 +406,6 @@ int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv,
        *pring = ring;
        return 0;
 
-err_hwq:
-       mlx4_free_hwq_res(mdev->dev, &ring->wqres, ring->buf_size);
 err_info:
        vfree(ring->rx_info);
        ring->rx_info = NULL;
@@ -517,7 +509,6 @@ void mlx4_en_destroy_rx_ring(struct mlx4_en_priv *priv,
        struct mlx4_en_dev *mdev = priv->mdev;
        struct mlx4_en_rx_ring *ring = *pring;
 
-       mlx4_en_unmap_buffer(&ring->wqres.buf);
        mlx4_free_hwq_res(mdev->dev, &ring->wqres, size * stride + TXBB_SIZE);
        vfree(ring->rx_info);
        ring->rx_info = NULL;
index c0d7b72..f6e6157 100644 (file)
@@ -41,6 +41,7 @@
 #include <linux/vmalloc.h>
 #include <linux/tcp.h>
 #include <linux/ip.h>
+#include <linux/ipv6.h>
 #include <linux/moduleparam.h>
 
 #include "mlx4_en.h"
@@ -93,20 +94,13 @@ int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
 
        /* Allocate HW buffers on provided NUMA node */
        set_dev_node(&mdev->dev->persist->pdev->dev, node);
-       err = mlx4_alloc_hwq_res(mdev->dev, &ring->wqres, ring->buf_size,
-                                2 * PAGE_SIZE);
+       err = mlx4_alloc_hwq_res(mdev->dev, &ring->wqres, ring->buf_size);
        set_dev_node(&mdev->dev->persist->pdev->dev, mdev->dev->numa_node);
        if (err) {
                en_err(priv, "Failed allocating hwq resources\n");
                goto err_bounce;
        }
 
-       err = mlx4_en_map_buffer(&ring->wqres.buf);
-       if (err) {
-               en_err(priv, "Failed to map TX buffer\n");
-               goto err_hwq_res;
-       }
-
        ring->buf = ring->wqres.buf.direct.buf;
 
        en_dbg(DRV, priv, "Allocated TX ring (addr:%p) - buf:%p size:%d buf_size:%d dma:%llx\n",
@@ -117,7 +111,7 @@ int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
                                    MLX4_RESERVE_ETH_BF_QP);
        if (err) {
                en_err(priv, "failed reserving qp for TX ring\n");
-               goto err_map;
+               goto err_hwq_res;
        }
 
        err = mlx4_qp_alloc(mdev->dev, ring->qpn, &ring->qp, GFP_KERNEL);
@@ -154,8 +148,6 @@ int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
 
 err_reserve:
        mlx4_qp_release_range(mdev->dev, ring->qpn, 1);
-err_map:
-       mlx4_en_unmap_buffer(&ring->wqres.buf);
 err_hwq_res:
        mlx4_free_hwq_res(mdev->dev, &ring->wqres, ring->buf_size);
 err_bounce:
@@ -182,7 +174,6 @@ void mlx4_en_destroy_tx_ring(struct mlx4_en_priv *priv,
        mlx4_qp_remove(mdev->dev, &ring->qp);
        mlx4_qp_free(mdev->dev, &ring->qp);
        mlx4_qp_release_range(priv->mdev->dev, ring->qpn, 1);
-       mlx4_en_unmap_buffer(&ring->wqres.buf);
        mlx4_free_hwq_res(mdev->dev, &ring->wqres, ring->buf_size);
        kfree(ring->bounce_buf);
        ring->bounce_buf = NULL;
@@ -405,7 +396,6 @@ static bool mlx4_en_process_tx_cq(struct net_device *dev,
        u32 packets = 0;
        u32 bytes = 0;
        int factor = priv->cqe_factor;
-       u64 timestamp = 0;
        int done = 0;
        int budget = priv->tx_work_limit;
        u32 last_nr_txbb;
@@ -445,9 +435,12 @@ static bool mlx4_en_process_tx_cq(struct net_device *dev,
                new_index = be16_to_cpu(cqe->wqe_index) & size_mask;
 
                do {
+                       u64 timestamp = 0;
+
                        txbbs_skipped += last_nr_txbb;
                        ring_index = (ring_index + last_nr_txbb) & size_mask;
-                       if (ring->tx_info[ring_index].ts_requested)
+
+                       if (unlikely(ring->tx_info[ring_index].ts_requested))
                                timestamp = mlx4_en_get_cqe_ts(cqe);
 
                        /* free next descriptor */
@@ -918,8 +911,18 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
                                 tx_ind, fragptr);
 
        if (skb->encapsulation) {
-               struct iphdr *ipv4 = (struct iphdr *)skb_inner_network_header(skb);
-               if (ipv4->protocol == IPPROTO_TCP || ipv4->protocol == IPPROTO_UDP)
+               union {
+                       struct iphdr *v4;
+                       struct ipv6hdr *v6;
+                       unsigned char *hdr;
+               } ip;
+               u8 proto;
+
+               ip.hdr = skb_inner_network_header(skb);
+               proto = (ip.v4->version == 4) ? ip.v4->protocol :
+                                               ip.v6->nexthdr;
+
+               if (proto == IPPROTO_TCP || proto == IPPROTO_UDP)
                        op_own |= cpu_to_be32(MLX4_WQE_CTRL_IIP | MLX4_WQE_CTRL_ILP);
                else
                        op_own |= cpu_to_be32(MLX4_WQE_CTRL_IIP);
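
The union above is safe to dereference before the header type is known because IPv4 and IPv6 both carry their 4-bit version field in the high nibble of the first octet; an equivalent open-coded form (sketch):

    unsigned char *hdr = skb_inner_network_header(skb);
    u8 version = hdr[0] >> 4;    /* same wire position for v4 and v6 */
    u8 proto = (version == 4) ? ((struct iphdr *)hdr)->protocol
                              : ((struct ipv6hdr *)hdr)->nexthdr;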
index 63b1aea..cc84e09 100644 (file)
@@ -672,8 +672,6 @@ void mlx4_en_fill_qp_context(struct mlx4_en_priv *priv, int size, int stride,
                int is_tx, int rss, int qpn, int cqn, int user_prio,
                struct mlx4_qp_context *context);
 void mlx4_en_sqp_event(struct mlx4_qp *qp, enum mlx4_event event);
-int mlx4_en_map_buffer(struct mlx4_buf *buf);
-void mlx4_en_unmap_buffer(struct mlx4_buf *buf);
 int mlx4_en_change_mcast_lb(struct mlx4_en_priv *priv, struct mlx4_qp *qp,
                            int loopback);
 void mlx4_en_calc_rx_buf(struct net_device *dev);
index 1cf722e..559d11a 100644 (file)
@@ -14,6 +14,7 @@ config MLX5_CORE_EN
        bool "Mellanox Technologies ConnectX-4 Ethernet support"
        depends on NETDEVICES && ETHERNET && PCI && MLX5_CORE
        select PTP_1588_CLOCK
+       select VXLAN if MLX5_CORE=y
        default n
        ---help---
          Ethernet support in Mellanox Technologies ConnectX-4 NIC.
index 4fc45ee..b531d4f 100644 (file)
@@ -6,6 +6,6 @@ mlx5_core-y :=  main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \
 
 mlx5_core-$(CONFIG_MLX5_CORE_EN) += wq.o eswitch.o \
                en_main.o en_fs.o en_ethtool.o en_tx.o en_rx.o \
-               en_txrx.o en_clock.o vxlan.o en_tc.o
+               en_txrx.o en_clock.o vxlan.o en_tc.o en_arfs.o
 
 mlx5_core-$(CONFIG_MLX5_CORE_EN_DCB) +=  en_dcbnl.o
index 6e24e82..bfa5daa 100644 (file)
@@ -46,6 +46,9 @@
 #include <linux/rhashtable.h>
 #include "wq.h"
 #include "mlx5_core.h"
+#include "en_stats.h"
+
+#define MLX5_SET_CFG(p, f, v) MLX5_SET(create_flow_group_in, p, f, v)
 
 #define MLX5E_MAX_NUM_TC       8
 
@@ -148,245 +151,6 @@ struct mlx5e_umr_wqe {
 #define MLX5E_MIN_BW_ALLOC 1   /* Min percentage of BW allocation */
 #endif
 
-static const char vport_strings[][ETH_GSTRING_LEN] = {
-       /* vport statistics */
-       "rx_packets",
-       "rx_bytes",
-       "tx_packets",
-       "tx_bytes",
-       "rx_error_packets",
-       "rx_error_bytes",
-       "tx_error_packets",
-       "tx_error_bytes",
-       "rx_unicast_packets",
-       "rx_unicast_bytes",
-       "tx_unicast_packets",
-       "tx_unicast_bytes",
-       "rx_multicast_packets",
-       "rx_multicast_bytes",
-       "tx_multicast_packets",
-       "tx_multicast_bytes",
-       "rx_broadcast_packets",
-       "rx_broadcast_bytes",
-       "tx_broadcast_packets",
-       "tx_broadcast_bytes",
-
-       /* SW counters */
-       "tso_packets",
-       "tso_bytes",
-       "tso_inner_packets",
-       "tso_inner_bytes",
-       "lro_packets",
-       "lro_bytes",
-       "rx_csum_good",
-       "rx_csum_none",
-       "rx_csum_sw",
-       "tx_csum_offload",
-       "tx_csum_inner",
-       "tx_queue_stopped",
-       "tx_queue_wake",
-       "tx_queue_dropped",
-       "rx_wqe_err",
-       "rx_mpwqe_filler",
-       "rx_mpwqe_frag",
-       "rx_buff_alloc_err",
-};
-
-struct mlx5e_vport_stats {
-       /* HW counters */
-       u64 rx_packets;
-       u64 rx_bytes;
-       u64 tx_packets;
-       u64 tx_bytes;
-       u64 rx_error_packets;
-       u64 rx_error_bytes;
-       u64 tx_error_packets;
-       u64 tx_error_bytes;
-       u64 rx_unicast_packets;
-       u64 rx_unicast_bytes;
-       u64 tx_unicast_packets;
-       u64 tx_unicast_bytes;
-       u64 rx_multicast_packets;
-       u64 rx_multicast_bytes;
-       u64 tx_multicast_packets;
-       u64 tx_multicast_bytes;
-       u64 rx_broadcast_packets;
-       u64 rx_broadcast_bytes;
-       u64 tx_broadcast_packets;
-       u64 tx_broadcast_bytes;
-
-       /* SW counters */
-       u64 tso_packets;
-       u64 tso_bytes;
-       u64 tso_inner_packets;
-       u64 tso_inner_bytes;
-       u64 lro_packets;
-       u64 lro_bytes;
-       u64 rx_csum_good;
-       u64 rx_csum_none;
-       u64 rx_csum_sw;
-       u64 tx_csum_offload;
-       u64 tx_csum_inner;
-       u64 tx_queue_stopped;
-       u64 tx_queue_wake;
-       u64 tx_queue_dropped;
-       u64 rx_wqe_err;
-       u64 rx_mpwqe_filler;
-       u64 rx_mpwqe_frag;
-       u64 rx_buff_alloc_err;
-
-#define NUM_VPORT_COUNTERS     38
-};
-
-static const char pport_strings[][ETH_GSTRING_LEN] = {
-       /* IEEE802.3 counters */
-       "frames_tx",
-       "frames_rx",
-       "check_seq_err",
-       "alignment_err",
-       "octets_tx",
-       "octets_received",
-       "multicast_xmitted",
-       "broadcast_xmitted",
-       "multicast_rx",
-       "broadcast_rx",
-       "in_range_len_errors",
-       "out_of_range_len",
-       "too_long_errors",
-       "symbol_err",
-       "mac_control_tx",
-       "mac_control_rx",
-       "unsupported_op_rx",
-       "pause_ctrl_rx",
-       "pause_ctrl_tx",
-
-       /* RFC2863 counters */
-       "in_octets",
-       "in_ucast_pkts",
-       "in_discards",
-       "in_errors",
-       "in_unknown_protos",
-       "out_octets",
-       "out_ucast_pkts",
-       "out_discards",
-       "out_errors",
-       "in_multicast_pkts",
-       "in_broadcast_pkts",
-       "out_multicast_pkts",
-       "out_broadcast_pkts",
-
-       /* RFC2819 counters */
-       "drop_events",
-       "octets",
-       "pkts",
-       "broadcast_pkts",
-       "multicast_pkts",
-       "crc_align_errors",
-       "undersize_pkts",
-       "oversize_pkts",
-       "fragments",
-       "jabbers",
-       "collisions",
-       "p64octets",
-       "p65to127octets",
-       "p128to255octets",
-       "p256to511octets",
-       "p512to1023octets",
-       "p1024to1518octets",
-       "p1519to2047octets",
-       "p2048to4095octets",
-       "p4096to8191octets",
-       "p8192to10239octets",
-};
-
-#define NUM_IEEE_802_3_COUNTERS                19
-#define NUM_RFC_2863_COUNTERS          13
-#define NUM_RFC_2819_COUNTERS          21
-#define NUM_PPORT_COUNTERS             (NUM_IEEE_802_3_COUNTERS + \
-                                        NUM_RFC_2863_COUNTERS + \
-                                        NUM_RFC_2819_COUNTERS)
-
-struct mlx5e_pport_stats {
-       __be64 IEEE_802_3_counters[NUM_IEEE_802_3_COUNTERS];
-       __be64 RFC_2863_counters[NUM_RFC_2863_COUNTERS];
-       __be64 RFC_2819_counters[NUM_RFC_2819_COUNTERS];
-};
-
-static const char qcounter_stats_strings[][ETH_GSTRING_LEN] = {
-       "rx_out_of_buffer",
-};
-
-struct mlx5e_qcounter_stats {
-       u32 rx_out_of_buffer;
-#define NUM_Q_COUNTERS 1
-};
-
-static const char rq_stats_strings[][ETH_GSTRING_LEN] = {
-       "packets",
-       "bytes",
-       "csum_none",
-       "csum_sw",
-       "lro_packets",
-       "lro_bytes",
-       "wqe_err",
-       "mpwqe_filler",
-       "mpwqe_frag",
-       "buff_alloc_err",
-};
-
-struct mlx5e_rq_stats {
-       u64 packets;
-       u64 bytes;
-       u64 csum_none;
-       u64 csum_sw;
-       u64 lro_packets;
-       u64 lro_bytes;
-       u64 wqe_err;
-       u64 mpwqe_filler;
-       u64 mpwqe_frag;
-       u64 buff_alloc_err;
-#define NUM_RQ_STATS 10
-};
-
-static const char sq_stats_strings[][ETH_GSTRING_LEN] = {
-       "packets",
-       "bytes",
-       "tso_packets",
-       "tso_bytes",
-       "tso_inner_packets",
-       "tso_inner_bytes",
-       "csum_offload_inner",
-       "nop",
-       "csum_offload_none",
-       "stopped",
-       "wake",
-       "dropped",
-};
-
-struct mlx5e_sq_stats {
-       /* commonly accessed in data path */
-       u64 packets;
-       u64 bytes;
-       u64 tso_packets;
-       u64 tso_bytes;
-       u64 tso_inner_packets;
-       u64 tso_inner_bytes;
-       u64 csum_offload_inner;
-       u64 nop;
-       /* less likely accessed in data path */
-       u64 csum_offload_none;
-       u64 stopped;
-       u64 wake;
-       u64 dropped;
-#define NUM_SQ_STATS 12
-};
-
-struct mlx5e_stats {
-       struct mlx5e_vport_stats   vport;
-       struct mlx5e_pport_stats   pport;
-       struct mlx5e_qcounter_stats qcnt;
-};
-
 struct mlx5e_params {
        u8  log_sq_size;
        u8  rq_wq_type;
@@ -404,6 +168,7 @@ struct mlx5e_params {
        u8  rss_hfunc;
        u8  toeplitz_hash_key[40];
        u32 indirection_rqt[MLX5E_INDIR_RQT_SIZE];
+       bool vlan_strip_disable;
 #ifdef CONFIG_MLX5_CORE_EN_DCB
        struct ieee_ets ets;
 #endif
@@ -622,42 +387,42 @@ enum mlx5e_traffic_types {
        MLX5E_TT_IPV6,
        MLX5E_TT_ANY,
        MLX5E_NUM_TT,
+       MLX5E_NUM_INDIR_TIRS = MLX5E_TT_ANY,
 };
 
-#define IS_HASHING_TT(tt) (tt != MLX5E_TT_ANY)
+enum {
+       MLX5E_STATE_ASYNC_EVENTS_ENABLE,
+       MLX5E_STATE_OPENED,
+       MLX5E_STATE_DESTROYING,
+};
 
-enum mlx5e_rqt_ix {
-       MLX5E_INDIRECTION_RQT,
-       MLX5E_SINGLE_RQ_RQT,
-       MLX5E_NUM_RQT,
+struct mlx5e_vxlan_db {
+       spinlock_t                      lock; /* protect vxlan table */
+       struct radix_tree_root          tree;
 };
 
-struct mlx5e_eth_addr_info {
+struct mlx5e_l2_rule {
        u8  addr[ETH_ALEN + 2];
-       u32 tt_vec;
-       struct mlx5_flow_rule *ft_rule[MLX5E_NUM_TT];
+       struct mlx5_flow_rule *rule;
 };
 
-#define MLX5E_ETH_ADDR_HASH_SIZE (1 << BITS_PER_BYTE)
-
-struct mlx5e_eth_addr_db {
-       struct hlist_head          netdev_uc[MLX5E_ETH_ADDR_HASH_SIZE];
-       struct hlist_head          netdev_mc[MLX5E_ETH_ADDR_HASH_SIZE];
-       struct mlx5e_eth_addr_info broadcast;
-       struct mlx5e_eth_addr_info allmulti;
-       struct mlx5e_eth_addr_info promisc;
-       bool                       broadcast_enabled;
-       bool                       allmulti_enabled;
-       bool                       promisc_enabled;
+struct mlx5e_flow_table {
+       int num_groups;
+       struct mlx5_flow_table *t;
+       struct mlx5_flow_group **g;
 };
 
-enum {
-       MLX5E_STATE_ASYNC_EVENTS_ENABLE,
-       MLX5E_STATE_OPENED,
-       MLX5E_STATE_DESTROYING,
+#define MLX5E_L2_ADDR_HASH_SIZE BIT(BITS_PER_BYTE)
+
+struct mlx5e_tc_table {
+       struct mlx5_flow_table          *t;
+
+       struct rhashtable_params        ht_params;
+       struct rhashtable               ht;
 };
 
-struct mlx5e_vlan_db {
+struct mlx5e_vlan_table {
+       struct mlx5e_flow_table         ft;
        unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
        struct mlx5_flow_rule   *active_vlans_rule[VLAN_N_VID];
        struct mlx5_flow_rule   *untagged_rule;
@@ -665,29 +430,74 @@ struct mlx5e_vlan_db {
        bool          filter_disabled;
 };
 
-struct mlx5e_vxlan_db {
-       spinlock_t                      lock; /* protect vxlan table */
-       struct radix_tree_root          tree;
+struct mlx5e_l2_table {
+       struct mlx5e_flow_table    ft;
+       struct hlist_head          netdev_uc[MLX5E_L2_ADDR_HASH_SIZE];
+       struct hlist_head          netdev_mc[MLX5E_L2_ADDR_HASH_SIZE];
+       struct mlx5e_l2_rule       broadcast;
+       struct mlx5e_l2_rule       allmulti;
+       struct mlx5e_l2_rule       promisc;
+       bool                       broadcast_enabled;
+       bool                       allmulti_enabled;
+       bool                       promisc_enabled;
 };
 
-struct mlx5e_flow_table {
-       int num_groups;
-       struct mlx5_flow_table          *t;
-       struct mlx5_flow_group          **g;
+/* L3/L4 traffic type classifier */
+struct mlx5e_ttc_table {
+       struct mlx5e_flow_table  ft;
+       struct mlx5_flow_rule    *rules[MLX5E_NUM_TT];
 };
 
-struct mlx5e_tc_flow_table {
-       struct mlx5_flow_table          *t;
+#define ARFS_HASH_SHIFT BITS_PER_BYTE
+#define ARFS_HASH_SIZE BIT(BITS_PER_BYTE)
+struct arfs_table {
+       struct mlx5e_flow_table  ft;
+       struct mlx5_flow_rule    *default_rule;
+       struct hlist_head        rules_hash[ARFS_HASH_SIZE];
+};
 
-       struct rhashtable_params        ht_params;
-       struct rhashtable               ht;
+enum arfs_type {
+       ARFS_IPV4_TCP,
+       ARFS_IPV6_TCP,
+       ARFS_IPV4_UDP,
+       ARFS_IPV6_UDP,
+       ARFS_NUM_TYPES,
+};
+
+struct mlx5e_arfs_tables {
+       struct arfs_table arfs_tables[ARFS_NUM_TYPES];
+       /* Protect aRFS rules list */
+       spinlock_t                     arfs_lock;
+       struct list_head               rules;
+       int                            last_filter_id;
+       struct workqueue_struct        *wq;
+};
+
+/* NIC prio FTS */
+enum {
+       MLX5E_VLAN_FT_LEVEL = 0,
+       MLX5E_L2_FT_LEVEL,
+       MLX5E_TTC_FT_LEVEL,
+       MLX5E_ARFS_FT_LEVEL
+};
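
Read together with mlx5e_flow_steering below, these levels order the receive pipeline; a sketch of the intended traversal (levels gate which table may forward into which):

    /* VLAN (level 0) -> L2 (1) -> TTC (2) -> aRFS (3)
     * Each stage matches and forwards deeper or hits its default rule;
     * enabling aRFS re-points the TTC rules at the aRFS tables. */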
+
+struct mlx5e_flow_steering {
+       struct mlx5_flow_namespace      *ns;
+       struct mlx5e_tc_table           tc;
+       struct mlx5e_vlan_table         vlan;
+       struct mlx5e_l2_table           l2;
+       struct mlx5e_ttc_table          ttc;
+       struct mlx5e_arfs_tables        arfs;
 };
 
-struct mlx5e_flow_tables {
-       struct mlx5_flow_namespace      *ns;
-       struct mlx5e_tc_flow_table      tc;
-       struct mlx5e_flow_table         vlan;
-       struct mlx5e_flow_table         main;
+struct mlx5e_direct_tir {
+       u32              tirn;
+       u32              rqtn;
+};
+
+enum {
+       MLX5E_TC_PRIO = 0,
+       MLX5E_NIC_PRIO
 };
 
 struct mlx5e_priv {
@@ -707,15 +517,15 @@ struct mlx5e_priv {
 
        struct mlx5e_channel     **channel;
        u32                        tisn[MLX5E_MAX_NUM_TC];
-       u32                        rqtn[MLX5E_NUM_RQT];
-       u32                        tirn[MLX5E_NUM_TT];
+       u32                        indir_rqtn;
+       u32                        indir_tirn[MLX5E_NUM_INDIR_TIRS];
+       struct mlx5e_direct_tir    direct_tir[MLX5E_MAX_NUM_CHANNELS];
 
-       struct mlx5e_flow_tables   fts;
-       struct mlx5e_eth_addr_db   eth_addr;
-       struct mlx5e_vlan_db       vlan;
+       struct mlx5e_flow_steering fs;
        struct mlx5e_vxlan_db      vxlan;
 
        struct mlx5e_params        params;
+       struct workqueue_struct    *wq;
        struct work_struct         update_carrier_work;
        struct work_struct         set_rx_mode_work;
        struct delayed_work        update_stats_work;
@@ -747,7 +557,7 @@ enum mlx5e_link_mode {
        MLX5E_100GBASE_KR4       = 22,
        MLX5E_100GBASE_LR4       = 23,
        MLX5E_100BASE_TX         = 24,
-       MLX5E_100BASE_T          = 25,
+       MLX5E_1000BASE_T         = 25,
        MLX5E_10GBASE_T          = 26,
        MLX5E_25GBASE_CR         = 27,
        MLX5E_25GBASE_KR         = 28,
@@ -794,9 +604,10 @@ struct mlx5_cqe64 *mlx5e_get_cqe(struct mlx5e_cq *cq);
 
 void mlx5e_update_stats(struct mlx5e_priv *priv);
 
-int mlx5e_create_flow_tables(struct mlx5e_priv *priv);
-void mlx5e_destroy_flow_tables(struct mlx5e_priv *priv);
-void mlx5e_init_eth_addr(struct mlx5e_priv *priv);
+int mlx5e_create_flow_steering(struct mlx5e_priv *priv);
+void mlx5e_destroy_flow_steering(struct mlx5e_priv *priv);
+void mlx5e_init_l2_addr(struct mlx5e_priv *priv);
+void mlx5e_destroy_flow_table(struct mlx5e_flow_table *ft);
 void mlx5e_set_rx_mode_work(struct work_struct *work);
 
 void mlx5e_fill_hwstamp(struct mlx5e_tstamp *clock, u64 timestamp,
@@ -813,7 +624,9 @@ int mlx5e_vlan_rx_kill_vid(struct net_device *dev, __always_unused __be16 proto,
 void mlx5e_enable_vlan_filter(struct mlx5e_priv *priv);
 void mlx5e_disable_vlan_filter(struct mlx5e_priv *priv);
 
-int mlx5e_redirect_rqt(struct mlx5e_priv *priv, enum mlx5e_rqt_ix rqt_ix);
+int mlx5e_modify_rqs_vsd(struct mlx5e_priv *priv, bool vsd);
+
+int mlx5e_redirect_rqt(struct mlx5e_priv *priv, u32 rqtn, int sz, int ix);
 void mlx5e_build_tir_ctx_hash(void *tirc, struct mlx5e_priv *priv);
 
 int mlx5e_open_locked(struct net_device *netdev);
@@ -871,6 +684,32 @@ extern const struct dcbnl_rtnl_ops mlx5e_dcbnl_ops;
 int mlx5e_dcbnl_ieee_setets_core(struct mlx5e_priv *priv, struct ieee_ets *ets);
 #endif
 
+#ifndef CONFIG_RFS_ACCEL
+static inline int mlx5e_arfs_create_tables(struct mlx5e_priv *priv)
+{
+       return 0;
+}
+
+static inline void mlx5e_arfs_destroy_tables(struct mlx5e_priv *priv) {}
+
+static inline int mlx5e_arfs_enable(struct mlx5e_priv *priv)
+{
+       return -ENOTSUPP;
+}
+
+static inline int mlx5e_arfs_disable(struct mlx5e_priv *priv)
+{
+       return -ENOTSUPP;
+}
+#else
+int mlx5e_arfs_create_tables(struct mlx5e_priv *priv);
+void mlx5e_arfs_destroy_tables(struct mlx5e_priv *priv);
+int mlx5e_arfs_enable(struct mlx5e_priv *priv);
+int mlx5e_arfs_disable(struct mlx5e_priv *priv);
+int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
+                       u16 rxq_index, u32 flow_id);
+#endif
+
 u16 mlx5e_get_max_inline_cap(struct mlx5_core_dev *mdev);
 
 #endif /* __MLX5_EN_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
new file mode 100644 (file)
index 0000000..3515e78
--- /dev/null
@@ -0,0 +1,752 @@
+/*
+ * Copyright (c) 2016, Mellanox Technologies. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifdef CONFIG_RFS_ACCEL
+
+#include <linux/hash.h>
+#include <linux/mlx5/fs.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include "en.h"
+
+struct arfs_tuple {
+       __be16 etype;
+       u8     ip_proto;
+       union {
+               __be32 src_ipv4;
+               struct in6_addr src_ipv6;
+       };
+       union {
+               __be32 dst_ipv4;
+               struct in6_addr dst_ipv6;
+       };
+       __be16 src_port;
+       __be16 dst_port;
+};
+
+struct arfs_rule {
+       struct mlx5e_priv       *priv;
+       struct work_struct      arfs_work;
+       struct mlx5_flow_rule   *rule;
+       struct hlist_node       hlist;
+       int                     rxq;
+       /* Flow ID passed to ndo_rx_flow_steer */
+       int                     flow_id;
+       /* Filter ID returned by ndo_rx_flow_steer */
+       int                     filter_id;
+       struct arfs_tuple       tuple;
+};
+
+#define mlx5e_for_each_arfs_rule(hn, tmp, arfs_tables, i, j) \
+       for (i = 0; i < ARFS_NUM_TYPES; i++) \
+               mlx5e_for_each_hash_arfs_rule(hn, tmp, arfs_tables[i].rules_hash, j)
+
+#define mlx5e_for_each_hash_arfs_rule(hn, tmp, hash, j) \
+       for (j = 0; j < ARFS_HASH_SIZE; j++) \
+               hlist_for_each_entry_safe(hn, tmp, &hash[j], hlist)
+
+static enum mlx5e_traffic_types arfs_get_tt(enum arfs_type type)
+{
+       switch (type) {
+       case ARFS_IPV4_TCP:
+               return MLX5E_TT_IPV4_TCP;
+       case ARFS_IPV4_UDP:
+               return MLX5E_TT_IPV4_UDP;
+       case ARFS_IPV6_TCP:
+               return MLX5E_TT_IPV6_TCP;
+       case ARFS_IPV6_UDP:
+               return MLX5E_TT_IPV6_UDP;
+       default:
+               return -EINVAL;
+       }
+}
+
+static int arfs_disable(struct mlx5e_priv *priv)
+{
+       struct mlx5_flow_destination dest;
+       u32 *tirn = priv->indir_tirn;
+       int err = 0;
+       int tt;
+       int i;
+
+       dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
+       for (i = 0; i < ARFS_NUM_TYPES; i++) {
+               dest.tir_num = tirn[i];
+               tt = arfs_get_tt(i);
+               /* Modify ttc rules destination to bypass the aRFS tables */
+               err = mlx5_modify_rule_destination(priv->fs.ttc.rules[tt],
+                                                  &dest);
+               if (err) {
+                       netdev_err(priv->netdev,
+                                  "%s: modify ttc destination failed\n",
+                                  __func__);
+                       return err;
+               }
+       }
+       return 0;
+}
+
+static void arfs_del_rules(struct mlx5e_priv *priv);
+
+int mlx5e_arfs_disable(struct mlx5e_priv *priv)
+{
+       arfs_del_rules(priv);
+
+       return arfs_disable(priv);
+}
+
+int mlx5e_arfs_enable(struct mlx5e_priv *priv)
+{
+       struct mlx5_flow_destination dest;
+       int err = 0;
+       int tt;
+       int i;
+
+       dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+       for (i = 0; i < ARFS_NUM_TYPES; i++) {
+               dest.ft = priv->fs.arfs.arfs_tables[i].ft.t;
+               tt = arfs_get_tt(i);
+               /* Modify ttc rules destination to point to the aRFS FTs */
+               err = mlx5_modify_rule_destination(priv->fs.ttc.rules[tt],
+                                                  &dest);
+               if (err) {
+                       netdev_err(priv->netdev,
+                                  "%s: modify ttc destination failed err=%d\n",
+                                  __func__, err);
+                       arfs_disable(priv);
+                       return err;
+               }
+       }
+       return 0;
+}
+
+static void arfs_destroy_table(struct arfs_table *arfs_t)
+{
+       mlx5_del_flow_rule(arfs_t->default_rule);
+       mlx5e_destroy_flow_table(&arfs_t->ft);
+}
+
+void mlx5e_arfs_destroy_tables(struct mlx5e_priv *priv)
+{
+       int i;
+
+       if (!(priv->netdev->hw_features & NETIF_F_NTUPLE))
+               return;
+
+       arfs_del_rules(priv);
+       destroy_workqueue(priv->fs.arfs.wq);
+       for (i = 0; i < ARFS_NUM_TYPES; i++) {
+               if (!IS_ERR_OR_NULL(priv->fs.arfs.arfs_tables[i].ft.t))
+                       arfs_destroy_table(&priv->fs.arfs.arfs_tables[i]);
+       }
+}
+
+static int arfs_add_default_rule(struct mlx5e_priv *priv,
+                                enum arfs_type type)
+{
+       struct arfs_table *arfs_t = &priv->fs.arfs.arfs_tables[type];
+       struct mlx5_flow_destination dest;
+       u8 match_criteria_enable = 0;
+       u32 *tirn = priv->indir_tirn;
+       u32 *match_criteria;
+       u32 *match_value;
+       int err = 0;
+
+       match_value     = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+       match_criteria  = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+       if (!match_value || !match_criteria) {
+               netdev_err(priv->netdev, "%s: alloc failed\n", __func__);
+               err = -ENOMEM;
+               goto out;
+       }
+
+       dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
+       switch (type) {
+       case ARFS_IPV4_TCP:
+               dest.tir_num = tirn[MLX5E_TT_IPV4_TCP];
+               break;
+       case ARFS_IPV4_UDP:
+               dest.tir_num = tirn[MLX5E_TT_IPV4_UDP];
+               break;
+       case ARFS_IPV6_TCP:
+               dest.tir_num = tirn[MLX5E_TT_IPV6_TCP];
+               break;
+       case ARFS_IPV6_UDP:
+               dest.tir_num = tirn[MLX5E_TT_IPV6_UDP];
+               break;
+       default:
+               err = -EINVAL;
+               goto out;
+       }
+
+       arfs_t->default_rule = mlx5_add_flow_rule(arfs_t->ft.t, match_criteria_enable,
+                                                 match_criteria, match_value,
+                                                 MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
+                                                 MLX5_FS_DEFAULT_FLOW_TAG,
+                                                 &dest);
+       if (IS_ERR(arfs_t->default_rule)) {
+               err = PTR_ERR(arfs_t->default_rule);
+               arfs_t->default_rule = NULL;
+               netdev_err(priv->netdev, "%s: add rule failed, arfs type=%d\n",
+                          __func__, type);
+       }
+out:
+       kvfree(match_criteria);
+       kvfree(match_value);
+       return err;
+}
+
+#define MLX5E_ARFS_NUM_GROUPS  2
+#define MLX5E_ARFS_GROUP1_SIZE BIT(12)
+#define MLX5E_ARFS_GROUP2_SIZE BIT(0)
+#define MLX5E_ARFS_TABLE_SIZE  (MLX5E_ARFS_GROUP1_SIZE +\
+                                MLX5E_ARFS_GROUP2_SIZE)
+static int arfs_create_groups(struct mlx5e_flow_table *ft,
+                             enum  arfs_type type)
+{
+       int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+       void *outer_headers_c;
+       int ix = 0;
+       u32 *in;
+       int err;
+       u8 *mc;
+
+       ft->g = kcalloc(MLX5E_ARFS_NUM_GROUPS,
+                       sizeof(*ft->g), GFP_KERNEL);
+       in = mlx5_vzalloc(inlen);
+       if (!in || !ft->g) {
+               kvfree(ft->g);
+               kvfree(in);
+               return -ENOMEM;
+       }
+
+       mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
+       outer_headers_c = MLX5_ADDR_OF(fte_match_param, mc,
+                                      outer_headers);
+       MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, ethertype);
+       switch (type) {
+       case ARFS_IPV4_TCP:
+       case ARFS_IPV6_TCP:
+               MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, tcp_dport);
+               MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, tcp_sport);
+               break;
+       case ARFS_IPV4_UDP:
+       case ARFS_IPV6_UDP:
+               MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, udp_dport);
+               MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, udp_sport);
+               break;
+       default:
+               err = -EINVAL;
+               goto out;
+       }
+
+       switch (type) {
+       case ARFS_IPV4_TCP:
+       case ARFS_IPV4_UDP:
+               MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c,
+                                src_ipv4_src_ipv6.ipv4_layout.ipv4);
+               MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c,
+                                dst_ipv4_dst_ipv6.ipv4_layout.ipv4);
+               break;
+       case ARFS_IPV6_TCP:
+       case ARFS_IPV6_UDP:
+               memset(MLX5_ADDR_OF(fte_match_set_lyr_2_4, outer_headers_c,
+                                   src_ipv4_src_ipv6.ipv6_layout.ipv6),
+                      0xff, 16);
+               memset(MLX5_ADDR_OF(fte_match_set_lyr_2_4, outer_headers_c,
+                                   dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
+                      0xff, 16);
+               break;
+       default:
+               err = -EINVAL;
+               goto out;
+       }
+
+       MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+       MLX5_SET_CFG(in, start_flow_index, ix);
+       ix += MLX5E_ARFS_GROUP1_SIZE;
+       MLX5_SET_CFG(in, end_flow_index, ix - 1);
+       ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
+       if (IS_ERR(ft->g[ft->num_groups]))
+               goto err;
+       ft->num_groups++;
+
+       memset(in, 0, inlen);
+       MLX5_SET_CFG(in, start_flow_index, ix);
+       ix += MLX5E_ARFS_GROUP2_SIZE;
+       MLX5_SET_CFG(in, end_flow_index, ix - 1);
+       ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
+       if (IS_ERR(ft->g[ft->num_groups]))
+               goto err;
+       ft->num_groups++;
+
+       kvfree(in);
+       return 0;
+
+err:
+       err = PTR_ERR(ft->g[ft->num_groups]);
+       ft->g[ft->num_groups] = NULL;
+out:
+       kvfree(in);
+
+       return err;
+}
+
+static int arfs_create_table(struct mlx5e_priv *priv,
+                            enum arfs_type type)
+{
+       struct mlx5e_arfs_tables *arfs = &priv->fs.arfs;
+       struct mlx5e_flow_table *ft = &arfs->arfs_tables[type].ft;
+       int err;
+
+       ft->t = mlx5_create_flow_table(priv->fs.ns, MLX5E_NIC_PRIO,
+                                      MLX5E_ARFS_TABLE_SIZE, MLX5E_ARFS_FT_LEVEL);
+       if (IS_ERR(ft->t)) {
+               err = PTR_ERR(ft->t);
+               ft->t = NULL;
+               return err;
+       }
+
+       err = arfs_create_groups(ft, type);
+       if (err)
+               goto err;
+
+       err = arfs_add_default_rule(priv, type);
+       if (err)
+               goto err;
+
+       return 0;
+err:
+       mlx5e_destroy_flow_table(ft);
+       return err;
+}
+
+int mlx5e_arfs_create_tables(struct mlx5e_priv *priv)
+{
+       int err = 0;
+       int i;
+
+       if (!(priv->netdev->hw_features & NETIF_F_NTUPLE))
+               return 0;
+
+       spin_lock_init(&priv->fs.arfs.arfs_lock);
+       INIT_LIST_HEAD(&priv->fs.arfs.rules);
+       priv->fs.arfs.wq = create_singlethread_workqueue("mlx5e_arfs");
+       if (!priv->fs.arfs.wq)
+               return -ENOMEM;
+
+       for (i = 0; i < ARFS_NUM_TYPES; i++) {
+               err = arfs_create_table(priv, i);
+               if (err)
+                       goto err;
+       }
+       return 0;
+err:
+       mlx5e_arfs_destroy_tables(priv);
+       return err;
+}
+
+#define MLX5E_ARFS_EXPIRY_QUOTA 60
+
+static void arfs_may_expire_flow(struct mlx5e_priv *priv)
+{
+       struct arfs_rule *arfs_rule;
+       struct hlist_node *htmp;
+       int quota = 0;
+       int i;
+       int j;
+
+       HLIST_HEAD(del_list);
+       spin_lock_bh(&priv->fs.arfs.arfs_lock);
+       mlx5e_for_each_arfs_rule(arfs_rule, htmp, priv->fs.arfs.arfs_tables, i, j) {
+               if (quota++ > MLX5E_ARFS_EXPIRY_QUOTA)
+                       break;
+               if (!work_pending(&arfs_rule->arfs_work) &&
+                   rps_may_expire_flow(priv->netdev,
+                                       arfs_rule->rxq, arfs_rule->flow_id,
+                                       arfs_rule->filter_id)) {
+                       hlist_del_init(&arfs_rule->hlist);
+                       hlist_add_head(&arfs_rule->hlist, &del_list);
+               }
+       }
+       spin_unlock_bh(&priv->fs.arfs.arfs_lock);
+       hlist_for_each_entry_safe(arfs_rule, htmp, &del_list, hlist) {
+               if (arfs_rule->rule)
+                       mlx5_del_flow_rule(arfs_rule->rule);
+               hlist_del(&arfs_rule->hlist);
+               kfree(arfs_rule);
+       }
+}
+
+static void arfs_del_rules(struct mlx5e_priv *priv)
+{
+       struct hlist_node *htmp;
+       struct arfs_rule *rule;
+       int i;
+       int j;
+
+       HLIST_HEAD(del_list);
+       spin_lock_bh(&priv->fs.arfs.arfs_lock);
+       mlx5e_for_each_arfs_rule(rule, htmp, priv->fs.arfs.arfs_tables, i, j) {
+               hlist_del_init(&rule->hlist);
+               hlist_add_head(&rule->hlist, &del_list);
+       }
+       spin_unlock_bh(&priv->fs.arfs.arfs_lock);
+
+       hlist_for_each_entry_safe(rule, htmp, &del_list, hlist) {
+               cancel_work_sync(&rule->arfs_work);
+               if (rule->rule)
+                       mlx5_del_flow_rule(rule->rule);
+               hlist_del(&rule->hlist);
+               kfree(rule);
+       }
+}
+
+static struct hlist_head *
+arfs_hash_bucket(struct arfs_table *arfs_t, __be16 src_port,
+                __be16 dst_port)
+{
+       unsigned long l;
+       int bucket_idx;
+
+       l = (__force unsigned long)src_port |
+           ((__force unsigned long)dst_port << 2);
+
+       bucket_idx = hash_long(l, ARFS_HASH_SHIFT);
+
+       return &arfs_t->rules_hash[bucket_idx];
+}
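
Only the port pair feeds the bucket index, so all flows sharing (sport, dport) land in one bucket and must be disambiguated by comparing the full arfs_tuple during lookup; a hedged usage sketch (the comparison helper is outside this hunk):

    struct hlist_head *head = arfs_hash_bucket(arfs_t, tuple->src_port,
                                               tuple->dst_port);
    /* walk 'head', match on the full tuple, else insert a new rule */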
+
+static u8 arfs_get_ip_proto(const struct sk_buff *skb)
+{
+       return (skb->protocol == htons(ETH_P_IP)) ?
+               ip_hdr(skb)->protocol : ipv6_hdr(skb)->nexthdr;
+}
+
+static struct arfs_table *arfs_get_table(struct mlx5e_arfs_tables *arfs,
+                                        u8 ip_proto, __be16 etype)
+{
+       if (etype == htons(ETH_P_IP) && ip_proto == IPPROTO_TCP)
+               return &arfs->arfs_tables[ARFS_IPV4_TCP];
+       if (etype == htons(ETH_P_IP) && ip_proto == IPPROTO_UDP)
+               return &arfs->arfs_tables[ARFS_IPV4_UDP];
+       if (etype == htons(ETH_P_IPV6) && ip_proto == IPPROTO_TCP)
+               return &arfs->arfs_tables[ARFS_IPV6_TCP];
+       if (etype == htons(ETH_P_IPV6) && ip_proto == IPPROTO_UDP)
+               return &arfs->arfs_tables[ARFS_IPV6_UDP];
+
+       return NULL;
+}
+
+static struct mlx5_flow_rule *arfs_add_rule(struct mlx5e_priv *priv,
+                                           struct arfs_rule *arfs_rule)
+{
+       struct mlx5e_arfs_tables *arfs = &priv->fs.arfs;
+       struct arfs_tuple *tuple = &arfs_rule->tuple;
+       struct mlx5_flow_rule *rule = NULL;
+       struct mlx5_flow_destination dest;
+       struct arfs_table *arfs_table;
+       u8 match_criteria_enable = 0;
+       struct mlx5_flow_table *ft;
+       u32 *match_criteria;
+       u32 *match_value;
+       int err = 0;
+
+       match_value     = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+       match_criteria  = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+       if (!match_value || !match_criteria) {
+               netdev_err(priv->netdev, "%s: alloc failed\n", __func__);
+               err = -ENOMEM;
+               goto out;
+       }
+       match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+       MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+                        outer_headers.ethertype);
+       MLX5_SET(fte_match_param, match_value, outer_headers.ethertype,
+                ntohs(tuple->etype));
+       arfs_table = arfs_get_table(arfs, tuple->ip_proto, tuple->etype);
+       if (!arfs_table) {
+               err = -EINVAL;
+               goto out;
+       }
+
+       ft = arfs_table->ft.t;
+       if (tuple->ip_proto == IPPROTO_TCP) {
+               MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+                                outer_headers.tcp_dport);
+               MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+                                outer_headers.tcp_sport);
+               MLX5_SET(fte_match_param, match_value, outer_headers.tcp_dport,
+                        ntohs(tuple->dst_port));
+               MLX5_SET(fte_match_param, match_value, outer_headers.tcp_sport,
+                        ntohs(tuple->src_port));
+       } else {
+               MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+                                outer_headers.udp_dport);
+               MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+                                outer_headers.udp_sport);
+               MLX5_SET(fte_match_param, match_value, outer_headers.udp_dport,
+                        ntohs(tuple->dst_port));
+               MLX5_SET(fte_match_param, match_value, outer_headers.udp_sport,
+                        ntohs(tuple->src_port));
+       }
+       if (tuple->etype == htons(ETH_P_IP)) {
+               memcpy(MLX5_ADDR_OF(fte_match_param, match_value,
+                                   outer_headers.src_ipv4_src_ipv6.ipv4_layout.ipv4),
+                      &tuple->src_ipv4,
+                      4);
+               memcpy(MLX5_ADDR_OF(fte_match_param, match_value,
+                                   outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4),
+                      &tuple->dst_ipv4,
+                      4);
+               MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+                                outer_headers.src_ipv4_src_ipv6.ipv4_layout.ipv4);
+               MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+                                outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4);
+       } else {
+               memcpy(MLX5_ADDR_OF(fte_match_param, match_value,
+                                   outer_headers.src_ipv4_src_ipv6.ipv6_layout.ipv6),
+                      &tuple->src_ipv6,
+                      16);
+               memcpy(MLX5_ADDR_OF(fte_match_param, match_value,
+                                   outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
+                      &tuple->dst_ipv6,
+                      16);
+               memset(MLX5_ADDR_OF(fte_match_param, match_criteria,
+                                   outer_headers.src_ipv4_src_ipv6.ipv6_layout.ipv6),
+                      0xff,
+                      16);
+               memset(MLX5_ADDR_OF(fte_match_param, match_criteria,
+                                   outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
+                      0xff,
+                      16);
+       }
+       dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
+       dest.tir_num = priv->direct_tir[arfs_rule->rxq].tirn;
+       rule = mlx5_add_flow_rule(ft, match_criteria_enable, match_criteria,
+                                 match_value, MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
+                                 MLX5_FS_DEFAULT_FLOW_TAG,
+                                 &dest);
+       if (IS_ERR(rule)) {
+               err = PTR_ERR(rule);
+               netdev_err(priv->netdev, "%s: add rule(filter id=%d, rq idx=%d) failed, err=%d\n",
+                          __func__, arfs_rule->filter_id, arfs_rule->rxq, err);
+       }
+
+out:
+       kvfree(match_criteria);
+       kvfree(match_value);
+       return err ? ERR_PTR(err) : rule;
+}
+
+static void arfs_modify_rule_rq(struct mlx5e_priv *priv,
+                               struct mlx5_flow_rule *rule, u16 rxq)
+{
+       struct mlx5_flow_destination dst;
+       int err = 0;
+
+       dst.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
+       dst.tir_num = priv->direct_tir[rxq].tirn;
+       err = mlx5_modify_rule_destination(rule, &dst);
+       if (err)
+               netdev_warn(priv->netdev,
+                           "Failed to modify aRFS rule destination to rq=%d\n", rxq);
+}
+
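+/*
+ * Deferred from mlx5e_rx_flow_steer() so the HW rule can be added (or
+ * its destination RQ changed) in process context. If the netdev is no
+ * longer open, the rule is unhashed and freed instead. Each pass also
+ * opportunistically expires stale flows.
+ */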
+static void arfs_handle_work(struct work_struct *work)
+{
+       struct arfs_rule *arfs_rule = container_of(work,
+                                                  struct arfs_rule,
+                                                  arfs_work);
+       struct mlx5e_priv *priv = arfs_rule->priv;
+       struct mlx5_flow_rule *rule;
+
+       mutex_lock(&priv->state_lock);
+       if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
+               spin_lock_bh(&priv->fs.arfs.arfs_lock);
+               hlist_del(&arfs_rule->hlist);
+               spin_unlock_bh(&priv->fs.arfs.arfs_lock);
+
+               mutex_unlock(&priv->state_lock);
+               kfree(arfs_rule);
+               goto out;
+       }
+       mutex_unlock(&priv->state_lock);
+
+       if (!arfs_rule->rule) {
+               rule = arfs_add_rule(priv, arfs_rule);
+               if (IS_ERR(rule))
+                       goto out;
+               arfs_rule->rule = rule;
+       } else {
+               arfs_modify_rule_rq(priv, arfs_rule->rule,
+                                   arfs_rule->rxq);
+       }
+out:
+       arfs_may_expire_flow(priv);
+}
+
+/* Return the L4 destination port from IPv4/IPv6 packets */
+static __be16 arfs_get_dst_port(const struct sk_buff *skb)
+{
+       char *transport_header;
+
+       transport_header = skb_transport_header(skb);
+       if (arfs_get_ip_proto(skb) == IPPROTO_TCP)
+               return ((struct tcphdr *)transport_header)->dest;
+       return ((struct udphdr *)transport_header)->dest;
+}
+
+/* Return the L4 source port from IPv4/IPv6 packets */
+static __be16 arfs_get_src_port(const struct sk_buff *skb)
+{
+       char *transport_header;
+
+       transport_header = skb_transport_header(skb);
+       if (arfs_get_ip_proto(skb) == IPPROTO_TCP)
+               return ((struct tcphdr *)transport_header)->source;
+       return ((struct udphdr *)transport_header)->source;
+}
+
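+/*
+ * Called from the RX steering path under arfs_lock with BHs disabled,
+ * hence GFP_ATOMIC. The entry is hashed immediately; the HW rule is
+ * installed later by arfs_handle_work().
+ */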
+static struct arfs_rule *arfs_alloc_rule(struct mlx5e_priv *priv,
+                                        struct arfs_table *arfs_t,
+                                        const struct sk_buff *skb,
+                                        u16 rxq, u32 flow_id)
+{
+       struct arfs_rule *rule;
+       struct arfs_tuple *tuple;
+
+       rule = kzalloc(sizeof(*rule), GFP_ATOMIC);
+       if (!rule)
+               return NULL;
+
+       rule->priv = priv;
+       rule->rxq = rxq;
+       INIT_WORK(&rule->arfs_work, arfs_handle_work);
+
+       tuple = &rule->tuple;
+       tuple->etype = skb->protocol;
+       if (tuple->etype == htons(ETH_P_IP)) {
+               tuple->src_ipv4 = ip_hdr(skb)->saddr;
+               tuple->dst_ipv4 = ip_hdr(skb)->daddr;
+       } else {
+               memcpy(&tuple->src_ipv6, &ipv6_hdr(skb)->saddr,
+                      sizeof(struct in6_addr));
+               memcpy(&tuple->dst_ipv6, &ipv6_hdr(skb)->daddr,
+                      sizeof(struct in6_addr));
+       }
+       tuple->ip_proto = arfs_get_ip_proto(skb);
+       tuple->src_port = arfs_get_src_port(skb);
+       tuple->dst_port = arfs_get_dst_port(skb);
+
+       rule->flow_id = flow_id;
+       rule->filter_id = priv->fs.arfs.last_filter_id++ % RPS_NO_FILTER;
+
+       hlist_add_head(&rule->hlist,
+                      arfs_hash_bucket(arfs_t, tuple->src_port,
+                                       tuple->dst_port));
+       return rule;
+}
+
+static bool arfs_cmp_ips(struct arfs_tuple *tuple,
+                        const struct sk_buff *skb)
+{
+       if (tuple->etype == htons(ETH_P_IP) &&
+           tuple->src_ipv4 == ip_hdr(skb)->saddr &&
+           tuple->dst_ipv4 == ip_hdr(skb)->daddr)
+               return true;
+       if (tuple->etype == htons(ETH_P_IPV6) &&
+           (!memcmp(&tuple->src_ipv6, &ipv6_hdr(skb)->saddr,
+                    sizeof(struct in6_addr))) &&
+           (!memcmp(&tuple->dst_ipv6, &ipv6_hdr(skb)->daddr,
+                    sizeof(struct in6_addr))))
+               return true;
+       return false;
+}
+
+static struct arfs_rule *arfs_find_rule(struct arfs_table *arfs_t,
+                                       const struct sk_buff *skb)
+{
+       struct arfs_rule *arfs_rule;
+       struct hlist_head *head;
+       __be16 src_port = arfs_get_src_port(skb);
+       __be16 dst_port = arfs_get_dst_port(skb);
+
+       head = arfs_hash_bucket(arfs_t, src_port, dst_port);
+       hlist_for_each_entry(arfs_rule, head, hlist) {
+               if (arfs_rule->tuple.src_port == src_port &&
+                   arfs_rule->tuple.dst_port == dst_port &&
+                   arfs_cmp_ips(&arfs_rule->tuple, skb)) {
+                       return arfs_rule;
+               }
+       }
+
+       return NULL;
+}
+
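+/*
+ * .ndo_rx_flow_steer callback: look up or allocate a rule for this
+ * flow, record the desired RQ and kick the workqueue to program the
+ * HW asynchronously. Returns the rule's filter id, which the stack
+ * hands back to rps_may_expire_flow() on expiry checks.
+ */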
+int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
+                       u16 rxq_index, u32 flow_id)
+{
+       struct mlx5e_priv *priv = netdev_priv(dev);
+       struct mlx5e_arfs_tables *arfs = &priv->fs.arfs;
+       struct arfs_table *arfs_t;
+       struct arfs_rule *arfs_rule;
+
+       if (skb->protocol != htons(ETH_P_IP) &&
+           skb->protocol != htons(ETH_P_IPV6))
+               return -EPROTONOSUPPORT;
+
+       arfs_t = arfs_get_table(arfs, arfs_get_ip_proto(skb), skb->protocol);
+       if (!arfs_t)
+               return -EPROTONOSUPPORT;
+
+       spin_lock_bh(&arfs->arfs_lock);
+       arfs_rule = arfs_find_rule(arfs_t, skb);
+       if (arfs_rule) {
+               if (arfs_rule->rxq == rxq_index) {
+                       spin_unlock_bh(&arfs->arfs_lock);
+                       return arfs_rule->filter_id;
+               }
+               arfs_rule->rxq = rxq_index;
+       } else {
+               arfs_rule = arfs_alloc_rule(priv, arfs_t, skb,
+                                           rxq_index, flow_id);
+               if (!arfs_rule) {
+                       spin_unlock_bh(&arfs->arfs_lock);
+                       return -ENOMEM;
+               }
+       }
+       queue_work(priv->fs.arfs.wq, &arfs_rule->arfs_work);
+       spin_unlock_bh(&arfs->arfs_lock);
+       return arfs_rule->filter_id;
+}
+#endif
index 3036f27..b2db180 100644
@@ -174,8 +174,14 @@ static int mlx5e_dcbnl_ieee_getpfc(struct net_device *dev,
 {
        struct mlx5e_priv *priv = netdev_priv(dev);
        struct mlx5_core_dev *mdev = priv->mdev;
+       struct mlx5e_pport_stats *pstats = &priv->stats.pport;
+       int i;
 
        pfc->pfc_cap = mlx5_max_tc(mdev) + 1;
+       for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+               pfc->requests[i]    = PPORT_PER_PRIO_GET(pstats, i, tx_pause);
+               pfc->indications[i] = PPORT_PER_PRIO_GET(pstats, i, rx_pause);
+       }
 
        return mlx5_query_port_pfc(mdev, &pfc->pfc_en, NULL);
 }
index 4077856..534d99e 100644
@@ -138,10 +138,10 @@ static const struct {
        [MLX5E_100BASE_TX]   = {
                .speed      = 100,
        },
-       [MLX5E_100BASE_T]    = {
-               .supported  = SUPPORTED_100baseT_Full,
-               .advertised = ADVERTISED_100baseT_Full,
-               .speed      = 100,
+       [MLX5E_1000BASE_T]    = {
+               .supported  = SUPPORTED_1000baseT_Full,
+               .advertised = ADVERTISED_1000baseT_Full,
+               .speed      = 1000,
        },
        [MLX5E_10GBASE_T]    = {
                .supported  = SUPPORTED_10000baseT_Full,
@@ -165,7 +165,26 @@ static const struct {
        },
 };
 
+static unsigned long mlx5e_query_pfc_combined(struct mlx5e_priv *priv)
+{
+       struct mlx5_core_dev *mdev = priv->mdev;
+       u8 pfc_en_tx;
+       u8 pfc_en_rx;
+       int err;
+
+       err = mlx5_query_port_pfc(mdev, &pfc_en_tx, &pfc_en_rx);
+
+       return err ? 0 : pfc_en_tx | pfc_en_rx;
+}
+
 #define MLX5E_NUM_Q_CNTRS(priv) (NUM_Q_COUNTERS * (!!priv->q_counter))
+#define MLX5E_NUM_RQ_STATS(priv) \
+       (NUM_RQ_STATS * priv->params.num_channels * \
+        test_bit(MLX5E_STATE_OPENED, &priv->state))
+#define MLX5E_NUM_SQ_STATS(priv) \
+       (NUM_SQ_STATS * priv->params.num_channels * priv->params.num_tc * \
+        test_bit(MLX5E_STATE_OPENED, &priv->state))
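+/* one counter set per priority that has PFC enabled in either direction */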
+#define MLX5E_NUM_PFC_COUNTERS(priv) hweight8(mlx5e_query_pfc_combined(priv))
 
 static int mlx5e_get_sset_count(struct net_device *dev, int sset)
 {
@@ -173,21 +192,85 @@ static int mlx5e_get_sset_count(struct net_device *dev, int sset)
 
        switch (sset) {
        case ETH_SS_STATS:
-               return NUM_VPORT_COUNTERS + NUM_PPORT_COUNTERS +
+               return NUM_SW_COUNTERS +
                       MLX5E_NUM_Q_CNTRS(priv) +
-                      priv->params.num_channels * NUM_RQ_STATS +
-                      priv->params.num_channels * priv->params.num_tc *
-                                                  NUM_SQ_STATS;
+                      NUM_VPORT_COUNTERS + NUM_PPORT_COUNTERS +
+                      MLX5E_NUM_RQ_STATS(priv) +
+                      MLX5E_NUM_SQ_STATS(priv) +
+                      MLX5E_NUM_PFC_COUNTERS(priv);
        /* fallthrough */
        default:
                return -EOPNOTSUPP;
        }
 }
 
+static void mlx5e_fill_stats_strings(struct mlx5e_priv *priv, uint8_t *data)
+{
+       int i, j, tc, prio, idx = 0;
+       unsigned long pfc_combined;
+
+       /* SW counters */
+       for (i = 0; i < NUM_SW_COUNTERS; i++)
+               strcpy(data + (idx++) * ETH_GSTRING_LEN, sw_stats_desc[i].name);
+
+       /* Q counters */
+       for (i = 0; i < MLX5E_NUM_Q_CNTRS(priv); i++)
+               strcpy(data + (idx++) * ETH_GSTRING_LEN, q_stats_desc[i].name);
+
+       /* VPORT counters */
+       for (i = 0; i < NUM_VPORT_COUNTERS; i++)
+               strcpy(data + (idx++) * ETH_GSTRING_LEN,
+                      vport_stats_desc[i].name);
+
+       /* PPORT counters */
+       for (i = 0; i < NUM_PPORT_802_3_COUNTERS; i++)
+               strcpy(data + (idx++) * ETH_GSTRING_LEN,
+                      pport_802_3_stats_desc[i].name);
+
+       for (i = 0; i < NUM_PPORT_2863_COUNTERS; i++)
+               strcpy(data + (idx++) * ETH_GSTRING_LEN,
+                      pport_2863_stats_desc[i].name);
+
+       for (i = 0; i < NUM_PPORT_2819_COUNTERS; i++)
+               strcpy(data + (idx++) * ETH_GSTRING_LEN,
+                      pport_2819_stats_desc[i].name);
+
+       for (prio = 0; prio < NUM_PPORT_PRIO; prio++) {
+               for (i = 0; i < NUM_PPORT_PER_PRIO_TRAFFIC_COUNTERS; i++)
+                       sprintf(data + (idx++) * ETH_GSTRING_LEN, "prio%d_%s",
+                               prio,
+                               pport_per_prio_traffic_stats_desc[i].name);
+       }
+
+       pfc_combined = mlx5e_query_pfc_combined(priv);
+       for_each_set_bit(prio, &pfc_combined, NUM_PPORT_PRIO) {
+               for (i = 0; i < NUM_PPORT_PER_PRIO_PFC_COUNTERS; i++) {
+                       sprintf(data + (idx++) * ETH_GSTRING_LEN, "prio%d_%s",
+                               prio, pport_per_prio_pfc_stats_desc[i].name);
+               }
+       }
+
+       if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
+               return;
+
+       /* per channel counters */
+       for (i = 0; i < priv->params.num_channels; i++)
+               for (j = 0; j < NUM_RQ_STATS; j++)
+                       sprintf(data + (idx++) * ETH_GSTRING_LEN, "rx%d_%s", i,
+                               rq_stats_desc[j].name);
+
+       for (tc = 0; tc < priv->params.num_tc; tc++)
+               for (i = 0; i < priv->params.num_channels; i++)
+                       for (j = 0; j < NUM_SQ_STATS; j++)
+                               sprintf(data + (idx++) * ETH_GSTRING_LEN,
+                                       "tx%d_%s",
+                                       priv->channeltc_to_txq_map[i][tc],
+                                       sq_stats_desc[j].name);
+}
+
 static void mlx5e_get_strings(struct net_device *dev,
                              uint32_t stringset, uint8_t *data)
 {
-       int i, j, tc, idx = 0;
        struct mlx5e_priv *priv = netdev_priv(dev);
 
        switch (stringset) {
@@ -198,35 +281,7 @@ static void mlx5e_get_strings(struct net_device *dev,
                break;
 
        case ETH_SS_STATS:
-               /* VPORT counters */
-               for (i = 0; i < NUM_VPORT_COUNTERS; i++)
-                       strcpy(data + (idx++) * ETH_GSTRING_LEN,
-                              vport_strings[i]);
-
-               /* Q counters */
-               for (i = 0; i < MLX5E_NUM_Q_CNTRS(priv); i++)
-                       strcpy(data + (idx++) * ETH_GSTRING_LEN,
-                              qcounter_stats_strings[i]);
-
-               /* PPORT counters */
-               for (i = 0; i < NUM_PPORT_COUNTERS; i++)
-                       strcpy(data + (idx++) * ETH_GSTRING_LEN,
-                              pport_strings[i]);
-
-               /* per channel counters */
-               for (i = 0; i < priv->params.num_channels; i++)
-                       for (j = 0; j < NUM_RQ_STATS; j++)
-                               sprintf(data + (idx++) * ETH_GSTRING_LEN,
-                                       "rx%d_%s", i, rq_stats_strings[j]);
-
-               for (tc = 0; tc < priv->params.num_tc; tc++)
-                       for (i = 0; i < priv->params.num_channels; i++)
-                               for (j = 0; j < NUM_SQ_STATS; j++)
-                                       sprintf(data +
-                                             (idx++) * ETH_GSTRING_LEN,
-                                             "tx%d_%s",
-                                             priv->channeltc_to_txq_map[i][tc],
-                                             sq_stats_strings[j]);
+               mlx5e_fill_stats_strings(priv, data);
                break;
        }
 }
@@ -235,7 +290,8 @@ static void mlx5e_get_ethtool_stats(struct net_device *dev,
                                    struct ethtool_stats *stats, u64 *data)
 {
        struct mlx5e_priv *priv = netdev_priv(dev);
-       int i, j, tc, idx = 0;
+       int i, j, tc, prio, idx = 0;
+       unsigned long pfc_combined;
 
        if (!data)
                return;
@@ -245,28 +301,59 @@ static void mlx5e_get_ethtool_stats(struct net_device *dev,
                mlx5e_update_stats(priv);
        mutex_unlock(&priv->state_lock);
 
-       for (i = 0; i < NUM_VPORT_COUNTERS; i++)
-               data[idx++] = ((u64 *)&priv->stats.vport)[i];
+       for (i = 0; i < NUM_SW_COUNTERS; i++)
+               data[idx++] = MLX5E_READ_CTR64_CPU(&priv->stats.sw,
+                                                  sw_stats_desc, i);
 
        for (i = 0; i < MLX5E_NUM_Q_CNTRS(priv); i++)
-               data[idx++] = ((u32 *)&priv->stats.qcnt)[i];
+               data[idx++] = MLX5E_READ_CTR32_CPU(&priv->stats.qcnt,
+                                                  q_stats_desc, i);
+
+       for (i = 0; i < NUM_VPORT_COUNTERS; i++)
+               data[idx++] = MLX5E_READ_CTR64_BE(priv->stats.vport.query_vport_out,
+                                                 vport_stats_desc, i);
+
+       for (i = 0; i < NUM_PPORT_802_3_COUNTERS; i++)
+               data[idx++] = MLX5E_READ_CTR64_BE(&priv->stats.pport.IEEE_802_3_counters,
+                                                 pport_802_3_stats_desc, i);
+
+       for (i = 0; i < NUM_PPORT_2863_COUNTERS; i++)
+               data[idx++] = MLX5E_READ_CTR64_BE(&priv->stats.pport.RFC_2863_counters,
+                                                 pport_2863_stats_desc, i);
+
+       for (i = 0; i < NUM_PPORT_2819_COUNTERS; i++)
+               data[idx++] = MLX5E_READ_CTR64_BE(&priv->stats.pport.RFC_2819_counters,
+                                                 pport_2819_stats_desc, i);
+
+       for (prio = 0; prio < NUM_PPORT_PRIO; prio++) {
+               for (i = 0; i < NUM_PPORT_PER_PRIO_TRAFFIC_COUNTERS; i++)
+                       data[idx++] = MLX5E_READ_CTR64_BE(&priv->stats.pport.per_prio_counters[prio],
+                                                pport_per_prio_traffic_stats_desc, i);
+       }
 
-       for (i = 0; i < NUM_PPORT_COUNTERS; i++)
-               data[idx++] = be64_to_cpu(((__be64 *)&priv->stats.pport)[i]);
+       pfc_combined = mlx5e_query_pfc_combined(priv);
+       for_each_set_bit(prio, &pfc_combined, NUM_PPORT_PRIO) {
+               for (i = 0; i < NUM_PPORT_PER_PRIO_PFC_COUNTERS; i++) {
+                       data[idx++] = MLX5E_READ_CTR64_BE(&priv->stats.pport.per_prio_counters[prio],
+                                                         pport_per_prio_pfc_stats_desc, i);
+               }
+       }
+
+       if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
+               return;
 
        /* per channel counters */
        for (i = 0; i < priv->params.num_channels; i++)
                for (j = 0; j < NUM_RQ_STATS; j++)
-                       data[idx++] = !test_bit(MLX5E_STATE_OPENED,
-                                               &priv->state) ? 0 :
-                                      ((u64 *)&priv->channel[i]->rq.stats)[j];
+                       data[idx++] =
+                              MLX5E_READ_CTR64_CPU(&priv->channel[i]->rq.stats,
+                                                   rq_stats_desc, j);
 
        for (tc = 0; tc < priv->params.num_tc; tc++)
                for (i = 0; i < priv->params.num_channels; i++)
                        for (j = 0; j < NUM_SQ_STATS; j++)
-                               data[idx++] = !test_bit(MLX5E_STATE_OPENED,
-                                                       &priv->state) ? 0 :
-                               ((u64 *)&priv->channel[i]->sq[tc].stats)[j];
+                               data[idx++] = MLX5E_READ_CTR64_CPU(&priv->channel[i]->sq[tc].stats,
+                                                                  sq_stats_desc, j);
 }
 
 static void mlx5e_get_ringparam(struct net_device *dev,
@@ -369,6 +456,7 @@ static int mlx5e_set_channels(struct net_device *dev,
        struct mlx5e_priv *priv = netdev_priv(dev);
        int ncv = mlx5e_get_max_num_channels(priv->mdev);
        unsigned int count = ch->combined_count;
+       bool arfs_enabled;
        bool was_opened;
        int err = 0;
 
@@ -397,13 +485,27 @@ static int mlx5e_set_channels(struct net_device *dev,
        if (was_opened)
                mlx5e_close_locked(dev);
 
+       arfs_enabled = dev->features & NETIF_F_NTUPLE;
+       if (arfs_enabled)
+               mlx5e_arfs_disable(priv);
+
        priv->params.num_channels = count;
        mlx5e_build_default_indir_rqt(priv->mdev, priv->params.indirection_rqt,
                                      MLX5E_INDIR_RQT_SIZE, count);
 
        if (was_opened)
                err = mlx5e_open_locked(dev);
+       if (err)
+               goto out;
+
+       if (arfs_enabled) {
+               err = mlx5e_arfs_enable(priv);
+               if (err)
+                       netdev_err(dev, "%s: mlx5e_arfs_enable failed: %d\n",
+                                  __func__, err);
+       }
 
+out:
        mutex_unlock(&priv->state_lock);
 
        return err;
@@ -739,9 +841,8 @@ static void mlx5e_modify_tirs_hash(struct mlx5e_priv *priv, void *in, int inlen)
        MLX5_SET(modify_tir_in, in, bitmask.hash, 1);
        mlx5e_build_tir_ctx_hash(tirc, priv);
 
-       for (i = 0; i < MLX5E_NUM_TT; i++)
-               if (IS_HASHING_TT(i))
-                       mlx5_core_modify_tir(mdev, priv->tirn[i], in, inlen);
+       for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++)
+               mlx5_core_modify_tir(mdev, priv->indir_tirn[i], in, inlen);
 }
 
 static int mlx5e_set_rxfh(struct net_device *dev, const u32 *indir,
@@ -763,9 +864,11 @@ static int mlx5e_set_rxfh(struct net_device *dev, const u32 *indir,
        mutex_lock(&priv->state_lock);
 
        if (indir) {
+               u32 rqtn = priv->indir_rqtn;
+
                memcpy(priv->params.indirection_rqt, indir,
                       sizeof(priv->params.indirection_rqt));
-               mlx5e_redirect_rqt(priv, MLX5E_INDIRECTION_RQT);
+               mlx5e_redirect_rqt(priv, rqtn, MLX5E_INDIR_RQT_SIZE, 0);
        }
 
        if (key)
@@ -1048,6 +1151,108 @@ static int mlx5e_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
        return mlx5_set_port_wol(mdev, mlx5_wol_mode);
 }
 
+static int mlx5e_set_phys_id(struct net_device *dev,
+                            enum ethtool_phys_id_state state)
+{
+       struct mlx5e_priv *priv = netdev_priv(dev);
+       struct mlx5_core_dev *mdev = priv->mdev;
+       u16 beacon_duration;
+
+       if (!MLX5_CAP_GEN(mdev, beacon_led))
+               return -EOPNOTSUPP;
+
+       switch (state) {
+       case ETHTOOL_ID_ACTIVE:
+               beacon_duration = MLX5_BEACON_DURATION_INF;
+               break;
+       case ETHTOOL_ID_INACTIVE:
+               beacon_duration = MLX5_BEACON_DURATION_OFF;
+               break;
+       default:
+               return -EOPNOTSUPP;
+       }
+
+       return mlx5_set_port_beacon(mdev, beacon_duration);
+}
+
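+/*
+ * Identify the plugged module from the first two EEPROM bytes
+ * (identifier and revision id) and report the matching SFF spec and
+ * EEPROM size to ethtool.
+ */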
+static int mlx5e_get_module_info(struct net_device *netdev,
+                                struct ethtool_modinfo *modinfo)
+{
+       struct mlx5e_priv *priv = netdev_priv(netdev);
+       struct mlx5_core_dev *dev = priv->mdev;
+       int size_read = 0;
+       u8 data[4];
+
+       size_read = mlx5_query_module_eeprom(dev, 0, 2, data);
+       if (size_read < 2)
+               return -EIO;
+
+       /* data[0] = identifier byte */
+       switch (data[0]) {
+       case MLX5_MODULE_ID_QSFP:
+               modinfo->type       = ETH_MODULE_SFF_8436;
+               modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN;
+               break;
+       case MLX5_MODULE_ID_QSFP_PLUS:
+       case MLX5_MODULE_ID_QSFP28:
+               /* data[1] = revision id */
+               if (data[0] == MLX5_MODULE_ID_QSFP28 || data[1] >= 0x3) {
+                       modinfo->type       = ETH_MODULE_SFF_8636;
+                       modinfo->eeprom_len = ETH_MODULE_SFF_8636_LEN;
+               } else {
+                       modinfo->type       = ETH_MODULE_SFF_8436;
+                       modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN;
+               }
+               break;
+       case MLX5_MODULE_ID_SFP:
+               modinfo->type       = ETH_MODULE_SFF_8472;
+               modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
+               break;
+       default:
+               netdev_err(priv->netdev, "%s: cable type not recognized:0x%x\n",
+                          __func__, data[0]);
+               return -EINVAL;
+       }
+
+       return 0;
+}
+
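+/*
+ * Read the module EEPROM in chunks until the requested length is
+ * satisfied; a zero-length read from firmware means there is no more
+ * data to return.
+ */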
+static int mlx5e_get_module_eeprom(struct net_device *netdev,
+                                  struct ethtool_eeprom *ee,
+                                  u8 *data)
+{
+       struct mlx5e_priv *priv = netdev_priv(netdev);
+       struct mlx5_core_dev *mdev = priv->mdev;
+       int offset = ee->offset;
+       int size_read;
+       int i = 0;
+
+       if (!ee->len)
+               return -EINVAL;
+
+       memset(data, 0, ee->len);
+
+       while (i < ee->len) {
+               size_read = mlx5_query_module_eeprom(mdev, offset, ee->len - i,
+                                                    data + i);
+
+               if (!size_read)
+                       /* Done reading */
+                       return 0;
+
+               if (size_read < 0) {
+                       netdev_err(priv->netdev, "%s: mlx5_query_eeprom failed:0x%x\n",
+                                  __func__, size_read);
+                       return 0;
+               }
+
+               i += size_read;
+               offset += size_read;
+       }
+
+       return 0;
+}
+
 const struct ethtool_ops mlx5e_ethtool_ops = {
        .get_drvinfo       = mlx5e_get_drvinfo,
        .get_link          = ethtool_op_get_link,
@@ -1072,6 +1277,9 @@ const struct ethtool_ops mlx5e_ethtool_ops = {
        .get_pauseparam    = mlx5e_get_pauseparam,
        .set_pauseparam    = mlx5e_set_pauseparam,
        .get_ts_info       = mlx5e_get_ts_info,
+       .set_phys_id       = mlx5e_set_phys_id,
        .get_wol           = mlx5e_get_wol,
        .set_wol           = mlx5e_set_wol,
+       .get_module_info   = mlx5e_get_module_info,
+       .get_module_eeprom = mlx5e_get_module_eeprom,
 };
index d00a242..b327400 100644
 #include <linux/mlx5/fs.h>
 #include "en.h"
 
-#define MLX5_SET_CFG(p, f, v) MLX5_SET(create_flow_group_in, p, f, v)
+static int mlx5e_add_l2_flow_rule(struct mlx5e_priv *priv,
+                                 struct mlx5e_l2_rule *ai, int type);
+static void mlx5e_del_l2_flow_rule(struct mlx5e_priv *priv,
+                                  struct mlx5e_l2_rule *ai);
 
 enum {
        MLX5E_FULLMATCH = 0,
@@ -58,21 +61,21 @@ enum {
        MLX5E_ACTION_DEL  = 2,
 };
 
-struct mlx5e_eth_addr_hash_node {
+struct mlx5e_l2_hash_node {
        struct hlist_node          hlist;
        u8                         action;
-       struct mlx5e_eth_addr_info ai;
+       struct mlx5e_l2_rule ai;
 };
 
-static inline int mlx5e_hash_eth_addr(u8 *addr)
+static inline int mlx5e_hash_l2(u8 *addr)
 {
        return addr[5];
 }
 
-static void mlx5e_add_eth_addr_to_hash(struct hlist_head *hash, u8 *addr)
+static void mlx5e_add_l2_to_hash(struct hlist_head *hash, u8 *addr)
 {
-       struct mlx5e_eth_addr_hash_node *hn;
-       int ix = mlx5e_hash_eth_addr(addr);
+       struct mlx5e_l2_hash_node *hn;
+       int ix = mlx5e_hash_l2(addr);
        int found = 0;
 
        hlist_for_each_entry(hn, &hash[ix], hlist)
@@ -96,371 +99,12 @@ static void mlx5e_add_eth_addr_to_hash(struct hlist_head *hash, u8 *addr)
        hlist_add_head(&hn->hlist, &hash[ix]);
 }
 
-static void mlx5e_del_eth_addr_from_hash(struct mlx5e_eth_addr_hash_node *hn)
+static void mlx5e_del_l2_from_hash(struct mlx5e_l2_hash_node *hn)
 {
        hlist_del(&hn->hlist);
        kfree(hn);
 }
 
-static void mlx5e_del_eth_addr_from_flow_table(struct mlx5e_priv *priv,
-                                              struct mlx5e_eth_addr_info *ai)
-{
-       if (ai->tt_vec & BIT(MLX5E_TT_IPV6_IPSEC_ESP))
-               mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV6_IPSEC_ESP]);
-
-       if (ai->tt_vec & BIT(MLX5E_TT_IPV4_IPSEC_ESP))
-               mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV4_IPSEC_ESP]);
-
-       if (ai->tt_vec & BIT(MLX5E_TT_IPV6_IPSEC_AH))
-               mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV6_IPSEC_AH]);
-
-       if (ai->tt_vec & BIT(MLX5E_TT_IPV4_IPSEC_AH))
-               mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV4_IPSEC_AH]);
-
-       if (ai->tt_vec & BIT(MLX5E_TT_IPV6_TCP))
-               mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV6_TCP]);
-
-       if (ai->tt_vec & BIT(MLX5E_TT_IPV4_TCP))
-               mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV4_TCP]);
-
-       if (ai->tt_vec & BIT(MLX5E_TT_IPV6_UDP))
-               mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV6_UDP]);
-
-       if (ai->tt_vec & BIT(MLX5E_TT_IPV4_UDP))
-               mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV4_UDP]);
-
-       if (ai->tt_vec & BIT(MLX5E_TT_IPV6))
-               mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV6]);
-
-       if (ai->tt_vec & BIT(MLX5E_TT_IPV4))
-               mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV4]);
-
-       if (ai->tt_vec & BIT(MLX5E_TT_ANY))
-               mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_ANY]);
-}
-
-static int mlx5e_get_eth_addr_type(u8 *addr)
-{
-       if (is_unicast_ether_addr(addr))
-               return MLX5E_UC;
-
-       if ((addr[0] == 0x01) &&
-           (addr[1] == 0x00) &&
-           (addr[2] == 0x5e) &&
-          !(addr[3] &  0x80))
-               return MLX5E_MC_IPV4;
-
-       if ((addr[0] == 0x33) &&
-           (addr[1] == 0x33))
-               return MLX5E_MC_IPV6;
-
-       return MLX5E_MC_OTHER;
-}
-
-static u32 mlx5e_get_tt_vec(struct mlx5e_eth_addr_info *ai, int type)
-{
-       int eth_addr_type;
-       u32 ret;
-
-       switch (type) {
-       case MLX5E_FULLMATCH:
-               eth_addr_type = mlx5e_get_eth_addr_type(ai->addr);
-               switch (eth_addr_type) {
-               case MLX5E_UC:
-                       ret =
-                               BIT(MLX5E_TT_IPV4_TCP)       |
-                               BIT(MLX5E_TT_IPV6_TCP)       |
-                               BIT(MLX5E_TT_IPV4_UDP)       |
-                               BIT(MLX5E_TT_IPV6_UDP)       |
-                               BIT(MLX5E_TT_IPV4_IPSEC_AH)  |
-                               BIT(MLX5E_TT_IPV6_IPSEC_AH)  |
-                               BIT(MLX5E_TT_IPV4_IPSEC_ESP) |
-                               BIT(MLX5E_TT_IPV6_IPSEC_ESP) |
-                               BIT(MLX5E_TT_IPV4)           |
-                               BIT(MLX5E_TT_IPV6)           |
-                               BIT(MLX5E_TT_ANY)            |
-                               0;
-                       break;
-
-               case MLX5E_MC_IPV4:
-                       ret =
-                               BIT(MLX5E_TT_IPV4_UDP)       |
-                               BIT(MLX5E_TT_IPV4)           |
-                               0;
-                       break;
-
-               case MLX5E_MC_IPV6:
-                       ret =
-                               BIT(MLX5E_TT_IPV6_UDP)       |
-                               BIT(MLX5E_TT_IPV6)           |
-                               0;
-                       break;
-
-               case MLX5E_MC_OTHER:
-                       ret =
-                               BIT(MLX5E_TT_ANY)            |
-                               0;
-                       break;
-               }
-
-               break;
-
-       case MLX5E_ALLMULTI:
-               ret =
-                       BIT(MLX5E_TT_IPV4_UDP) |
-                       BIT(MLX5E_TT_IPV6_UDP) |
-                       BIT(MLX5E_TT_IPV4)     |
-                       BIT(MLX5E_TT_IPV6)     |
-                       BIT(MLX5E_TT_ANY)      |
-                       0;
-               break;
-
-       default: /* MLX5E_PROMISC */
-               ret =
-                       BIT(MLX5E_TT_IPV4_TCP)       |
-                       BIT(MLX5E_TT_IPV6_TCP)       |
-                       BIT(MLX5E_TT_IPV4_UDP)       |
-                       BIT(MLX5E_TT_IPV6_UDP)       |
-                       BIT(MLX5E_TT_IPV4_IPSEC_AH)  |
-                       BIT(MLX5E_TT_IPV6_IPSEC_AH)  |
-                       BIT(MLX5E_TT_IPV4_IPSEC_ESP) |
-                       BIT(MLX5E_TT_IPV6_IPSEC_ESP) |
-                       BIT(MLX5E_TT_IPV4)           |
-                       BIT(MLX5E_TT_IPV6)           |
-                       BIT(MLX5E_TT_ANY)            |
-                       0;
-               break;
-       }
-
-       return ret;
-}
-
-static int __mlx5e_add_eth_addr_rule(struct mlx5e_priv *priv,
-                                    struct mlx5e_eth_addr_info *ai,
-                                    int type, u32 *mc, u32 *mv)
-{
-       struct mlx5_flow_destination dest;
-       u8 match_criteria_enable = 0;
-       struct mlx5_flow_rule **rule_p;
-       struct mlx5_flow_table *ft = priv->fts.main.t;
-       u8 *mc_dmac = MLX5_ADDR_OF(fte_match_param, mc,
-                                  outer_headers.dmac_47_16);
-       u8 *mv_dmac = MLX5_ADDR_OF(fte_match_param, mv,
-                                  outer_headers.dmac_47_16);
-       u32 *tirn = priv->tirn;
-       u32 tt_vec;
-       int err = 0;
-
-       dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
-
-       switch (type) {
-       case MLX5E_FULLMATCH:
-               match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
-               eth_broadcast_addr(mc_dmac);
-               ether_addr_copy(mv_dmac, ai->addr);
-               break;
-
-       case MLX5E_ALLMULTI:
-               match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
-               mc_dmac[0] = 0x01;
-               mv_dmac[0] = 0x01;
-               break;
-
-       case MLX5E_PROMISC:
-               break;
-       }
-
-       tt_vec = mlx5e_get_tt_vec(ai, type);
-
-       if (tt_vec & BIT(MLX5E_TT_ANY)) {
-               rule_p = &ai->ft_rule[MLX5E_TT_ANY];
-               dest.tir_num = tirn[MLX5E_TT_ANY];
-               *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
-                                            MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
-                                            MLX5_FS_DEFAULT_FLOW_TAG, &dest);
-               if (IS_ERR_OR_NULL(*rule_p))
-                       goto err_del_ai;
-               ai->tt_vec |= BIT(MLX5E_TT_ANY);
-       }
-
-       match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
-       MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ethertype);
-
-       if (tt_vec & BIT(MLX5E_TT_IPV4)) {
-               rule_p = &ai->ft_rule[MLX5E_TT_IPV4];
-               dest.tir_num = tirn[MLX5E_TT_IPV4];
-               MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
-                        ETH_P_IP);
-               *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
-                                            MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
-                                            MLX5_FS_DEFAULT_FLOW_TAG, &dest);
-               if (IS_ERR_OR_NULL(*rule_p))
-                       goto err_del_ai;
-               ai->tt_vec |= BIT(MLX5E_TT_IPV4);
-       }
-
-       if (tt_vec & BIT(MLX5E_TT_IPV6)) {
-               rule_p = &ai->ft_rule[MLX5E_TT_IPV6];
-               dest.tir_num = tirn[MLX5E_TT_IPV6];
-               MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
-                        ETH_P_IPV6);
-               *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
-                                            MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
-                                            MLX5_FS_DEFAULT_FLOW_TAG, &dest);
-               if (IS_ERR_OR_NULL(*rule_p))
-                       goto err_del_ai;
-               ai->tt_vec |= BIT(MLX5E_TT_IPV6);
-       }
-
-       MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ip_protocol);
-       MLX5_SET(fte_match_param, mv, outer_headers.ip_protocol, IPPROTO_UDP);
-
-       if (tt_vec & BIT(MLX5E_TT_IPV4_UDP)) {
-               rule_p = &ai->ft_rule[MLX5E_TT_IPV4_UDP];
-               dest.tir_num = tirn[MLX5E_TT_IPV4_UDP];
-               MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
-                        ETH_P_IP);
-               *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
-                                            MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
-                                            MLX5_FS_DEFAULT_FLOW_TAG, &dest);
-               if (IS_ERR_OR_NULL(*rule_p))
-                       goto err_del_ai;
-               ai->tt_vec |= BIT(MLX5E_TT_IPV4_UDP);
-       }
-
-       if (tt_vec & BIT(MLX5E_TT_IPV6_UDP)) {
-               rule_p = &ai->ft_rule[MLX5E_TT_IPV6_UDP];
-               dest.tir_num = tirn[MLX5E_TT_IPV6_UDP];
-               MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
-                        ETH_P_IPV6);
-               *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
-                                            MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
-                                            MLX5_FS_DEFAULT_FLOW_TAG, &dest);
-               if (IS_ERR_OR_NULL(*rule_p))
-                       goto err_del_ai;
-               ai->tt_vec |= BIT(MLX5E_TT_IPV6_UDP);
-       }
-
-       MLX5_SET(fte_match_param, mv, outer_headers.ip_protocol, IPPROTO_TCP);
-
-       if (tt_vec & BIT(MLX5E_TT_IPV4_TCP)) {
-               rule_p = &ai->ft_rule[MLX5E_TT_IPV4_TCP];
-               dest.tir_num = tirn[MLX5E_TT_IPV4_TCP];
-               MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
-                        ETH_P_IP);
-               *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
-                                            MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
-                                            MLX5_FS_DEFAULT_FLOW_TAG, &dest);
-               if (IS_ERR_OR_NULL(*rule_p))
-                       goto err_del_ai;
-               ai->tt_vec |= BIT(MLX5E_TT_IPV4_TCP);
-       }
-
-       if (tt_vec & BIT(MLX5E_TT_IPV6_TCP)) {
-               rule_p = &ai->ft_rule[MLX5E_TT_IPV6_TCP];
-               dest.tir_num = tirn[MLX5E_TT_IPV6_TCP];
-               MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
-                        ETH_P_IPV6);
-               *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
-                                            MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
-                                            MLX5_FS_DEFAULT_FLOW_TAG, &dest);
-               if (IS_ERR_OR_NULL(*rule_p))
-                       goto err_del_ai;
-
-               ai->tt_vec |= BIT(MLX5E_TT_IPV6_TCP);
-       }
-
-       MLX5_SET(fte_match_param, mv, outer_headers.ip_protocol, IPPROTO_AH);
-
-       if (tt_vec & BIT(MLX5E_TT_IPV4_IPSEC_AH)) {
-               rule_p = &ai->ft_rule[MLX5E_TT_IPV4_IPSEC_AH];
-               dest.tir_num = tirn[MLX5E_TT_IPV4_IPSEC_AH];
-               MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
-                        ETH_P_IP);
-               *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
-                                            MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
-                                            MLX5_FS_DEFAULT_FLOW_TAG, &dest);
-               if (IS_ERR_OR_NULL(*rule_p))
-                       goto err_del_ai;
-               ai->tt_vec |= BIT(MLX5E_TT_IPV4_IPSEC_AH);
-       }
-
-       if (tt_vec & BIT(MLX5E_TT_IPV6_IPSEC_AH)) {
-               rule_p = &ai->ft_rule[MLX5E_TT_IPV6_IPSEC_AH];
-               dest.tir_num = tirn[MLX5E_TT_IPV6_IPSEC_AH];
-               MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
-                        ETH_P_IPV6);
-               *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
-                                            MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
-                                            MLX5_FS_DEFAULT_FLOW_TAG, &dest);
-               if (IS_ERR_OR_NULL(*rule_p))
-                       goto err_del_ai;
-               ai->tt_vec |= BIT(MLX5E_TT_IPV6_IPSEC_AH);
-       }
-
-       MLX5_SET(fte_match_param, mv, outer_headers.ip_protocol, IPPROTO_ESP);
-
-       if (tt_vec & BIT(MLX5E_TT_IPV4_IPSEC_ESP)) {
-               rule_p = &ai->ft_rule[MLX5E_TT_IPV4_IPSEC_ESP];
-               dest.tir_num = tirn[MLX5E_TT_IPV4_IPSEC_ESP];
-               MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
-                        ETH_P_IP);
-               *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
-                                            MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
-                                            MLX5_FS_DEFAULT_FLOW_TAG, &dest);
-               if (IS_ERR_OR_NULL(*rule_p))
-                       goto err_del_ai;
-               ai->tt_vec |= BIT(MLX5E_TT_IPV4_IPSEC_ESP);
-       }
-
-       if (tt_vec & BIT(MLX5E_TT_IPV6_IPSEC_ESP)) {
-               rule_p = &ai->ft_rule[MLX5E_TT_IPV6_IPSEC_ESP];
-               dest.tir_num = tirn[MLX5E_TT_IPV6_IPSEC_ESP];
-               MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
-                        ETH_P_IPV6);
-               *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
-                                            MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
-                                            MLX5_FS_DEFAULT_FLOW_TAG, &dest);
-               if (IS_ERR_OR_NULL(*rule_p))
-                       goto err_del_ai;
-               ai->tt_vec |= BIT(MLX5E_TT_IPV6_IPSEC_ESP);
-       }
-
-       return 0;
-
-err_del_ai:
-       err = PTR_ERR(*rule_p);
-       *rule_p = NULL;
-       mlx5e_del_eth_addr_from_flow_table(priv, ai);
-
-       return err;
-}
-
-static int mlx5e_add_eth_addr_rule(struct mlx5e_priv *priv,
-                                  struct mlx5e_eth_addr_info *ai, int type)
-{
-       u32 *match_criteria;
-       u32 *match_value;
-       int err = 0;
-
-       match_value     = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
-       match_criteria  = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
-       if (!match_value || !match_criteria) {
-               netdev_err(priv->netdev, "%s: alloc failed\n", __func__);
-               err = -ENOMEM;
-               goto add_eth_addr_rule_out;
-       }
-
-       err = __mlx5e_add_eth_addr_rule(priv, ai, type, match_criteria,
-                                       match_value);
-
-add_eth_addr_rule_out:
-       kvfree(match_criteria);
-       kvfree(match_value);
-
-       return err;
-}
-
 static int mlx5e_vport_context_update_vlans(struct mlx5e_priv *priv)
 {
        struct net_device *ndev = priv->netdev;
@@ -472,7 +116,7 @@ static int mlx5e_vport_context_update_vlans(struct mlx5e_priv *priv)
        int i;
 
        list_size = 0;
-       for_each_set_bit(vlan, priv->vlan.active_vlans, VLAN_N_VID)
+       for_each_set_bit(vlan, priv->fs.vlan.active_vlans, VLAN_N_VID)
                list_size++;
 
        max_list_size = 1 << MLX5_CAP_GEN(priv->mdev, log_max_vlan_list);
@@ -489,7 +133,7 @@ static int mlx5e_vport_context_update_vlans(struct mlx5e_priv *priv)
                return -ENOMEM;
 
        i = 0;
-       for_each_set_bit(vlan, priv->vlan.active_vlans, VLAN_N_VID) {
+       for_each_set_bit(vlan, priv->fs.vlan.active_vlans, VLAN_N_VID) {
                if (i >= list_size)
                        break;
                vlans[i++] = vlan;
@@ -514,28 +158,28 @@ static int __mlx5e_add_vlan_rule(struct mlx5e_priv *priv,
                                 enum mlx5e_vlan_rule_type rule_type,
                                 u16 vid, u32 *mc, u32 *mv)
 {
-       struct mlx5_flow_table *ft = priv->fts.vlan.t;
+       struct mlx5_flow_table *ft = priv->fs.vlan.ft.t;
        struct mlx5_flow_destination dest;
        u8 match_criteria_enable = 0;
        struct mlx5_flow_rule **rule_p;
        int err = 0;
 
        dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
-       dest.ft = priv->fts.main.t;
+       dest.ft = priv->fs.l2.ft.t;
 
        match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
        MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.vlan_tag);
 
        switch (rule_type) {
        case MLX5E_VLAN_RULE_TYPE_UNTAGGED:
-               rule_p = &priv->vlan.untagged_rule;
+               rule_p = &priv->fs.vlan.untagged_rule;
                break;
        case MLX5E_VLAN_RULE_TYPE_ANY_VID:
-               rule_p = &priv->vlan.any_vlan_rule;
+               rule_p = &priv->fs.vlan.any_vlan_rule;
                MLX5_SET(fte_match_param, mv, outer_headers.vlan_tag, 1);
                break;
        default: /* MLX5E_VLAN_RULE_TYPE_MATCH_VID */
-               rule_p = &priv->vlan.active_vlans_rule[vid];
+               rule_p = &priv->fs.vlan.active_vlans_rule[vid];
                MLX5_SET(fte_match_param, mv, outer_headers.vlan_tag, 1);
                MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.first_vid);
                MLX5_SET(fte_match_param, mv, outer_headers.first_vid, vid);
@@ -589,22 +233,22 @@ static void mlx5e_del_vlan_rule(struct mlx5e_priv *priv,
 {
        switch (rule_type) {
        case MLX5E_VLAN_RULE_TYPE_UNTAGGED:
-               if (priv->vlan.untagged_rule) {
-                       mlx5_del_flow_rule(priv->vlan.untagged_rule);
-                       priv->vlan.untagged_rule = NULL;
+               if (priv->fs.vlan.untagged_rule) {
+                       mlx5_del_flow_rule(priv->fs.vlan.untagged_rule);
+                       priv->fs.vlan.untagged_rule = NULL;
                }
                break;
        case MLX5E_VLAN_RULE_TYPE_ANY_VID:
-               if (priv->vlan.any_vlan_rule) {
-                       mlx5_del_flow_rule(priv->vlan.any_vlan_rule);
-                       priv->vlan.any_vlan_rule = NULL;
+               if (priv->fs.vlan.any_vlan_rule) {
+                       mlx5_del_flow_rule(priv->fs.vlan.any_vlan_rule);
+                       priv->fs.vlan.any_vlan_rule = NULL;
                }
                break;
        case MLX5E_VLAN_RULE_TYPE_MATCH_VID:
                mlx5e_vport_context_update_vlans(priv);
-               if (priv->vlan.active_vlans_rule[vid]) {
-                       mlx5_del_flow_rule(priv->vlan.active_vlans_rule[vid]);
-                       priv->vlan.active_vlans_rule[vid] = NULL;
+               if (priv->fs.vlan.active_vlans_rule[vid]) {
+                       mlx5_del_flow_rule(priv->fs.vlan.active_vlans_rule[vid]);
+                       priv->fs.vlan.active_vlans_rule[vid] = NULL;
                }
                mlx5e_vport_context_update_vlans(priv);
                break;
@@ -613,10 +257,10 @@ static void mlx5e_del_vlan_rule(struct mlx5e_priv *priv,
 
 void mlx5e_enable_vlan_filter(struct mlx5e_priv *priv)
 {
-       if (!priv->vlan.filter_disabled)
+       if (!priv->fs.vlan.filter_disabled)
                return;
 
-       priv->vlan.filter_disabled = false;
+       priv->fs.vlan.filter_disabled = false;
        if (priv->netdev->flags & IFF_PROMISC)
                return;
        mlx5e_del_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_ANY_VID, 0);
@@ -624,10 +268,10 @@ void mlx5e_enable_vlan_filter(struct mlx5e_priv *priv)
 
 void mlx5e_disable_vlan_filter(struct mlx5e_priv *priv)
 {
-       if (priv->vlan.filter_disabled)
+       if (priv->fs.vlan.filter_disabled)
                return;
 
-       priv->vlan.filter_disabled = true;
+       priv->fs.vlan.filter_disabled = true;
        if (priv->netdev->flags & IFF_PROMISC)
                return;
        mlx5e_add_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_ANY_VID, 0);
@@ -638,7 +282,7 @@ int mlx5e_vlan_rx_add_vid(struct net_device *dev, __always_unused __be16 proto,
 {
        struct mlx5e_priv *priv = netdev_priv(dev);
 
-       set_bit(vid, priv->vlan.active_vlans);
+       set_bit(vid, priv->fs.vlan.active_vlans);
 
        return mlx5e_add_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_MATCH_VID, vid);
 }
@@ -648,7 +292,7 @@ int mlx5e_vlan_rx_kill_vid(struct net_device *dev, __always_unused __be16 proto,
 {
        struct mlx5e_priv *priv = netdev_priv(dev);
 
-       clear_bit(vid, priv->vlan.active_vlans);
+       clear_bit(vid, priv->fs.vlan.active_vlans);
 
        mlx5e_del_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_MATCH_VID, vid);
 
@@ -656,21 +300,21 @@ int mlx5e_vlan_rx_kill_vid(struct net_device *dev, __always_unused __be16 proto,
 }
 
 #define mlx5e_for_each_hash_node(hn, tmp, hash, i) \
-       for (i = 0; i < MLX5E_ETH_ADDR_HASH_SIZE; i++) \
+       for (i = 0; i < MLX5E_L2_ADDR_HASH_SIZE; i++) \
                hlist_for_each_entry_safe(hn, tmp, &hash[i], hlist)
 
-static void mlx5e_execute_action(struct mlx5e_priv *priv,
-                                struct mlx5e_eth_addr_hash_node *hn)
+static void mlx5e_execute_l2_action(struct mlx5e_priv *priv,
+                                   struct mlx5e_l2_hash_node *hn)
 {
        switch (hn->action) {
        case MLX5E_ACTION_ADD:
-               mlx5e_add_eth_addr_rule(priv, &hn->ai, MLX5E_FULLMATCH);
+               mlx5e_add_l2_flow_rule(priv, &hn->ai, MLX5E_FULLMATCH);
                hn->action = MLX5E_ACTION_NONE;
                break;
 
        case MLX5E_ACTION_DEL:
-               mlx5e_del_eth_addr_from_flow_table(priv, &hn->ai);
-               mlx5e_del_eth_addr_from_hash(hn);
+               mlx5e_del_l2_flow_rule(priv, &hn->ai);
+               mlx5e_del_l2_from_hash(hn);
                break;
        }
 }
@@ -682,14 +326,14 @@ static void mlx5e_sync_netdev_addr(struct mlx5e_priv *priv)
 
        netif_addr_lock_bh(netdev);
 
-       mlx5e_add_eth_addr_to_hash(priv->eth_addr.netdev_uc,
-                                  priv->netdev->dev_addr);
+       mlx5e_add_l2_to_hash(priv->fs.l2.netdev_uc,
+                            priv->netdev->dev_addr);
 
        netdev_for_each_uc_addr(ha, netdev)
-               mlx5e_add_eth_addr_to_hash(priv->eth_addr.netdev_uc, ha->addr);
+               mlx5e_add_l2_to_hash(priv->fs.l2.netdev_uc, ha->addr);
 
        netdev_for_each_mc_addr(ha, netdev)
-               mlx5e_add_eth_addr_to_hash(priv->eth_addr.netdev_mc, ha->addr);
+               mlx5e_add_l2_to_hash(priv->fs.l2.netdev_mc, ha->addr);
 
        netif_addr_unlock_bh(netdev);
 }
@@ -699,17 +343,17 @@ static void mlx5e_fill_addr_array(struct mlx5e_priv *priv, int list_type,
 {
        bool is_uc = (list_type == MLX5_NVPRT_LIST_TYPE_UC);
        struct net_device *ndev = priv->netdev;
-       struct mlx5e_eth_addr_hash_node *hn;
+       struct mlx5e_l2_hash_node *hn;
        struct hlist_head *addr_list;
        struct hlist_node *tmp;
        int i = 0;
        int hi;
 
-       addr_list = is_uc ? priv->eth_addr.netdev_uc : priv->eth_addr.netdev_mc;
+       addr_list = is_uc ? priv->fs.l2.netdev_uc : priv->fs.l2.netdev_mc;
 
        if (is_uc) /* Make sure our own address is pushed first */
                ether_addr_copy(addr_array[i++], ndev->dev_addr);
-       else if (priv->eth_addr.broadcast_enabled)
+       else if (priv->fs.l2.broadcast_enabled)
                ether_addr_copy(addr_array[i++], ndev->broadcast);
 
        mlx5e_for_each_hash_node(hn, tmp, addr_list, hi) {
@@ -725,7 +369,7 @@ static void mlx5e_vport_context_update_addr_list(struct mlx5e_priv *priv,
                                                 int list_type)
 {
        bool is_uc = (list_type == MLX5_NVPRT_LIST_TYPE_UC);
-       struct mlx5e_eth_addr_hash_node *hn;
+       struct mlx5e_l2_hash_node *hn;
        u8 (*addr_array)[ETH_ALEN] = NULL;
        struct hlist_head *addr_list;
        struct hlist_node *tmp;
@@ -734,12 +378,12 @@ static void mlx5e_vport_context_update_addr_list(struct mlx5e_priv *priv,
        int err;
        int hi;
 
-       size = is_uc ? 0 : (priv->eth_addr.broadcast_enabled ? 1 : 0);
+       size = is_uc ? 0 : (priv->fs.l2.broadcast_enabled ? 1 : 0);
        max_size = is_uc ?
                1 << MLX5_CAP_GEN(priv->mdev, log_max_current_uc_list) :
                1 << MLX5_CAP_GEN(priv->mdev, log_max_current_mc_list);
 
-       addr_list = is_uc ? priv->eth_addr.netdev_uc : priv->eth_addr.netdev_mc;
+       addr_list = is_uc ? priv->fs.l2.netdev_uc : priv->fs.l2.netdev_mc;
        mlx5e_for_each_hash_node(hn, tmp, addr_list, hi)
                size++;
 
@@ -770,7 +414,7 @@ out:
 
 static void mlx5e_vport_context_update(struct mlx5e_priv *priv)
 {
-       struct mlx5e_eth_addr_db *ea = &priv->eth_addr;
+       struct mlx5e_l2_table *ea = &priv->fs.l2;
 
        mlx5e_vport_context_update_addr_list(priv, MLX5_NVPRT_LIST_TYPE_UC);
        mlx5e_vport_context_update_addr_list(priv, MLX5_NVPRT_LIST_TYPE_MC);
@@ -781,26 +425,26 @@ static void mlx5e_vport_context_update(struct mlx5e_priv *priv)
 
 static void mlx5e_apply_netdev_addr(struct mlx5e_priv *priv)
 {
-       struct mlx5e_eth_addr_hash_node *hn;
+       struct mlx5e_l2_hash_node *hn;
        struct hlist_node *tmp;
        int i;
 
-       mlx5e_for_each_hash_node(hn, tmp, priv->eth_addr.netdev_uc, i)
-               mlx5e_execute_action(priv, hn);
+       mlx5e_for_each_hash_node(hn, tmp, priv->fs.l2.netdev_uc, i)
+               mlx5e_execute_l2_action(priv, hn);
 
-       mlx5e_for_each_hash_node(hn, tmp, priv->eth_addr.netdev_mc, i)
-               mlx5e_execute_action(priv, hn);
+       mlx5e_for_each_hash_node(hn, tmp, priv->fs.l2.netdev_mc, i)
+               mlx5e_execute_l2_action(priv, hn);
 }
 
 static void mlx5e_handle_netdev_addr(struct mlx5e_priv *priv)
 {
-       struct mlx5e_eth_addr_hash_node *hn;
+       struct mlx5e_l2_hash_node *hn;
        struct hlist_node *tmp;
        int i;
 
-       mlx5e_for_each_hash_node(hn, tmp, priv->eth_addr.netdev_uc, i)
+       mlx5e_for_each_hash_node(hn, tmp, priv->fs.l2.netdev_uc, i)
                hn->action = MLX5E_ACTION_DEL;
-       mlx5e_for_each_hash_node(hn, tmp, priv->eth_addr.netdev_mc, i)
+       mlx5e_for_each_hash_node(hn, tmp, priv->fs.l2.netdev_mc, i)
                hn->action = MLX5E_ACTION_DEL;
 
        if (!test_bit(MLX5E_STATE_DESTROYING, &priv->state))
@@ -814,7 +458,7 @@ void mlx5e_set_rx_mode_work(struct work_struct *work)
        struct mlx5e_priv *priv = container_of(work, struct mlx5e_priv,
                                               set_rx_mode_work);
 
-       struct mlx5e_eth_addr_db *ea = &priv->eth_addr;
+       struct mlx5e_l2_table *ea = &priv->fs.l2;
        struct net_device *ndev = priv->netdev;
 
        bool rx_mode_enable   = !test_bit(MLX5E_STATE_DESTROYING, &priv->state);
@@ -830,27 +474,27 @@ void mlx5e_set_rx_mode_work(struct work_struct *work)
        bool disable_broadcast =  ea->broadcast_enabled && !broadcast_enabled;
 
        if (enable_promisc) {
-               mlx5e_add_eth_addr_rule(priv, &ea->promisc, MLX5E_PROMISC);
-               if (!priv->vlan.filter_disabled)
+               mlx5e_add_l2_flow_rule(priv, &ea->promisc, MLX5E_PROMISC);
+               if (!priv->fs.vlan.filter_disabled)
                        mlx5e_add_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_ANY_VID,
                                            0);
        }
        if (enable_allmulti)
-               mlx5e_add_eth_addr_rule(priv, &ea->allmulti, MLX5E_ALLMULTI);
+               mlx5e_add_l2_flow_rule(priv, &ea->allmulti, MLX5E_ALLMULTI);
        if (enable_broadcast)
-               mlx5e_add_eth_addr_rule(priv, &ea->broadcast, MLX5E_FULLMATCH);
+               mlx5e_add_l2_flow_rule(priv, &ea->broadcast, MLX5E_FULLMATCH);
 
        mlx5e_handle_netdev_addr(priv);
 
        if (disable_broadcast)
-               mlx5e_del_eth_addr_from_flow_table(priv, &ea->broadcast);
+               mlx5e_del_l2_flow_rule(priv, &ea->broadcast);
        if (disable_allmulti)
-               mlx5e_del_eth_addr_from_flow_table(priv, &ea->allmulti);
+               mlx5e_del_l2_flow_rule(priv, &ea->allmulti);
        if (disable_promisc) {
-               if (!priv->vlan.filter_disabled)
+               if (!priv->fs.vlan.filter_disabled)
                        mlx5e_del_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_ANY_VID,
                                            0);
-               mlx5e_del_eth_addr_from_flow_table(priv, &ea->promisc);
+               mlx5e_del_l2_flow_rule(priv, &ea->promisc);
        }
 
        ea->promisc_enabled   = promisc_enabled;
@@ -872,224 +516,454 @@ static void mlx5e_destroy_groups(struct mlx5e_flow_table *ft)
        ft->num_groups = 0;
 }
 
-void mlx5e_init_eth_addr(struct mlx5e_priv *priv)
+void mlx5e_init_l2_addr(struct mlx5e_priv *priv)
 {
-       ether_addr_copy(priv->eth_addr.broadcast.addr, priv->netdev->broadcast);
+       ether_addr_copy(priv->fs.l2.broadcast.addr, priv->netdev->broadcast);
 }
 
-#define MLX5E_MAIN_GROUP0_SIZE BIT(3)
-#define MLX5E_MAIN_GROUP1_SIZE BIT(1)
-#define MLX5E_MAIN_GROUP2_SIZE BIT(0)
-#define MLX5E_MAIN_GROUP3_SIZE BIT(14)
-#define MLX5E_MAIN_GROUP4_SIZE BIT(13)
-#define MLX5E_MAIN_GROUP5_SIZE BIT(11)
-#define MLX5E_MAIN_GROUP6_SIZE BIT(2)
-#define MLX5E_MAIN_GROUP7_SIZE BIT(1)
-#define MLX5E_MAIN_GROUP8_SIZE BIT(0)
-#define MLX5E_MAIN_TABLE_SIZE  (MLX5E_MAIN_GROUP0_SIZE +\
-                                MLX5E_MAIN_GROUP1_SIZE +\
-                                MLX5E_MAIN_GROUP2_SIZE +\
-                                MLX5E_MAIN_GROUP3_SIZE +\
-                                MLX5E_MAIN_GROUP4_SIZE +\
-                                MLX5E_MAIN_GROUP5_SIZE +\
-                                MLX5E_MAIN_GROUP6_SIZE +\
-                                MLX5E_MAIN_GROUP7_SIZE +\
-                                MLX5E_MAIN_GROUP8_SIZE)
-
-static int __mlx5e_create_main_groups(struct mlx5e_flow_table *ft, u32 *in,
-                                     int inlen)
+void mlx5e_destroy_flow_table(struct mlx5e_flow_table *ft)
 {
-       u8 *mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
-       u8 *dmac = MLX5_ADDR_OF(create_flow_group_in, in,
-                               match_criteria.outer_headers.dmac_47_16);
+       mlx5e_destroy_groups(ft);
+       kfree(ft->g);
+       mlx5_destroy_flow_table(ft->t);
+       ft->t = NULL;
+}
+
+static void mlx5e_cleanup_ttc_rules(struct mlx5e_ttc_table *ttc)
+{
+       int i;
+
+       for (i = 0; i < MLX5E_NUM_TT; i++) {
+               if (!IS_ERR_OR_NULL(ttc->rules[i])) {
+                       mlx5_del_flow_rule(ttc->rules[i]);
+                       ttc->rules[i] = NULL;
+               }
+       }
+}
+
+static struct {
+       u16 etype;
+       u8 proto;
+} ttc_rules[] = {
+       [MLX5E_TT_IPV4_TCP] = {
+               .etype = ETH_P_IP,
+               .proto = IPPROTO_TCP,
+       },
+       [MLX5E_TT_IPV6_TCP] = {
+               .etype = ETH_P_IPV6,
+               .proto = IPPROTO_TCP,
+       },
+       [MLX5E_TT_IPV4_UDP] = {
+               .etype = ETH_P_IP,
+               .proto = IPPROTO_UDP,
+       },
+       [MLX5E_TT_IPV6_UDP] = {
+               .etype = ETH_P_IPV6,
+               .proto = IPPROTO_UDP,
+       },
+       [MLX5E_TT_IPV4_IPSEC_AH] = {
+               .etype = ETH_P_IP,
+               .proto = IPPROTO_AH,
+       },
+       [MLX5E_TT_IPV6_IPSEC_AH] = {
+               .etype = ETH_P_IPV6,
+               .proto = IPPROTO_AH,
+       },
+       [MLX5E_TT_IPV4_IPSEC_ESP] = {
+               .etype = ETH_P_IP,
+               .proto = IPPROTO_ESP,
+       },
+       [MLX5E_TT_IPV6_IPSEC_ESP] = {
+               .etype = ETH_P_IPV6,
+               .proto = IPPROTO_ESP,
+       },
+       [MLX5E_TT_IPV4] = {
+               .etype = ETH_P_IP,
+               .proto = 0,
+       },
+       [MLX5E_TT_IPV6] = {
+               .etype = ETH_P_IPV6,
+               .proto = 0,
+       },
+       [MLX5E_TT_ANY] = {
+               .etype = 0,
+               .proto = 0,
+       },
+};
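
The ttc_rules[] table above is the whole dispatch policy: a non-zero proto selects an ethertype-plus-ip_protocol match, a non-zero etype alone an L3-only match, and the all-zero MLX5E_TT_ANY entry a catch-all. A standalone sketch of that selection logic (the ethertype and protocol numbers are the standard ones; everything else is illustrative):

    #include <stdio.h>

    struct ttc_key {
            unsigned short etype;
            unsigned char proto;
    };

    static const char *match_kind(struct ttc_key k)
    {
            if (k.proto)
                    return "ethertype + ip_protocol (L4 group)";
            if (k.etype)
                    return "ethertype only (L3 group)";
            return "no criteria (any group)";
    }

    int main(void)
    {
            struct ttc_key tcp4 = { 0x0800, 6 };    /* ETH_P_IP, IPPROTO_TCP */
            struct ttc_key ip6  = { 0x86DD, 0 };    /* ETH_P_IPV6, no L4 match */
            struct ttc_key any  = { 0, 0 };         /* MLX5E_TT_ANY */

            printf("%s\n%s\n%s\n",
                   match_kind(tcp4), match_kind(ip6), match_kind(any));
            return 0;
    }
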
+
+static struct mlx5_flow_rule *mlx5e_generate_ttc_rule(struct mlx5e_priv *priv,
+                                                     struct mlx5_flow_table *ft,
+                                                     struct mlx5_flow_destination *dest,
+                                                     u16 etype,
+                                                     u8 proto)
+{
+       struct mlx5_flow_rule *rule;
+       u8 match_criteria_enable = 0;
+       u32 *match_criteria;
+       u32 *match_value;
+       int err = 0;
+
+       match_value     = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+       match_criteria  = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+       if (!match_value || !match_criteria) {
+               netdev_err(priv->netdev, "%s: alloc failed\n", __func__);
+               err = -ENOMEM;
+               goto out;
+       }
+
+       if (proto) {
+               match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+               MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.ip_protocol);
+               MLX5_SET(fte_match_param, match_value, outer_headers.ip_protocol, proto);
+       }
+       if (etype) {
+               match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+               MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.ethertype);
+               MLX5_SET(fte_match_param, match_value, outer_headers.ethertype, etype);
+       }
+
+       rule = mlx5_add_flow_rule(ft, match_criteria_enable,
+                                 match_criteria, match_value,
+                                 MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
+                                 MLX5_FS_DEFAULT_FLOW_TAG,
+                                 dest);
+       if (IS_ERR(rule)) {
+               err = PTR_ERR(rule);
+               netdev_err(priv->netdev, "%s: add rule failed\n", __func__);
+       }
+out:
+       kvfree(match_criteria);
+       kvfree(match_value);
+       return err ? ERR_PTR(err) : rule;
+}
+
+static int mlx5e_generate_ttc_table_rules(struct mlx5e_priv *priv)
+{
+       struct mlx5_flow_destination dest;
+       struct mlx5e_ttc_table *ttc;
+       struct mlx5_flow_rule **rules;
+       struct mlx5_flow_table *ft;
+       int tt;
        int err;
+
+       ttc = &priv->fs.ttc;
+       ft = ttc->ft.t;
+       rules = ttc->rules;
+
+       dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
+       for (tt = 0; tt < MLX5E_NUM_TT; tt++) {
+               if (tt == MLX5E_TT_ANY)
+                       dest.tir_num = priv->direct_tir[0].tirn;
+               else
+                       dest.tir_num = priv->indir_tirn[tt];
+               rules[tt] = mlx5e_generate_ttc_rule(priv, ft, &dest,
+                                                   ttc_rules[tt].etype,
+                                                   ttc_rules[tt].proto);
+               if (IS_ERR(rules[tt]))
+                       goto del_rules;
+       }
+
+       return 0;
+
+del_rules:
+       err = PTR_ERR(rules[tt]);
+       rules[tt] = NULL;
+       mlx5e_cleanup_ttc_rules(ttc);
+       return err;
+}
+
+#define MLX5E_TTC_NUM_GROUPS   3
+#define MLX5E_TTC_GROUP1_SIZE  BIT(3)
+#define MLX5E_TTC_GROUP2_SIZE  BIT(1)
+#define MLX5E_TTC_GROUP3_SIZE  BIT(0)
+#define MLX5E_TTC_TABLE_SIZE   (MLX5E_TTC_GROUP1_SIZE +\
+                                MLX5E_TTC_GROUP2_SIZE +\
+                                MLX5E_TTC_GROUP3_SIZE)
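
Each group is sized as a power of two, and the table size is simply their sum: BIT(3) + BIT(1) + BIT(0) = 8 + 2 + 1 = 11 entries. That matches the ttc_rules[] table exactly: eight L4 rules (TCP, UDP, AH and ESP, each for IPv4 and IPv6), two L3-only rules (IPv4, IPv6) and one catch-all.
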
+static int mlx5e_create_ttc_table_groups(struct mlx5e_ttc_table *ttc)
+{
+       int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+       struct mlx5e_flow_table *ft = &ttc->ft;
        int ix = 0;
+       u32 *in;
+       int err;
+       u8 *mc;
 
-       memset(in, 0, inlen);
-       MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
-       MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ethertype);
-       MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ip_protocol);
-       MLX5_SET_CFG(in, start_flow_index, ix);
-       ix += MLX5E_MAIN_GROUP0_SIZE;
-       MLX5_SET_CFG(in, end_flow_index, ix - 1);
-       ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
-       if (IS_ERR(ft->g[ft->num_groups]))
-               goto err_destroy_groups;
-       ft->num_groups++;
+       ft->g = kcalloc(MLX5E_TTC_NUM_GROUPS,
+                       sizeof(*ft->g), GFP_KERNEL);
+       if (!ft->g)
+               return -ENOMEM;
+       in = mlx5_vzalloc(inlen);
+       if (!in) {
+               kfree(ft->g);
+               return -ENOMEM;
+       }
 
-       memset(in, 0, inlen);
-       MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+       /* L4 Group */
+       mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
+       MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ip_protocol);
        MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ethertype);
+       MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
        MLX5_SET_CFG(in, start_flow_index, ix);
-       ix += MLX5E_MAIN_GROUP1_SIZE;
+       ix += MLX5E_TTC_GROUP1_SIZE;
        MLX5_SET_CFG(in, end_flow_index, ix - 1);
        ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
        if (IS_ERR(ft->g[ft->num_groups]))
-               goto err_destroy_groups;
+               goto err;
        ft->num_groups++;
 
-       memset(in, 0, inlen);
+       /* L3 Group */
+       MLX5_SET(fte_match_param, mc, outer_headers.ip_protocol, 0);
        MLX5_SET_CFG(in, start_flow_index, ix);
-       ix += MLX5E_MAIN_GROUP2_SIZE;
+       ix += MLX5E_TTC_GROUP2_SIZE;
        MLX5_SET_CFG(in, end_flow_index, ix - 1);
        ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
        if (IS_ERR(ft->g[ft->num_groups]))
-               goto err_destroy_groups;
+               goto err;
        ft->num_groups++;
 
+       /* Any Group */
        memset(in, 0, inlen);
-       MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
-       MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ethertype);
-       MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ip_protocol);
-       eth_broadcast_addr(dmac);
        MLX5_SET_CFG(in, start_flow_index, ix);
-       ix += MLX5E_MAIN_GROUP3_SIZE;
+       ix += MLX5E_TTC_GROUP3_SIZE;
        MLX5_SET_CFG(in, end_flow_index, ix - 1);
        ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
        if (IS_ERR(ft->g[ft->num_groups]))
-               goto err_destroy_groups;
+               goto err;
        ft->num_groups++;
 
-       memset(in, 0, inlen);
-       MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
-       MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ethertype);
-       eth_broadcast_addr(dmac);
-       MLX5_SET_CFG(in, start_flow_index, ix);
-       ix += MLX5E_MAIN_GROUP4_SIZE;
-       MLX5_SET_CFG(in, end_flow_index, ix - 1);
-       ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
-       if (IS_ERR(ft->g[ft->num_groups]))
-               goto err_destroy_groups;
-       ft->num_groups++;
+       kvfree(in);
+       return 0;
 
-       memset(in, 0, inlen);
-       MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
-       eth_broadcast_addr(dmac);
-       MLX5_SET_CFG(in, start_flow_index, ix);
-       ix += MLX5E_MAIN_GROUP5_SIZE;
-       MLX5_SET_CFG(in, end_flow_index, ix - 1);
-       ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
-       if (IS_ERR(ft->g[ft->num_groups]))
-               goto err_destroy_groups;
-       ft->num_groups++;
+err:
+       err = PTR_ERR(ft->g[ft->num_groups]);
+       ft->g[ft->num_groups] = NULL;
+       kvfree(in);
 
-       memset(in, 0, inlen);
-       MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
-       MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ethertype);
-       MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ip_protocol);
-       dmac[0] = 0x01;
+       return err;
+}
+
+static void mlx5e_destroy_ttc_table(struct mlx5e_priv *priv)
+{
+       struct mlx5e_ttc_table *ttc = &priv->fs.ttc;
+
+       mlx5e_cleanup_ttc_rules(ttc);
+       mlx5e_destroy_flow_table(&ttc->ft);
+}
+
+static int mlx5e_create_ttc_table(struct mlx5e_priv *priv)
+{
+       struct mlx5e_ttc_table *ttc = &priv->fs.ttc;
+       struct mlx5e_flow_table *ft = &ttc->ft;
+       int err;
+
+       ft->t = mlx5_create_flow_table(priv->fs.ns, MLX5E_NIC_PRIO,
+                                      MLX5E_TTC_TABLE_SIZE, MLX5E_TTC_FT_LEVEL);
+       if (IS_ERR(ft->t)) {
+               err = PTR_ERR(ft->t);
+               ft->t = NULL;
+               return err;
+       }
+
+       err = mlx5e_create_ttc_table_groups(ttc);
+       if (err)
+               goto err;
+
+       err = mlx5e_generate_ttc_table_rules(priv);
+       if (err)
+               goto err;
+
+       return 0;
+err:
+       mlx5e_destroy_flow_table(ft);
+       return err;
+}
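
mlx5e_create_ttc_table() builds in three steps (table, groups, rules) but needs only one error label, because mlx5e_destroy_flow_table() already tears down whatever groups exist. The goto-unwind idiom used throughout this file, in isolation (a sketch, not driver code):

    #include <stdio.h>

    /* Each successful step gains a teardown label; a later failure
     * jumps to the label that undoes everything created so far. */
    static int create_all(int fail_at)
    {
            printf("create table\n");
            if (fail_at == 1)
                    return -1;              /* nothing to undo yet */

            printf("create groups\n");
            if (fail_at == 2)
                    goto err_table;

            printf("create rules\n");
            if (fail_at == 3)
                    goto err_groups;

            return 0;

    err_groups:
            printf("destroy groups\n");
    err_table:
            printf("destroy table\n");
            return -1;
    }

    int main(void)
    {
            create_all(3);  /* fails last, unwinds both labels */
            return 0;
    }
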
+
+static void mlx5e_del_l2_flow_rule(struct mlx5e_priv *priv,
+                                  struct mlx5e_l2_rule *ai)
+{
+       if (!IS_ERR_OR_NULL(ai->rule)) {
+               mlx5_del_flow_rule(ai->rule);
+               ai->rule = NULL;
+       }
+}
+
+static int mlx5e_add_l2_flow_rule(struct mlx5e_priv *priv,
+                                 struct mlx5e_l2_rule *ai, int type)
+{
+       struct mlx5_flow_table *ft = priv->fs.l2.ft.t;
+       struct mlx5_flow_destination dest;
+       u8 match_criteria_enable = 0;
+       u32 *match_criteria;
+       u32 *match_value;
+       int err = 0;
+       u8 *mc_dmac;
+       u8 *mv_dmac;
+
+       match_value    = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+       match_criteria = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+       if (!match_value || !match_criteria) {
+               netdev_err(priv->netdev, "%s: alloc failed\n", __func__);
+               err = -ENOMEM;
+               goto add_l2_rule_out;
+       }
+
+       mc_dmac = MLX5_ADDR_OF(fte_match_param, match_criteria,
+                              outer_headers.dmac_47_16);
+       mv_dmac = MLX5_ADDR_OF(fte_match_param, match_value,
+                              outer_headers.dmac_47_16);
+
+       dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+       dest.ft = priv->fs.ttc.ft.t;
+
+       switch (type) {
+       case MLX5E_FULLMATCH:
+               match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+               eth_broadcast_addr(mc_dmac);
+               ether_addr_copy(mv_dmac, ai->addr);
+               break;
+
+       case MLX5E_ALLMULTI:
+               match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+               mc_dmac[0] = 0x01;
+               mv_dmac[0] = 0x01;
+               break;
+
+       case MLX5E_PROMISC:
+               break;
+       }
+
+       ai->rule = mlx5_add_flow_rule(ft, match_criteria_enable, match_criteria,
+                                     match_value,
+                                     MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
+                                     MLX5_FS_DEFAULT_FLOW_TAG, &dest);
+       if (IS_ERR(ai->rule)) {
+               netdev_err(priv->netdev, "%s: add l2 rule(mac:%pM) failed\n",
+                          __func__, mv_dmac);
+               err = PTR_ERR(ai->rule);
+               ai->rule = NULL;
+       }
+
+add_l2_rule_out:
+       kvfree(match_criteria);
+       kvfree(match_value);
+
+       return err;
+}
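
The three rule types above differ only in their DMAC mask/value pair: MLX5E_FULLMATCH uses an all-ones mask with the exact address, MLX5E_ALLMULTI masks just the multicast bit, and MLX5E_PROMISC sets no criteria at all. A self-contained sketch of those semantics (an illustration of masked matching, not the device's matcher):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* A DMAC matches when (dmac & mask) == (value & mask) bytewise. */
    static bool dmac_match(const uint8_t *dmac, const uint8_t *mask,
                           const uint8_t *value)
    {
            for (int i = 0; i < 6; i++)
                    if ((dmac[i] & mask[i]) != (value[i] & mask[i]))
                            return false;
            return true;
    }

    int main(void)
    {
            uint8_t dmac[6]   = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 }; /* multicast */
            uint8_t full[6]   = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }; /* FULLMATCH */
            uint8_t mc_bit[6] = { 0x01 };   /* ALLMULTI: multicast bit only */
            uint8_t none[6]   = { 0x00 };   /* PROMISC: match everything */

            printf("fullmatch: %d, allmulti: %d, promisc: %d\n",
                   dmac_match(dmac, full, dmac),
                   dmac_match(dmac, mc_bit, mc_bit),
                   dmac_match(dmac, none, none));
            return 0;
    }
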
+
+#define MLX5E_NUM_L2_GROUPS       3
+#define MLX5E_L2_GROUP1_SIZE      BIT(0)
+#define MLX5E_L2_GROUP2_SIZE      BIT(15)
+#define MLX5E_L2_GROUP3_SIZE      BIT(0)
+#define MLX5E_L2_TABLE_SIZE       (MLX5E_L2_GROUP1_SIZE +\
+                                   MLX5E_L2_GROUP2_SIZE +\
+                                   MLX5E_L2_GROUP3_SIZE)
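
Same sizing arithmetic as the TTC table: BIT(0) + BIT(15) + BIT(0) = 1 + 32768 + 1 = 32770 entries, i.e. one promiscuous rule, up to 32K exact-DMAC (full match) rules and one allmulti rule, mirroring the three flow groups created below.
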
+static int mlx5e_create_l2_table_groups(struct mlx5e_l2_table *l2_table)
+{
+       int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+       struct mlx5e_flow_table *ft = &l2_table->ft;
+       int ix = 0;
+       u8 *mc_dmac;
+       u32 *in;
+       int err;
+       u8 *mc;
+
+       ft->g = kcalloc(MLX5E_NUM_L2_GROUPS, sizeof(*ft->g), GFP_KERNEL);
+       if (!ft->g)
+               return -ENOMEM;
+       in = mlx5_vzalloc(inlen);
+       if (!in) {
+               kfree(ft->g);
+               return -ENOMEM;
+       }
+
+       mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
+       mc_dmac = MLX5_ADDR_OF(fte_match_param, mc,
+                              outer_headers.dmac_47_16);
+       /* Flow Group for promiscuous */
        MLX5_SET_CFG(in, start_flow_index, ix);
-       ix += MLX5E_MAIN_GROUP6_SIZE;
+       ix += MLX5E_L2_GROUP1_SIZE;
        MLX5_SET_CFG(in, end_flow_index, ix - 1);
        ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
        if (IS_ERR(ft->g[ft->num_groups]))
                goto err_destroy_groups;
        ft->num_groups++;
 
-       memset(in, 0, inlen);
+       /* Flow Group for full match */
+       eth_broadcast_addr(mc_dmac);
        MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
-       MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ethertype);
-       dmac[0] = 0x01;
        MLX5_SET_CFG(in, start_flow_index, ix);
-       ix += MLX5E_MAIN_GROUP7_SIZE;
+       ix += MLX5E_L2_GROUP2_SIZE;
        MLX5_SET_CFG(in, end_flow_index, ix - 1);
        ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
        if (IS_ERR(ft->g[ft->num_groups]))
                goto err_destroy_groups;
        ft->num_groups++;
 
-       memset(in, 0, inlen);
-       MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
-       dmac[0] = 0x01;
+       /* Flow Group for allmulti */
+       eth_zero_addr(mc_dmac);
+       mc_dmac[0] = 0x01;
        MLX5_SET_CFG(in, start_flow_index, ix);
-       ix += MLX5E_MAIN_GROUP8_SIZE;
+       ix += MLX5E_L2_GROUP3_SIZE;
        MLX5_SET_CFG(in, end_flow_index, ix - 1);
        ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
        if (IS_ERR(ft->g[ft->num_groups]))
                goto err_destroy_groups;
        ft->num_groups++;
 
+       kvfree(in);
        return 0;
 
 err_destroy_groups:
        err = PTR_ERR(ft->g[ft->num_groups]);
        ft->g[ft->num_groups] = NULL;
        mlx5e_destroy_groups(ft);
+       kvfree(in);
 
        return err;
 }
 
-static int mlx5e_create_main_groups(struct mlx5e_flow_table *ft)
+static void mlx5e_destroy_l2_table(struct mlx5e_priv *priv)
 {
-       u32 *in;
-       int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
-       int err;
-
-       in = mlx5_vzalloc(inlen);
-       if (!in)
-               return -ENOMEM;
-
-       err = __mlx5e_create_main_groups(ft, in, inlen);
-
-       kvfree(in);
-       return err;
+       mlx5e_destroy_flow_table(&priv->fs.l2.ft);
 }
 
-static int mlx5e_create_main_flow_table(struct mlx5e_priv *priv)
+static int mlx5e_create_l2_table(struct mlx5e_priv *priv)
 {
-       struct mlx5e_flow_table *ft = &priv->fts.main;
+       struct mlx5e_l2_table *l2_table = &priv->fs.l2;
+       struct mlx5e_flow_table *ft = &l2_table->ft;
        int err;
 
        ft->num_groups = 0;
-       ft->t = mlx5_create_flow_table(priv->fts.ns, 1, MLX5E_MAIN_TABLE_SIZE);
+       ft->t = mlx5_create_flow_table(priv->fs.ns, MLX5E_NIC_PRIO,
+                                      MLX5E_L2_TABLE_SIZE, MLX5E_L2_FT_LEVEL);
 
        if (IS_ERR(ft->t)) {
                err = PTR_ERR(ft->t);
                ft->t = NULL;
                return err;
        }
-       ft->g = kcalloc(MLX5E_NUM_MAIN_GROUPS, sizeof(*ft->g), GFP_KERNEL);
-       if (!ft->g) {
-               err = -ENOMEM;
-               goto err_destroy_main_flow_table;
-       }
 
-       err = mlx5e_create_main_groups(ft);
+       err = mlx5e_create_l2_table_groups(l2_table);
        if (err)
-               goto err_free_g;
-       return 0;
+               goto err_destroy_flow_table;
 
-err_free_g:
-       kfree(ft->g);
+       return 0;
 
-err_destroy_main_flow_table:
+err_destroy_flow_table:
        mlx5_destroy_flow_table(ft->t);
        ft->t = NULL;
 
        return err;
 }
 
-static void mlx5e_destroy_flow_table(struct mlx5e_flow_table *ft)
-{
-       mlx5e_destroy_groups(ft);
-       kfree(ft->g);
-       mlx5_destroy_flow_table(ft->t);
-       ft->t = NULL;
-}
-
-static void mlx5e_destroy_main_flow_table(struct mlx5e_priv *priv)
-{
-       mlx5e_destroy_flow_table(&priv->fts.main);
-}
-
 #define MLX5E_NUM_VLAN_GROUPS  2
 #define MLX5E_VLAN_GROUP0_SIZE BIT(12)
 #define MLX5E_VLAN_GROUP1_SIZE BIT(1)
 #define MLX5E_VLAN_TABLE_SIZE  (MLX5E_VLAN_GROUP0_SIZE +\
                                 MLX5E_VLAN_GROUP1_SIZE)
 
-static int __mlx5e_create_vlan_groups(struct mlx5e_flow_table *ft, u32 *in,
-                                     int inlen)
+static int __mlx5e_create_vlan_table_groups(struct mlx5e_flow_table *ft, u32 *in,
+                                           int inlen)
 {
        int err;
        int ix = 0;
@@ -1128,7 +1002,7 @@ err_destroy_groups:
        return err;
 }
 
-static int mlx5e_create_vlan_groups(struct mlx5e_flow_table *ft)
+static int mlx5e_create_vlan_table_groups(struct mlx5e_flow_table *ft)
 {
        u32 *in;
        int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
@@ -1138,19 +1012,20 @@ static int mlx5e_create_vlan_groups(struct mlx5e_flow_table *ft)
        if (!in)
                return -ENOMEM;
 
-       err = __mlx5e_create_vlan_groups(ft, in, inlen);
+       err = __mlx5e_create_vlan_table_groups(ft, in, inlen);
 
        kvfree(in);
        return err;
 }
 
-static int mlx5e_create_vlan_flow_table(struct mlx5e_priv *priv)
+static int mlx5e_create_vlan_table(struct mlx5e_priv *priv)
 {
-       struct mlx5e_flow_table *ft = &priv->fts.vlan;
+       struct mlx5e_flow_table *ft = &priv->fs.vlan.ft;
        int err;
 
        ft->num_groups = 0;
-       ft->t = mlx5_create_flow_table(priv->fts.ns, 1, MLX5E_VLAN_TABLE_SIZE);
+       ft->t = mlx5_create_flow_table(priv->fs.ns, MLX5E_NIC_PRIO,
+                                      MLX5E_VLAN_TABLE_SIZE, MLX5E_VLAN_FT_LEVEL);
 
        if (IS_ERR(ft->t)) {
                err = PTR_ERR(ft->t);
@@ -1160,65 +1035,90 @@ static int mlx5e_create_vlan_flow_table(struct mlx5e_priv *priv)
        ft->g = kcalloc(MLX5E_NUM_VLAN_GROUPS, sizeof(*ft->g), GFP_KERNEL);
        if (!ft->g) {
                err = -ENOMEM;
-               goto err_destroy_vlan_flow_table;
+               goto err_destroy_vlan_table;
        }
 
-       err = mlx5e_create_vlan_groups(ft);
+       err = mlx5e_create_vlan_table_groups(ft);
        if (err)
                goto err_free_g;
 
+       err = mlx5e_add_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_UNTAGGED, 0);
+       if (err)
+               goto err_destroy_vlan_flow_groups;
+
        return 0;
 
+err_destroy_vlan_flow_groups:
+       mlx5e_destroy_groups(ft);
 err_free_g:
        kfree(ft->g);
-
-err_destroy_vlan_flow_table:
+err_destroy_vlan_table:
        mlx5_destroy_flow_table(ft->t);
        ft->t = NULL;
 
        return err;
 }
 
-static void mlx5e_destroy_vlan_flow_table(struct mlx5e_priv *priv)
+static void mlx5e_destroy_vlan_table(struct mlx5e_priv *priv)
 {
-       mlx5e_destroy_flow_table(&priv->fts.vlan);
+       mlx5e_destroy_flow_table(&priv->fs.vlan.ft);
 }
 
-int mlx5e_create_flow_tables(struct mlx5e_priv *priv)
+int mlx5e_create_flow_steering(struct mlx5e_priv *priv)
 {
        int err;
 
-       priv->fts.ns = mlx5_get_flow_namespace(priv->mdev,
+       priv->fs.ns = mlx5_get_flow_namespace(priv->mdev,
                                               MLX5_FLOW_NAMESPACE_KERNEL);
 
-       if (!priv->fts.ns)
+       if (!priv->fs.ns)
                return -EINVAL;
 
-       err = mlx5e_create_vlan_flow_table(priv);
-       if (err)
-               return err;
+       err = mlx5e_arfs_create_tables(priv);
+       if (err) {
+               netdev_err(priv->netdev, "Failed to create arfs tables, err=%d\n",
+                          err);
+               priv->netdev->hw_features &= ~NETIF_F_NTUPLE;
+       }
 
-       err = mlx5e_create_main_flow_table(priv);
-       if (err)
-               goto err_destroy_vlan_flow_table;
+       err = mlx5e_create_ttc_table(priv);
+       if (err) {
+               netdev_err(priv->netdev, "Failed to create ttc table, err=%d\n",
+                          err);
+               goto err_destroy_arfs_tables;
+       }
 
-       err = mlx5e_add_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_UNTAGGED, 0);
-       if (err)
-               goto err_destroy_main_flow_table;
+       err = mlx5e_create_l2_table(priv);
+       if (err) {
+               netdev_err(priv->netdev, "Failed to create l2 table, err=%d\n",
+                          err);
+               goto err_destroy_ttc_table;
+       }
+
+       err = mlx5e_create_vlan_table(priv);
+       if (err) {
+               netdev_err(priv->netdev, "Failed to create vlan table, err=%d\n",
+                          err);
+               goto err_destroy_l2_table;
+       }
 
        return 0;
 
-err_destroy_main_flow_table:
-       mlx5e_destroy_main_flow_table(priv);
-err_destroy_vlan_flow_table:
-       mlx5e_destroy_vlan_flow_table(priv);
+err_destroy_l2_table:
+       mlx5e_destroy_l2_table(priv);
+err_destroy_ttc_table:
+       mlx5e_destroy_ttc_table(priv);
+err_destroy_arfs_tables:
+       mlx5e_arfs_destroy_tables(priv);
 
        return err;
 }
 
-void mlx5e_destroy_flow_tables(struct mlx5e_priv *priv)
+void mlx5e_destroy_flow_steering(struct mlx5e_priv *priv)
 {
        mlx5e_del_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_UNTAGGED, 0);
-       mlx5e_destroy_main_flow_table(priv);
-       mlx5e_destroy_vlan_flow_table(priv);
+       mlx5e_destroy_vlan_table(priv);
+       mlx5e_destroy_l2_table(priv);
+       mlx5e_destroy_ttc_table(priv);
+       mlx5e_arfs_destroy_tables(priv);
 }
index d485d1e..1c70e51 100644
@@ -91,96 +91,15 @@ static void mlx5e_update_carrier_work(struct work_struct *work)
        mutex_unlock(&priv->state_lock);
 }
 
-static void mlx5e_update_pport_counters(struct mlx5e_priv *priv)
-{
-       struct mlx5_core_dev *mdev = priv->mdev;
-       struct mlx5e_pport_stats *s = &priv->stats.pport;
-       u32 *in;
-       u32 *out;
-       int sz = MLX5_ST_SZ_BYTES(ppcnt_reg);
-
-       in  = mlx5_vzalloc(sz);
-       out = mlx5_vzalloc(sz);
-       if (!in || !out)
-               goto free_out;
-
-       MLX5_SET(ppcnt_reg, in, local_port, 1);
-
-       MLX5_SET(ppcnt_reg, in, grp, MLX5_IEEE_802_3_COUNTERS_GROUP);
-       mlx5_core_access_reg(mdev, in, sz, out,
-                            sz, MLX5_REG_PPCNT, 0, 0);
-       memcpy(s->IEEE_802_3_counters,
-              MLX5_ADDR_OF(ppcnt_reg, out, counter_set),
-              sizeof(s->IEEE_802_3_counters));
-
-       MLX5_SET(ppcnt_reg, in, grp, MLX5_RFC_2863_COUNTERS_GROUP);
-       mlx5_core_access_reg(mdev, in, sz, out,
-                            sz, MLX5_REG_PPCNT, 0, 0);
-       memcpy(s->RFC_2863_counters,
-              MLX5_ADDR_OF(ppcnt_reg, out, counter_set),
-              sizeof(s->RFC_2863_counters));
-
-       MLX5_SET(ppcnt_reg, in, grp, MLX5_RFC_2819_COUNTERS_GROUP);
-       mlx5_core_access_reg(mdev, in, sz, out,
-                            sz, MLX5_REG_PPCNT, 0, 0);
-       memcpy(s->RFC_2819_counters,
-              MLX5_ADDR_OF(ppcnt_reg, out, counter_set),
-              sizeof(s->RFC_2819_counters));
-
-free_out:
-       kvfree(in);
-       kvfree(out);
-}
-
-static void mlx5e_update_q_counter(struct mlx5e_priv *priv)
-{
-       struct mlx5e_qcounter_stats *qcnt = &priv->stats.qcnt;
-
-       if (!priv->q_counter)
-               return;
-
-       mlx5_core_query_out_of_buffer(priv->mdev, priv->q_counter,
-                                     &qcnt->rx_out_of_buffer);
-}
-
-void mlx5e_update_stats(struct mlx5e_priv *priv)
+static void mlx5e_update_sw_counters(struct mlx5e_priv *priv)
 {
-       struct mlx5_core_dev *mdev = priv->mdev;
-       struct mlx5e_vport_stats *s = &priv->stats.vport;
+       struct mlx5e_sw_stats *s = &priv->stats.sw;
        struct mlx5e_rq_stats *rq_stats;
        struct mlx5e_sq_stats *sq_stats;
-       u32 in[MLX5_ST_SZ_DW(query_vport_counter_in)];
-       u32 *out;
-       int outlen = MLX5_ST_SZ_BYTES(query_vport_counter_out);
-       u64 tx_offload_none;
+       u64 tx_offload_none = 0;
        int i, j;
 
-       out = mlx5_vzalloc(outlen);
-       if (!out)
-               return;
-
-       /* Collect first the SW counters and then HW for consistency */
-       s->rx_packets           = 0;
-       s->rx_bytes             = 0;
-       s->tx_packets           = 0;
-       s->tx_bytes             = 0;
-       s->tso_packets          = 0;
-       s->tso_bytes            = 0;
-       s->tso_inner_packets    = 0;
-       s->tso_inner_bytes      = 0;
-       s->tx_queue_stopped     = 0;
-       s->tx_queue_wake        = 0;
-       s->tx_queue_dropped     = 0;
-       s->tx_csum_inner        = 0;
-       tx_offload_none         = 0;
-       s->lro_packets          = 0;
-       s->lro_bytes            = 0;
-       s->rx_csum_none         = 0;
-       s->rx_csum_sw           = 0;
-       s->rx_wqe_err           = 0;
-       s->rx_mpwqe_filler      = 0;
-       s->rx_mpwqe_frag        = 0;
-       s->rx_buff_alloc_err    = 0;
+       memset(s, 0, sizeof(*s));
        for (i = 0; i < priv->params.num_channels; i++) {
                rq_stats = &priv->channel[i]->rq.stats;
 
@@ -190,6 +109,7 @@ void mlx5e_update_stats(struct mlx5e_priv *priv)
                s->lro_bytes    += rq_stats->lro_bytes;
                s->rx_csum_none += rq_stats->csum_none;
                s->rx_csum_sw   += rq_stats->csum_sw;
+               s->rx_csum_inner += rq_stats->csum_inner;
                s->rx_wqe_err   += rq_stats->wqe_err;
                s->rx_mpwqe_filler += rq_stats->mpwqe_filler;
                s->rx_mpwqe_frag   += rq_stats->mpwqe_frag;
@@ -212,7 +132,23 @@ void mlx5e_update_stats(struct mlx5e_priv *priv)
                }
        }
 
-       /* HW counters */
+       /* Update calculated offload counters */
+       s->tx_csum_offload = s->tx_packets - tx_offload_none - s->tx_csum_inner;
+       s->rx_csum_good    = s->rx_packets - s->rx_csum_none -
+                            s->rx_csum_sw;
+
+       s->link_down_events = MLX5_GET(ppcnt_reg,
+                               priv->stats.pport.phy_counters,
+                               counter_set.phys_layer_cntrs.link_down_events);
+}
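
The calculated counters are plain complements of the per-queue sums: a transmitted packet counted neither in tx_offload_none nor in tx_csum_inner must have used regular checksum offload, so tx_csum_offload = tx_packets - tx_offload_none - tx_csum_inner; with illustrative numbers, 1000 packets with 100 non-offloaded and 50 inner-checksummed leave 850 offloaded. rx_csum_good is derived the same way on the receive side.
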
+
+static void mlx5e_update_vport_counters(struct mlx5e_priv *priv)
+{
+       int outlen = MLX5_ST_SZ_BYTES(query_vport_counter_out);
+       u32 *out = (u32 *)priv->stats.vport.query_vport_out;
+       u32 in[MLX5_ST_SZ_DW(query_vport_counter_in)];
+       struct mlx5_core_dev *mdev = priv->mdev;
+
        memset(in, 0, sizeof(in));
 
        MLX5_SET(query_vport_counter_in, in, opcode,
@@ -222,58 +158,69 @@ void mlx5e_update_stats(struct mlx5e_priv *priv)
 
        memset(out, 0, outlen);
 
-       if (mlx5_cmd_exec(mdev, in, sizeof(in), out, outlen))
+       mlx5_cmd_exec(mdev, in, sizeof(in), out, outlen);
+}
+
+static void mlx5e_update_pport_counters(struct mlx5e_priv *priv)
+{
+       struct mlx5e_pport_stats *pstats = &priv->stats.pport;
+       struct mlx5_core_dev *mdev = priv->mdev;
+       int sz = MLX5_ST_SZ_BYTES(ppcnt_reg);
+       int prio;
+       void *out;
+       u32 *in;
+
+       in = mlx5_vzalloc(sz);
+       if (!in)
                goto free_out;
 
-#define MLX5_GET_CTR(p, x) \
-       MLX5_GET64(query_vport_counter_out, p, x)
-
-       s->rx_error_packets     =
-               MLX5_GET_CTR(out, received_errors.packets);
-       s->rx_error_bytes       =
-               MLX5_GET_CTR(out, received_errors.octets);
-       s->tx_error_packets     =
-               MLX5_GET_CTR(out, transmit_errors.packets);
-       s->tx_error_bytes       =
-               MLX5_GET_CTR(out, transmit_errors.octets);
-
-       s->rx_unicast_packets   =
-               MLX5_GET_CTR(out, received_eth_unicast.packets);
-       s->rx_unicast_bytes     =
-               MLX5_GET_CTR(out, received_eth_unicast.octets);
-       s->tx_unicast_packets   =
-               MLX5_GET_CTR(out, transmitted_eth_unicast.packets);
-       s->tx_unicast_bytes     =
-               MLX5_GET_CTR(out, transmitted_eth_unicast.octets);
-
-       s->rx_multicast_packets =
-               MLX5_GET_CTR(out, received_eth_multicast.packets);
-       s->rx_multicast_bytes   =
-               MLX5_GET_CTR(out, received_eth_multicast.octets);
-       s->tx_multicast_packets =
-               MLX5_GET_CTR(out, transmitted_eth_multicast.packets);
-       s->tx_multicast_bytes   =
-               MLX5_GET_CTR(out, transmitted_eth_multicast.octets);
-
-       s->rx_broadcast_packets =
-               MLX5_GET_CTR(out, received_eth_broadcast.packets);
-       s->rx_broadcast_bytes   =
-               MLX5_GET_CTR(out, received_eth_broadcast.octets);
-       s->tx_broadcast_packets =
-               MLX5_GET_CTR(out, transmitted_eth_broadcast.packets);
-       s->tx_broadcast_bytes   =
-               MLX5_GET_CTR(out, transmitted_eth_broadcast.octets);
+       MLX5_SET(ppcnt_reg, in, local_port, 1);
 
-       /* Update calculated offload counters */
-       s->tx_csum_offload = s->tx_packets - tx_offload_none - s->tx_csum_inner;
-       s->rx_csum_good    = s->rx_packets - s->rx_csum_none -
-                              s->rx_csum_sw;
+       out = pstats->IEEE_802_3_counters;
+       MLX5_SET(ppcnt_reg, in, grp, MLX5_IEEE_802_3_COUNTERS_GROUP);
+       mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0);
 
-       mlx5e_update_pport_counters(priv);
-       mlx5e_update_q_counter(priv);
+       out = pstats->RFC_2863_counters;
+       MLX5_SET(ppcnt_reg, in, grp, MLX5_RFC_2863_COUNTERS_GROUP);
+       mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0);
+
+       out = pstats->RFC_2819_counters;
+       MLX5_SET(ppcnt_reg, in, grp, MLX5_RFC_2819_COUNTERS_GROUP);
+       mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0);
+
+       out = pstats->phy_counters;
+       MLX5_SET(ppcnt_reg, in, grp, MLX5_PHYSICAL_LAYER_COUNTERS_GROUP);
+       mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0);
+
+       MLX5_SET(ppcnt_reg, in, grp, MLX5_PER_PRIORITY_COUNTERS_GROUP);
+       for (prio = 0; prio < NUM_PPORT_PRIO; prio++) {
+               out = pstats->per_prio_counters[prio];
+               MLX5_SET(ppcnt_reg, in, prio_tc, prio);
+               mlx5_core_access_reg(mdev, in, sz, out, sz,
+                                    MLX5_REG_PPCNT, 0, 0);
+       }
 
 free_out:
-       kvfree(out);
+       kvfree(in);
+}
+
+static void mlx5e_update_q_counter(struct mlx5e_priv *priv)
+{
+       struct mlx5e_qcounter_stats *qcnt = &priv->stats.qcnt;
+
+       if (!priv->q_counter)
+               return;
+
+       mlx5_core_query_out_of_buffer(priv->mdev, priv->q_counter,
+                                     &qcnt->rx_out_of_buffer);
+}
+
+void mlx5e_update_stats(struct mlx5e_priv *priv)
+{
+       mlx5e_update_q_counter(priv);
+       mlx5e_update_vport_counters(priv);
+       mlx5e_update_pport_counters(priv);
+       mlx5e_update_sw_counters(priv);
 }
 
 static void mlx5e_update_stats_work(struct work_struct *work)
@@ -284,9 +231,8 @@ static void mlx5e_update_stats_work(struct work_struct *work)
        mutex_lock(&priv->state_lock);
        if (test_bit(MLX5E_STATE_OPENED, &priv->state)) {
                mlx5e_update_stats(priv);
-               schedule_delayed_work(dwork,
-                                     msecs_to_jiffies(
-                                             MLX5E_UPDATE_STATS_INTERVAL));
+               queue_delayed_work(priv->wq, dwork,
+                                  msecs_to_jiffies(MLX5E_UPDATE_STATS_INTERVAL));
        }
        mutex_unlock(&priv->state_lock);
 }
@@ -302,7 +248,7 @@ static void mlx5e_async_event(struct mlx5_core_dev *mdev, void *vpriv,
        switch (event) {
        case MLX5_DEV_EVENT_PORT_UP:
        case MLX5_DEV_EVENT_PORT_DOWN:
-               schedule_work(&priv->update_carrier_work);
+               queue_work(priv->wq, &priv->update_carrier_work);
                break;
 
        default:
@@ -442,6 +388,7 @@ static int mlx5e_enable_rq(struct mlx5e_rq *rq, struct mlx5e_rq_param *param)
        MLX5_SET(rqc,  rqc, cqn,                rq->cq.mcq.cqn);
        MLX5_SET(rqc,  rqc, state,              MLX5_RQC_STATE_RST);
        MLX5_SET(rqc,  rqc, flush_in_error_en,  1);
+       MLX5_SET(rqc,  rqc, vsd, priv->params.vlan_strip_disable);
        MLX5_SET(wq,   wq,  log_wq_pg_sz,       rq->wq_ctrl.buf.page_shift -
                                                MLX5_ADAPTER_PAGE_SHIFT);
        MLX5_SET64(wq, wq,  dbr_addr,           rq->wq_ctrl.db.dma);
@@ -456,7 +403,8 @@ static int mlx5e_enable_rq(struct mlx5e_rq *rq, struct mlx5e_rq_param *param)
        return err;
 }
 
-static int mlx5e_modify_rq(struct mlx5e_rq *rq, int curr_state, int next_state)
+static int mlx5e_modify_rq_state(struct mlx5e_rq *rq, int curr_state,
+                                int next_state)
 {
        struct mlx5e_channel *c = rq->channel;
        struct mlx5e_priv *priv = c->priv;
@@ -484,6 +432,36 @@ static int mlx5e_modify_rq(struct mlx5e_rq *rq, int curr_state, int next_state)
        return err;
 }
 
+static int mlx5e_modify_rq_vsd(struct mlx5e_rq *rq, bool vsd)
+{
+       struct mlx5e_channel *c = rq->channel;
+       struct mlx5e_priv *priv = c->priv;
+       struct mlx5_core_dev *mdev = priv->mdev;
+
+       void *in;
+       void *rqc;
+       int inlen;
+       int err;
+
+       inlen = MLX5_ST_SZ_BYTES(modify_rq_in);
+       in = mlx5_vzalloc(inlen);
+       if (!in)
+               return -ENOMEM;
+
+       rqc = MLX5_ADDR_OF(modify_rq_in, in, ctx);
+
+       MLX5_SET(modify_rq_in, in, rq_state, MLX5_RQC_STATE_RDY);
+       MLX5_SET64(modify_rq_in, in, modify_bitmask, MLX5_RQ_BITMASK_VSD);
+       MLX5_SET(rqc, rqc, vsd, vsd);
+       MLX5_SET(rqc, rqc, state, MLX5_RQC_STATE_RDY);
+
+       err = mlx5_core_modify_rq(mdev, rq->rqn, in, inlen);
+
+       kvfree(in);
+
+       return err;
+}
+
 static void mlx5e_disable_rq(struct mlx5e_rq *rq)
 {
        mlx5_core_destroy_rq(rq->priv->mdev, rq->rqn);
@@ -522,7 +500,7 @@ static int mlx5e_open_rq(struct mlx5e_channel *c,
        if (err)
                goto err_destroy_rq;
 
-       err = mlx5e_modify_rq(rq, MLX5_RQC_STATE_RST, MLX5_RQC_STATE_RDY);
+       err = mlx5e_modify_rq_state(rq, MLX5_RQC_STATE_RST, MLX5_RQC_STATE_RDY);
        if (err)
                goto err_disable_rq;
 
@@ -547,7 +525,7 @@ static void mlx5e_close_rq(struct mlx5e_rq *rq)
        clear_bit(MLX5E_RQ_STATE_POST_WQES_ENABLE, &rq->state);
        napi_synchronize(&rq->channel->napi); /* prevent mlx5e_post_rx_wqes */
 
-       mlx5e_modify_rq(rq, MLX5_RQC_STATE_RDY, MLX5_RQC_STATE_ERR);
+       mlx5e_modify_rq_state(rq, MLX5_RQC_STATE_RDY, MLX5_RQC_STATE_ERR);
        while (!mlx5_wq_ll_is_empty(&rq->wq))
                msleep(20);
 
@@ -1266,13 +1244,10 @@ static void mlx5e_build_icosq_param(struct mlx5e_priv *priv,
        param->icosq = true;
 }
 
-static void mlx5e_build_channel_param(struct mlx5e_priv *priv,
-                                     struct mlx5e_channel_param *cparam)
+static void mlx5e_build_channel_param(struct mlx5e_priv *priv, struct mlx5e_channel_param *cparam)
 {
        u8 icosq_log_wq_sz = MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE;
 
-       memset(cparam, 0, sizeof(*cparam));
-
        mlx5e_build_rq_param(priv, &cparam->rq);
        mlx5e_build_sq_param(priv, &cparam->sq);
        mlx5e_build_icosq_param(priv, &cparam->icosq, icosq_log_wq_sz);
@@ -1283,7 +1258,7 @@ static void mlx5e_build_channel_param(struct mlx5e_priv *priv,
 
 static int mlx5e_open_channels(struct mlx5e_priv *priv)
 {
-       struct mlx5e_channel_param cparam;
+       struct mlx5e_channel_param *cparam;
        int nch = priv->params.num_channels;
        int err = -ENOMEM;
        int i;
@@ -1295,12 +1270,15 @@ static int mlx5e_open_channels(struct mlx5e_priv *priv)
        priv->txq_to_sq_map = kcalloc(nch * priv->params.num_tc,
                                      sizeof(struct mlx5e_sq *), GFP_KERNEL);
 
-       if (!priv->channel || !priv->txq_to_sq_map)
+       cparam = kzalloc(sizeof(struct mlx5e_channel_param), GFP_KERNEL);
+
+       if (!priv->channel || !priv->txq_to_sq_map || !cparam)
                goto err_free_txq_to_sq_map;
 
-       mlx5e_build_channel_param(priv, &cparam);
+       mlx5e_build_channel_param(priv, cparam);
+
        for (i = 0; i < nch; i++) {
-               err = mlx5e_open_channel(priv, i, &cparam, &priv->channel[i]);
+               err = mlx5e_open_channel(priv, i, cparam, &priv->channel[i]);
                if (err)
                        goto err_close_channels;
        }
@@ -1311,6 +1289,7 @@ static int mlx5e_open_channels(struct mlx5e_priv *priv)
                        goto err_close_channels;
        }
 
+       kfree(cparam);
        return 0;
 
 err_close_channels:
@@ -1320,6 +1299,7 @@ err_close_channels:
 err_free_txq_to_sq_map:
        kfree(priv->txq_to_sq_map);
        kfree(priv->channel);
+       kfree(cparam);
 
        return err;
 }
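
struct mlx5e_channel_param is now heap-allocated, presumably because the structure has grown too large for comfort on the kernel stack; kzalloc() also zeroes it, which is why the memset() in mlx5e_build_channel_param() could go. The shape of that change in a standalone sketch (the 4 KiB size is an assumption for illustration):

    #include <stdio.h>
    #include <stdlib.h>

    struct channel_param {
            unsigned char blob[4096];       /* assumed size, large either way */
    };

    int main(void)
    {
            /* calloc() zeroes, standing in for kzalloc(); no memset needed. */
            struct channel_param *cparam = calloc(1, sizeof(*cparam));

            if (!cparam)
                    return 1;
            printf("using %zu heap bytes instead of stack\n", sizeof(*cparam));
            free(cparam);
            return 0;
    }
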
@@ -1359,48 +1339,36 @@ static void mlx5e_fill_indir_rqt_rqns(struct mlx5e_priv *priv, void *rqtc)
 
        for (i = 0; i < MLX5E_INDIR_RQT_SIZE; i++) {
                int ix = i;
+               u32 rqn;
 
                if (priv->params.rss_hfunc == ETH_RSS_HASH_XOR)
                        ix = mlx5e_bits_invert(i, MLX5E_LOG_INDIR_RQT_SIZE);
 
                ix = priv->params.indirection_rqt[ix];
-               MLX5_SET(rqtc, rqtc, rq_num[i],
-                        test_bit(MLX5E_STATE_OPENED, &priv->state) ?
-                        priv->channel[ix]->rq.rqn :
-                        priv->drop_rq.rqn);
+               rqn = test_bit(MLX5E_STATE_OPENED, &priv->state) ?
+                               priv->channel[ix]->rq.rqn :
+                               priv->drop_rq.rqn;
+               MLX5_SET(rqtc, rqtc, rq_num[i], rqn);
        }
 }
 
-static void mlx5e_fill_rqt_rqns(struct mlx5e_priv *priv, void *rqtc,
-                               enum mlx5e_rqt_ix rqt_ix)
+static void mlx5e_fill_direct_rqt_rqn(struct mlx5e_priv *priv, void *rqtc,
+                                     int ix)
 {
+       u32 rqn = test_bit(MLX5E_STATE_OPENED, &priv->state) ?
+                       priv->channel[ix]->rq.rqn :
+                       priv->drop_rq.rqn;
 
-       switch (rqt_ix) {
-       case MLX5E_INDIRECTION_RQT:
-               mlx5e_fill_indir_rqt_rqns(priv, rqtc);
-
-               break;
-
-       default: /* MLX5E_SINGLE_RQ_RQT */
-               MLX5_SET(rqtc, rqtc, rq_num[0],
-                        test_bit(MLX5E_STATE_OPENED, &priv->state) ?
-                        priv->channel[0]->rq.rqn :
-                        priv->drop_rq.rqn);
-
-               break;
-       }
+       MLX5_SET(rqtc, rqtc, rq_num[0], rqn);
 }
 
-static int mlx5e_create_rqt(struct mlx5e_priv *priv, enum mlx5e_rqt_ix rqt_ix)
+static int mlx5e_create_rqt(struct mlx5e_priv *priv, int sz, int ix, u32 *rqtn)
 {
        struct mlx5_core_dev *mdev = priv->mdev;
-       u32 *in;
        void *rqtc;
        int inlen;
-       int sz;
        int err;
-
-       sz = (rqt_ix == MLX5E_SINGLE_RQ_RQT) ? 1 : MLX5E_INDIR_RQT_SIZE;
+       u32 *in;
 
        inlen = MLX5_ST_SZ_BYTES(create_rqt_in) + sizeof(u32) * sz;
        in = mlx5_vzalloc(inlen);
@@ -1412,26 +1380,73 @@ static int mlx5e_create_rqt(struct mlx5e_priv *priv, enum mlx5e_rqt_ix rqt_ix)
        MLX5_SET(rqtc, rqtc, rqt_actual_size, sz);
        MLX5_SET(rqtc, rqtc, rqt_max_size, sz);
 
-       mlx5e_fill_rqt_rqns(priv, rqtc, rqt_ix);
+       if (sz > 1) /* RSS */
+               mlx5e_fill_indir_rqt_rqns(priv, rqtc);
+       else
+               mlx5e_fill_direct_rqt_rqn(priv, rqtc, ix);
 
-       err = mlx5_core_create_rqt(mdev, in, inlen, &priv->rqtn[rqt_ix]);
+       err = mlx5_core_create_rqt(mdev, in, inlen, rqtn);
 
        kvfree(in);
+       return err;
+}
+
+static void mlx5e_destroy_rqt(struct mlx5e_priv *priv, u32 rqtn)
+{
+       mlx5_core_destroy_rqt(priv->mdev, rqtn);
+}
+
+static int mlx5e_create_rqts(struct mlx5e_priv *priv)
+{
+       int nch = mlx5e_get_max_num_channels(priv->mdev);
+       u32 *rqtn;
+       int err;
+       int ix;
+
+       /* Indirect RQT */
+       rqtn = &priv->indir_rqtn;
+       err = mlx5e_create_rqt(priv, MLX5E_INDIR_RQT_SIZE, 0, rqtn);
+       if (err)
+               return err;
+
+       /* Direct RQTs */
+       for (ix = 0; ix < nch; ix++) {
+               rqtn = &priv->direct_tir[ix].rqtn;
+               err = mlx5e_create_rqt(priv, 1 /* size */, ix, rqtn);
+               if (err)
+                       goto err_destroy_rqts;
+       }
+
+       return 0;
+
+err_destroy_rqts:
+       for (ix--; ix >= 0; ix--)
+               mlx5e_destroy_rqt(priv, priv->direct_tir[ix].rqtn);
+
+       mlx5e_destroy_rqt(priv, priv->indir_rqtn);
 
        return err;
 }
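
The RQT layout after this split is one indirection RQT of MLX5E_INDIR_RQT_SIZE entries plus one single-entry RQT per channel, nch + 1 tables in total; the error path unwinds the direct RQTs created so far (ix - 1 down to 0) before destroying the indirect one.
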
 
-int mlx5e_redirect_rqt(struct mlx5e_priv *priv, enum mlx5e_rqt_ix rqt_ix)
+static void mlx5e_destroy_rqts(struct mlx5e_priv *priv)
+{
+       int nch = mlx5e_get_max_num_channels(priv->mdev);
+       int i;
+
+       for (i = 0; i < nch; i++)
+               mlx5e_destroy_rqt(priv, priv->direct_tir[i].rqtn);
+
+       mlx5e_destroy_rqt(priv, priv->indir_rqtn);
+}
+
+int mlx5e_redirect_rqt(struct mlx5e_priv *priv, u32 rqtn, int sz, int ix)
 {
        struct mlx5_core_dev *mdev = priv->mdev;
-       u32 *in;
        void *rqtc;
        int inlen;
-       int sz;
+       u32 *in;
        int err;
 
-       sz = (rqt_ix == MLX5E_SINGLE_RQ_RQT) ? 1 : MLX5E_INDIR_RQT_SIZE;
-
        inlen = MLX5_ST_SZ_BYTES(modify_rqt_in) + sizeof(u32) * sz;
        in = mlx5_vzalloc(inlen);
        if (!in)
@@ -1440,27 +1455,31 @@ int mlx5e_redirect_rqt(struct mlx5e_priv *priv, enum mlx5e_rqt_ix rqt_ix)
        rqtc = MLX5_ADDR_OF(modify_rqt_in, in, ctx);
 
        MLX5_SET(rqtc, rqtc, rqt_actual_size, sz);
-
-       mlx5e_fill_rqt_rqns(priv, rqtc, rqt_ix);
+       if (sz > 1) /* RSS */
+               mlx5e_fill_indir_rqt_rqns(priv, rqtc);
+       else
+               mlx5e_fill_direct_rqt_rqn(priv, rqtc, ix);
 
        MLX5_SET(modify_rqt_in, in, bitmask.rqn_list, 1);
 
-       err = mlx5_core_modify_rqt(mdev, priv->rqtn[rqt_ix], in, inlen);
+       err = mlx5_core_modify_rqt(mdev, rqtn, in, inlen);
 
        kvfree(in);
 
        return err;
 }
 
-static void mlx5e_destroy_rqt(struct mlx5e_priv *priv, enum mlx5e_rqt_ix rqt_ix)
-{
-       mlx5_core_destroy_rqt(priv->mdev, priv->rqtn[rqt_ix]);
-}
-
 static void mlx5e_redirect_rqts(struct mlx5e_priv *priv)
 {
-       mlx5e_redirect_rqt(priv, MLX5E_INDIRECTION_RQT);
-       mlx5e_redirect_rqt(priv, MLX5E_SINGLE_RQ_RQT);
+       u32 rqtn;
+       int ix;
+
+       rqtn = priv->indir_rqtn;
+       mlx5e_redirect_rqt(priv, rqtn, MLX5E_INDIR_RQT_SIZE, 0);
+       for (ix = 0; ix < priv->params.num_channels; ix++) {
+               rqtn = priv->direct_tir[ix].rqtn;
+               mlx5e_redirect_rqt(priv, rqtn, 1, ix);
+       }
 }
 
 static void mlx5e_build_tir_ctx_lro(void *tirc, struct mlx5e_priv *priv)
@@ -1505,6 +1524,7 @@ static int mlx5e_modify_tirs_lro(struct mlx5e_priv *priv)
        int inlen;
        int err;
        int tt;
+       int ix;
 
        inlen = MLX5_ST_SZ_BYTES(modify_tir_in);
        in = mlx5_vzalloc(inlen);
@@ -1516,23 +1536,32 @@ static int mlx5e_modify_tirs_lro(struct mlx5e_priv *priv)
 
        mlx5e_build_tir_ctx_lro(tirc, priv);
 
-       for (tt = 0; tt < MLX5E_NUM_TT; tt++) {
-               err = mlx5_core_modify_tir(mdev, priv->tirn[tt], in, inlen);
+       for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) {
+               err = mlx5_core_modify_tir(mdev, priv->indir_tirn[tt], in,
+                                          inlen);
+               if (err)
+                       goto free_in;
+       }
+
+       for (ix = 0; ix < mlx5e_get_max_num_channels(mdev); ix++) {
+               err = mlx5_core_modify_tir(mdev, priv->direct_tir[ix].tirn,
+                                          in, inlen);
                if (err)
-                       break;
+                       goto free_in;
        }
 
+free_in:
        kvfree(in);
 
        return err;
 }
 
-static int mlx5e_refresh_tir_self_loopback_enable(struct mlx5_core_dev *mdev,
-                                                 u32 tirn)
+static int mlx5e_refresh_tirs_self_loopback_enable(struct mlx5e_priv *priv)
 {
        void *in;
        int inlen;
        int err;
+       int i;
 
        inlen = MLX5_ST_SZ_BYTES(modify_tir_in);
        in = mlx5_vzalloc(inlen);
@@ -1541,46 +1570,70 @@ static int mlx5e_refresh_tir_self_loopback_enable(struct mlx5_core_dev *mdev,
 
        MLX5_SET(modify_tir_in, in, bitmask.self_lb_en, 1);
 
-       err = mlx5_core_modify_tir(mdev, tirn, in, inlen);
+       for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++) {
+               err = mlx5_core_modify_tir(priv->mdev, priv->indir_tirn[i], in,
+                                          inlen);
+               if (err)
+                       return err;
+       }
+
+       for (i = 0; i < priv->params.num_channels; i++) {
+               err = mlx5_core_modify_tir(priv->mdev,
+                                          priv->direct_tir[i].tirn, in,
+                                          inlen);
+               if (err)
+                       return err;
+       }
 
        kvfree(in);
 
-       return err;
+       return 0;
 }
 
-static int mlx5e_refresh_tirs_self_loopback_enable(struct mlx5e_priv *priv)
+static int mlx5e_set_mtu(struct mlx5e_priv *priv, u16 mtu)
 {
+       struct mlx5_core_dev *mdev = priv->mdev;
+       u16 hw_mtu = MLX5E_SW2HW_MTU(mtu);
        int err;
-       int i;
 
-       for (i = 0; i < MLX5E_NUM_TT; i++) {
-               err = mlx5e_refresh_tir_self_loopback_enable(priv->mdev,
-                                                            priv->tirn[i]);
-               if (err)
-                       return err;
-       }
+       err = mlx5_set_port_mtu(mdev, hw_mtu, 1);
+       if (err)
+               return err;
 
+       /* Update vport context MTU */
+       mlx5_modify_nic_vport_mtu(mdev, hw_mtu);
        return 0;
 }
 
+static void mlx5e_query_mtu(struct mlx5e_priv *priv, u16 *mtu)
+{
+       struct mlx5_core_dev *mdev = priv->mdev;
+       u16 hw_mtu = 0;
+       int err;
+
+       err = mlx5_query_nic_vport_mtu(mdev, &hw_mtu);
+       if (err || !hw_mtu) /* fallback to port oper mtu */
+               mlx5_query_port_oper_mtu(mdev, &hw_mtu, 1);
+
+       *mtu = MLX5E_HW2SW_MTU(hw_mtu);
+}
+
 static int mlx5e_set_dev_port_mtu(struct net_device *netdev)
 {
        struct mlx5e_priv *priv = netdev_priv(netdev);
-       struct mlx5_core_dev *mdev = priv->mdev;
-       int hw_mtu;
+       u16 mtu;
        int err;
 
-       err = mlx5_set_port_mtu(mdev, MLX5E_SW2HW_MTU(netdev->mtu), 1);
+       err = mlx5e_set_mtu(priv, netdev->mtu);
        if (err)
                return err;
 
-       mlx5_query_port_oper_mtu(mdev, &hw_mtu, 1);
-
-       if (MLX5E_HW2SW_MTU(hw_mtu) != netdev->mtu)
-               netdev_warn(netdev, "%s: Port MTU %d is different than netdev mtu %d\n",
-                           __func__, MLX5E_HW2SW_MTU(hw_mtu), netdev->mtu);
+       mlx5e_query_mtu(priv, &mtu);
+       if (mtu != netdev->mtu)
+               netdev_warn(netdev, "%s: VPort MTU %d is different than netdev mtu %d\n",
+                           __func__, mtu, netdev->mtu);
 
-       netdev->mtu = MLX5E_HW2SW_MTU(hw_mtu);
+       netdev->mtu = mtu;
        return 0;
 }
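
MLX5E_SW2HW_MTU and MLX5E_HW2SW_MTU translate between the netdev MTU (the L3 payload budget) and the port MTU, which also covers L2 overhead. Assuming the conventional Ethernet overhead of a 14-byte header, 4-byte VLAN tag and 4-byte FCS, a netdev MTU of 1500 maps to a hardware MTU of 1500 + 14 + 4 + 4 = 1522.
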
 
@@ -1637,8 +1690,11 @@ int mlx5e_open_locked(struct net_device *netdev)
        mlx5e_redirect_rqts(priv);
        mlx5e_update_carrier(priv);
        mlx5e_timestamp_init(priv);
+#ifdef CONFIG_RFS_ACCEL
+       priv->netdev->rx_cpu_rmap = priv->mdev->rmap;
+#endif
 
-       schedule_delayed_work(&priv->update_stats_work, 0);
+       queue_delayed_work(priv->wq, &priv->update_stats_work, 0);
 
        return 0;
 
@@ -1844,7 +1900,8 @@ static void mlx5e_destroy_tises(struct mlx5e_priv *priv)
                mlx5e_destroy_tis(priv, tc);
 }
 
-static void mlx5e_build_tir_ctx(struct mlx5e_priv *priv, u32 *tirc, int tt)
+static void mlx5e_build_indir_tir_ctx(struct mlx5e_priv *priv, u32 *tirc,
+                                     enum mlx5e_traffic_types tt)
 {
        void *hfso = MLX5_ADDR_OF(tirc, tirc, rx_hash_field_selector_outer);
 
@@ -1865,19 +1922,8 @@ static void mlx5e_build_tir_ctx(struct mlx5e_priv *priv, u32 *tirc, int tt)
        mlx5e_build_tir_ctx_lro(tirc, priv);
 
        MLX5_SET(tirc, tirc, disp_type, MLX5_TIRC_DISP_TYPE_INDIRECT);
-
-       switch (tt) {
-       case MLX5E_TT_ANY:
-               MLX5_SET(tirc, tirc, indirect_table,
-                        priv->rqtn[MLX5E_SINGLE_RQ_RQT]);
-               MLX5_SET(tirc, tirc, rx_hash_fn, MLX5_RX_HASH_FN_INVERTED_XOR8);
-               break;
-       default:
-               MLX5_SET(tirc, tirc, indirect_table,
-                        priv->rqtn[MLX5E_INDIRECTION_RQT]);
-               mlx5e_build_tir_ctx_hash(tirc, priv);
-               break;
-       }
+       MLX5_SET(tirc, tirc, indirect_table, priv->indir_rqtn);
+       mlx5e_build_tir_ctx_hash(tirc, priv);
 
        switch (tt) {
        case MLX5E_TT_IPV4_TCP:
@@ -1957,64 +2003,107 @@ static void mlx5e_build_tir_ctx(struct mlx5e_priv *priv, u32 *tirc, int tt)
                MLX5_SET(rx_hash_field_select, hfso, selected_fields,
                         MLX5_HASH_IP);
                break;
+       default:
+               WARN_ONCE(true,
+                         "mlx5e_build_indir_tir_ctx: bad traffic type!\n");
        }
 }
 
-static int mlx5e_create_tir(struct mlx5e_priv *priv, int tt)
+static void mlx5e_build_direct_tir_ctx(struct mlx5e_priv *priv, u32 *tirc,
+                                      u32 rqtn)
 {
-       struct mlx5_core_dev *mdev = priv->mdev;
-       u32 *in;
+       MLX5_SET(tirc, tirc, transport_domain, priv->tdn);
+
+       mlx5e_build_tir_ctx_lro(tirc, priv);
+
+       MLX5_SET(tirc, tirc, disp_type, MLX5_TIRC_DISP_TYPE_INDIRECT);
+       MLX5_SET(tirc, tirc, indirect_table, rqtn);
+       MLX5_SET(tirc, tirc, rx_hash_fn, MLX5_RX_HASH_FN_INVERTED_XOR8);
+}
+
+static int mlx5e_create_tirs(struct mlx5e_priv *priv)
+{
+       int nch = mlx5e_get_max_num_channels(priv->mdev);
        void *tirc;
        int inlen;
+       u32 *tirn;
        int err;
+       u32 *in;
+       int ix;
+       int tt;
 
        inlen = MLX5_ST_SZ_BYTES(create_tir_in);
        in = mlx5_vzalloc(inlen);
        if (!in)
                return -ENOMEM;
 
-       tirc = MLX5_ADDR_OF(create_tir_in, in, ctx);
+       /* indirect tirs */
+       for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) {
+               memset(in, 0, inlen);
+               tirn = &priv->indir_tirn[tt];
+               tirc = MLX5_ADDR_OF(create_tir_in, in, ctx);
+               mlx5e_build_indir_tir_ctx(priv, tirc, tt);
+               err = mlx5_core_create_tir(priv->mdev, in, inlen, tirn);
+               if (err)
+                       goto err_destroy_tirs;
+       }
+
+       /* direct tirs */
+       for (ix = 0; ix < nch; ix++) {
+               memset(in, 0, inlen);
+               tirn = &priv->direct_tir[ix].tirn;
+               tirc = MLX5_ADDR_OF(create_tir_in, in, ctx);
+               mlx5e_build_direct_tir_ctx(priv, tirc,
+                                          priv->direct_tir[ix].rqtn);
+               err = mlx5_core_create_tir(priv->mdev, in, inlen, tirn);
+               if (err)
+                       goto err_destroy_ch_tirs;
+       }
 
-       mlx5e_build_tir_ctx(priv, tirc, tt);
+       kvfree(in);
 
-       err = mlx5_core_create_tir(mdev, in, inlen, &priv->tirn[tt]);
+       return 0;
+
+err_destroy_ch_tirs:
+       for (ix--; ix >= 0; ix--)
+               mlx5_core_destroy_tir(priv->mdev, priv->direct_tir[ix].tirn);
+
+err_destroy_tirs:
+       for (tt--; tt >= 0; tt--)
+               mlx5_core_destroy_tir(priv->mdev, priv->indir_tirn[tt]);
 
        kvfree(in);
 
        return err;
 }
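Both creation loops above use the reverse-unwind idiom: on a failure at index ix (or tt), the error label destroys entries ix-1..0, and err_destroy_ch_tirs deliberately falls through into err_destroy_tirs so a direct-TIR failure also tears down every indirect TIR created before it. The idiom in isolation, as standalone C (create_one/destroy_one are illustrative names):

    #include <stdlib.h>

    static int create_one(int ix, void **out)
    {
        (void)ix;                      /* index unused in this sketch */
        *out = malloc(16);
        return *out ? 0 : -1;
    }

    static void destroy_one(void *obj) { free(obj); }

    static int create_all(void *objs[], int n)
    {
        int ix, err;

        for (ix = 0; ix < n; ix++) {
            err = create_one(ix, &objs[ix]);
            if (err)
                goto err_unwind;
        }
        return 0;

    err_unwind:
        for (ix--; ix >= 0; ix--)      /* destroy only what was built */
            destroy_one(objs[ix]);
        return err;
    }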
 
-static void mlx5e_destroy_tir(struct mlx5e_priv *priv, int tt)
+static void mlx5e_destroy_tirs(struct mlx5e_priv *priv)
 {
-       mlx5_core_destroy_tir(priv->mdev, priv->tirn[tt]);
+       int nch = mlx5e_get_max_num_channels(priv->mdev);
+       int i;
+
+       for (i = 0; i < nch; i++)
+               mlx5_core_destroy_tir(priv->mdev, priv->direct_tir[i].tirn);
+
+       for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++)
+               mlx5_core_destroy_tir(priv->mdev, priv->indir_tirn[i]);
 }
 
-static int mlx5e_create_tirs(struct mlx5e_priv *priv)
+int mlx5e_modify_rqs_vsd(struct mlx5e_priv *priv, bool vsd)
 {
-       int err;
+       int err = 0;
        int i;
 
-       for (i = 0; i < MLX5E_NUM_TT; i++) {
-               err = mlx5e_create_tir(priv, i);
+       if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
+               return 0;
+
+       for (i = 0; i < priv->params.num_channels; i++) {
+               err = mlx5e_modify_rq_vsd(&priv->channel[i]->rq, vsd);
                if (err)
-                       goto err_destroy_tirs;
+                       return err;
        }
 
        return 0;
-
-err_destroy_tirs:
-       for (i--; i >= 0; i--)
-               mlx5e_destroy_tir(priv, i);
-
-       return err;
-}
-
-static void mlx5e_destroy_tirs(struct mlx5e_priv *priv)
-{
-       int i;
-
-       for (i = 0; i < MLX5E_NUM_TT; i++)
-               mlx5e_destroy_tir(priv, i);
 }
 
 static int mlx5e_setup_tc(struct net_device *netdev, u8 tc)
@@ -2073,19 +2162,37 @@ static struct rtnl_link_stats64 *
 mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
 {
        struct mlx5e_priv *priv = netdev_priv(dev);
+       struct mlx5e_sw_stats *sstats = &priv->stats.sw;
        struct mlx5e_vport_stats *vstats = &priv->stats.vport;
-
-       stats->rx_packets = vstats->rx_packets;
-       stats->rx_bytes   = vstats->rx_bytes;
-       stats->tx_packets = vstats->tx_packets;
-       stats->tx_bytes   = vstats->tx_bytes;
-       stats->multicast  = vstats->rx_multicast_packets +
-                           vstats->tx_multicast_packets;
-       stats->tx_errors  = vstats->tx_error_packets;
-       stats->rx_errors  = vstats->rx_error_packets;
-       stats->tx_dropped = vstats->tx_queue_dropped;
-       stats->rx_crc_errors = 0;
-       stats->rx_length_errors = 0;
+       struct mlx5e_pport_stats *pstats = &priv->stats.pport;
+
+       stats->rx_packets = sstats->rx_packets;
+       stats->rx_bytes   = sstats->rx_bytes;
+       stats->tx_packets = sstats->tx_packets;
+       stats->tx_bytes   = sstats->tx_bytes;
+
+       stats->rx_dropped = priv->stats.qcnt.rx_out_of_buffer;
+       stats->tx_dropped = sstats->tx_queue_dropped;
+
+       stats->rx_length_errors =
+               PPORT_802_3_GET(pstats, a_in_range_length_errors) +
+               PPORT_802_3_GET(pstats, a_out_of_range_length_field) +
+               PPORT_802_3_GET(pstats, a_frame_too_long_errors);
+       stats->rx_crc_errors =
+               PPORT_802_3_GET(pstats, a_frame_check_sequence_errors);
+       stats->rx_frame_errors = PPORT_802_3_GET(pstats, a_alignment_errors);
+       stats->tx_aborted_errors = PPORT_2863_GET(pstats, if_out_discards);
+       stats->tx_carrier_errors =
+               PPORT_802_3_GET(pstats, a_symbol_error_during_carrier);
+       stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors +
+                          stats->rx_frame_errors;
+       stats->tx_errors = stats->tx_aborted_errors + stats->tx_carrier_errors;
+
+       /* vport multicast also counts packets that are dropped due to steering
+        * or rx out of buffer
+        */
+       stats->multicast =
+               VPORT_COUNTER_GET(vstats, received_eth_multicast.packets);
 
        return stats;
 }
@@ -2094,7 +2201,7 @@ static void mlx5e_set_rx_mode(struct net_device *dev)
 {
        struct mlx5e_priv *priv = netdev_priv(dev);
 
-       schedule_work(&priv->set_rx_mode_work);
+       queue_work(priv->wq, &priv->set_rx_mode_work);
 }
 
 static int mlx5e_set_mac(struct net_device *netdev, void *addr)
@@ -2109,73 +2216,180 @@ static int mlx5e_set_mac(struct net_device *netdev, void *addr)
        ether_addr_copy(netdev->dev_addr, saddr->sa_data);
        netif_addr_unlock_bh(netdev);
 
-       schedule_work(&priv->set_rx_mode_work);
+       queue_work(priv->wq, &priv->set_rx_mode_work);
 
        return 0;
 }
 
-static int mlx5e_set_features(struct net_device *netdev,
-                             netdev_features_t features)
+#define MLX5E_SET_FEATURE(netdev, feature, enable)     \
+       do {                                            \
+               if (enable)                             \
+                       netdev->features |= feature;    \
+               else                                    \
+                       netdev->features &= ~feature;   \
+       } while (0)
+
+typedef int (*mlx5e_feature_handler)(struct net_device *netdev, bool enable);
+
+static int set_feature_lro(struct net_device *netdev, bool enable)
 {
        struct mlx5e_priv *priv = netdev_priv(netdev);
-       int err = 0;
-       netdev_features_t changes = features ^ netdev->features;
+       bool was_opened = test_bit(MLX5E_STATE_OPENED, &priv->state);
+       int err;
 
        mutex_lock(&priv->state_lock);
 
-       if (changes & NETIF_F_LRO) {
-               bool was_opened = test_bit(MLX5E_STATE_OPENED, &priv->state);
-
-               if (was_opened && (priv->params.rq_wq_type ==
-                                  MLX5_WQ_TYPE_LINKED_LIST))
-                       mlx5e_close_locked(priv->netdev);
-
-               priv->params.lro_en = !!(features & NETIF_F_LRO);
-               err = mlx5e_modify_tirs_lro(priv);
-               if (err)
-                       mlx5_core_warn(priv->mdev, "lro modify failed, %d\n",
-                                      err);
+       if (was_opened && (priv->params.rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST))
+               mlx5e_close_locked(priv->netdev);
 
-               if (was_opened && (priv->params.rq_wq_type ==
-                                  MLX5_WQ_TYPE_LINKED_LIST))
-                       err = mlx5e_open_locked(priv->netdev);
+       priv->params.lro_en = enable;
+       err = mlx5e_modify_tirs_lro(priv);
+       if (err) {
+               netdev_err(netdev, "lro modify failed, %d\n", err);
+               priv->params.lro_en = !enable;
        }
 
+       if (was_opened && (priv->params.rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST))
+               mlx5e_open_locked(priv->netdev);
+
        mutex_unlock(&priv->state_lock);
 
-       if (changes & NETIF_F_HW_VLAN_CTAG_FILTER) {
-               if (features & NETIF_F_HW_VLAN_CTAG_FILTER)
-                       mlx5e_enable_vlan_filter(priv);
-               else
-                       mlx5e_disable_vlan_filter(priv);
-       }
+       return err;
+}
+
+static int set_feature_vlan_filter(struct net_device *netdev, bool enable)
+{
+       struct mlx5e_priv *priv = netdev_priv(netdev);
+
+       if (enable)
+               mlx5e_enable_vlan_filter(priv);
+       else
+               mlx5e_disable_vlan_filter(priv);
 
-       if ((changes & NETIF_F_HW_TC) && !(features & NETIF_F_HW_TC) &&
-           mlx5e_tc_num_filters(priv)) {
+       return 0;
+}
+
+static int set_feature_tc_num_filters(struct net_device *netdev, bool enable)
+{
+       struct mlx5e_priv *priv = netdev_priv(netdev);
+
+       if (!enable && mlx5e_tc_num_filters(priv)) {
                netdev_err(netdev,
                           "Active offloaded tc filters, can't turn hw_tc_offload off\n");
                return -EINVAL;
        }
 
+       return 0;
+}
+
+static int set_feature_rx_all(struct net_device *netdev, bool enable)
+{
+       struct mlx5e_priv *priv = netdev_priv(netdev);
+       struct mlx5_core_dev *mdev = priv->mdev;
+
+       return mlx5_set_port_fcs(mdev, !enable);
+}
+
+static int set_feature_rx_vlan(struct net_device *netdev, bool enable)
+{
+       struct mlx5e_priv *priv = netdev_priv(netdev);
+       int err;
+
+       mutex_lock(&priv->state_lock);
+
+       priv->params.vlan_strip_disable = !enable;
+       err = mlx5e_modify_rqs_vsd(priv, !enable);
+       if (err)
+               priv->params.vlan_strip_disable = enable;
+
+       mutex_unlock(&priv->state_lock);
+
+       return err;
+}
+
+#ifdef CONFIG_RFS_ACCEL
+static int set_feature_arfs(struct net_device *netdev, bool enable)
+{
+       struct mlx5e_priv *priv = netdev_priv(netdev);
+       int err;
+
+       if (enable)
+               err = mlx5e_arfs_enable(priv);
+       else
+               err = mlx5e_arfs_disable(priv);
+
        return err;
 }
+#endif
+
+static int mlx5e_handle_feature(struct net_device *netdev,
+                               netdev_features_t wanted_features,
+                               netdev_features_t feature,
+                               mlx5e_feature_handler feature_handler)
+{
+       netdev_features_t changes = wanted_features ^ netdev->features;
+       bool enable = !!(wanted_features & feature);
+       int err;
+
+       if (!(changes & feature))
+               return 0;
+
+       err = feature_handler(netdev, enable);
+       if (err) {
+               netdev_err(netdev, "%s feature 0x%llx failed err %d\n",
+                          enable ? "Enable" : "Disable", feature, err);
+               return err;
+       }
+
+       MLX5E_SET_FEATURE(netdev, feature, enable);
+       return 0;
+}
+
+static int mlx5e_set_features(struct net_device *netdev,
+                             netdev_features_t features)
+{
+       int err;
+
+       err  = mlx5e_handle_feature(netdev, features, NETIF_F_LRO,
+                                   set_feature_lro);
+       err |= mlx5e_handle_feature(netdev, features,
+                                   NETIF_F_HW_VLAN_CTAG_FILTER,
+                                   set_feature_vlan_filter);
+       err |= mlx5e_handle_feature(netdev, features, NETIF_F_HW_TC,
+                                   set_feature_tc_num_filters);
+       err |= mlx5e_handle_feature(netdev, features, NETIF_F_RXALL,
+                                   set_feature_rx_all);
+       err |= mlx5e_handle_feature(netdev, features, NETIF_F_HW_VLAN_CTAG_RX,
+                                   set_feature_rx_vlan);
+#ifdef CONFIG_RFS_ACCEL
+       err |= mlx5e_handle_feature(netdev, features, NETIF_F_NTUPLE,
+                                   set_feature_arfs);
+#endif
+
+       return err ? -EINVAL : 0;
+}
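mlx5e_set_features() is now a dispatch over per-feature handlers: each handler sees a single bool, mlx5e_handle_feature() acts only on bits that actually changed, and the feature bit is committed only when the handler succeeds. Because the err |= chain ORs errno values together, any failure is collapsed to -EINVAL at the end. The dispatch core, sketched as standalone C with types simplified from netdev_features_t:

    typedef unsigned long long features_t;
    typedef int (*feature_handler)(int enable);

    static int handle_feature(features_t *active, features_t wanted,
                              features_t bit, feature_handler fn)
    {
        features_t changed = wanted ^ *active;
        int enable = !!(wanted & bit);
        int err;

        if (!(changed & bit))          /* nothing to do for this bit */
            return 0;

        err = fn(enable);
        if (err)                       /* leave *active untouched on failure */
            return err;

        if (enable)
            *active |= bit;
        else
            *active &= ~bit;
        return 0;
    }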
+
+#define MLX5_HW_MIN_MTU 64
+#define MLX5E_MIN_MTU (MLX5_HW_MIN_MTU + ETH_FCS_LEN)
 
 static int mlx5e_change_mtu(struct net_device *netdev, int new_mtu)
 {
        struct mlx5e_priv *priv = netdev_priv(netdev);
        struct mlx5_core_dev *mdev = priv->mdev;
        bool was_opened;
-       int max_mtu;
+       u16 max_mtu;
+       u16 min_mtu;
        int err = 0;
 
        mlx5_query_port_max_mtu(mdev, &max_mtu, 1);
 
        max_mtu = MLX5E_HW2SW_MTU(max_mtu);
+       min_mtu = MLX5E_HW2SW_MTU(MLX5E_MIN_MTU);
 
-       if (new_mtu > max_mtu) {
+       if (new_mtu > max_mtu || new_mtu < min_mtu) {
                netdev_err(netdev,
-                          "%s: Bad MTU (%d) > (%d) Max\n",
-                          __func__, new_mtu, max_mtu);
+                          "%s: Bad MTU (%d), valid range is: [%d..%d]\n",
+                          __func__, new_mtu, min_mtu, max_mtu);
                return -EINVAL;
        }
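Assuming the en.h conversion macros of this era (they are not shown in this hunk), the range check works out to a 46-byte floor: MLX5E_HW2SW_MTU() strips ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN = 22 bytes of wire overhead, so min_mtu = MLX5E_HW2SW_MTU(64 + 4) = 46, the classic minimum Ethernet payload. A self-contained check of that arithmetic:

    #include <stdio.h>

    /* Assumed definitions, mirroring en.h at the time of this patch. */
    #define ETH_HLEN    14
    #define VLAN_HLEN   4
    #define ETH_FCS_LEN 4
    #define MLX5E_HW2SW_MTU(hwmtu) ((hwmtu) - ETH_HLEN - VLAN_HLEN - ETH_FCS_LEN)

    #define MLX5_HW_MIN_MTU 64
    #define MLX5E_MIN_MTU   (MLX5_HW_MIN_MTU + ETH_FCS_LEN)

    int main(void)
    {
        /* 68 wire bytes minus 22 bytes of overhead = 46 */
        printf("min netdev mtu = %d\n", MLX5E_HW2SW_MTU(MLX5E_MIN_MTU));
        return 0;
    }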
 
@@ -2224,6 +2438,21 @@ static int mlx5e_set_vf_vlan(struct net_device *dev, int vf, u16 vlan, u8 qos)
                                           vlan, qos);
 }
 
+static int mlx5e_set_vf_spoofchk(struct net_device *dev, int vf, bool setting)
+{
+       struct mlx5e_priv *priv = netdev_priv(dev);
+       struct mlx5_core_dev *mdev = priv->mdev;
+
+       return mlx5_eswitch_set_vport_spoofchk(mdev->priv.eswitch, vf + 1, setting);
+}
+
+static int mlx5e_set_vf_trust(struct net_device *dev, int vf, bool setting)
+{
+       struct mlx5e_priv *priv = netdev_priv(dev);
+       struct mlx5_core_dev *mdev = priv->mdev;
+
+       return mlx5_eswitch_set_vport_trust(mdev->priv.eswitch, vf + 1, setting);
+}
 static int mlx5_vport_link2ifla(u8 esw_link)
 {
        switch (esw_link) {
@@ -2288,7 +2517,7 @@ static void mlx5e_add_vxlan_port(struct net_device *netdev,
        if (!mlx5e_vxlan_allowed(priv->mdev))
                return;
 
-       mlx5e_vxlan_add_port(priv, be16_to_cpu(port));
+       mlx5e_vxlan_queue_work(priv, sa_family, be16_to_cpu(port), 1);
 }
 
 static void mlx5e_del_vxlan_port(struct net_device *netdev,
@@ -2299,7 +2528,7 @@ static void mlx5e_del_vxlan_port(struct net_device *netdev,
        if (!mlx5e_vxlan_allowed(priv->mdev))
                return;
 
-       mlx5e_vxlan_del_port(priv, be16_to_cpu(port));
+       mlx5e_vxlan_queue_work(priv, sa_family, be16_to_cpu(port), 0);
 }
 
 static netdev_features_t mlx5e_vxlan_features_check(struct mlx5e_priv *priv,
@@ -2366,6 +2595,9 @@ static const struct net_device_ops mlx5e_netdev_ops_basic = {
        .ndo_set_features        = mlx5e_set_features,
        .ndo_change_mtu          = mlx5e_change_mtu,
        .ndo_do_ioctl            = mlx5e_ioctl,
+#ifdef CONFIG_RFS_ACCEL
+       .ndo_rx_flow_steer       = mlx5e_rx_flow_steer,
+#endif
 };
 
 static const struct net_device_ops mlx5e_netdev_ops_sriov = {
@@ -2385,8 +2617,13 @@ static const struct net_device_ops mlx5e_netdev_ops_sriov = {
        .ndo_add_vxlan_port      = mlx5e_add_vxlan_port,
        .ndo_del_vxlan_port      = mlx5e_del_vxlan_port,
        .ndo_features_check      = mlx5e_features_check,
+#ifdef CONFIG_RFS_ACCEL
+       .ndo_rx_flow_steer       = mlx5e_rx_flow_steer,
+#endif
        .ndo_set_vf_mac          = mlx5e_set_vf_mac,
        .ndo_set_vf_vlan         = mlx5e_set_vf_vlan,
+       .ndo_set_vf_spoofchk     = mlx5e_set_vf_spoofchk,
+       .ndo_set_vf_trust        = mlx5e_set_vf_trust,
        .ndo_get_vf_config       = mlx5e_get_vf_config,
        .ndo_set_vf_link_state   = mlx5e_set_vf_link_state,
        .ndo_get_vf_stats        = mlx5e_get_vf_stats,
@@ -2546,6 +2783,8 @@ static void mlx5e_build_netdev(struct net_device *netdev)
 {
        struct mlx5e_priv *priv = netdev_priv(netdev);
        struct mlx5_core_dev *mdev = priv->mdev;
+       bool fcs_supported;
+       bool fcs_enabled;
 
        SET_NETDEV_DEV(netdev, &mdev->pdev->dev);
 
@@ -2580,25 +2819,41 @@ static void mlx5e_build_netdev(struct net_device *netdev)
        netdev->hw_features      |= NETIF_F_HW_VLAN_CTAG_FILTER;
 
        if (mlx5e_vxlan_allowed(mdev)) {
-               netdev->hw_features     |= NETIF_F_GSO_UDP_TUNNEL;
+               netdev->hw_features     |= NETIF_F_GSO_UDP_TUNNEL |
+                                          NETIF_F_GSO_UDP_TUNNEL_CSUM |
+                                          NETIF_F_GSO_PARTIAL;
                netdev->hw_enc_features |= NETIF_F_IP_CSUM;
-               netdev->hw_enc_features |= NETIF_F_RXCSUM;
+               netdev->hw_enc_features |= NETIF_F_IPV6_CSUM;
                netdev->hw_enc_features |= NETIF_F_TSO;
                netdev->hw_enc_features |= NETIF_F_TSO6;
-               netdev->hw_enc_features |= NETIF_F_RXHASH;
                netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL;
+               netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL_CSUM |
+                                          NETIF_F_GSO_PARTIAL;
+               netdev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM;
        }
 
+       mlx5_query_port_fcs(mdev, &fcs_supported, &fcs_enabled);
+
+       if (fcs_supported)
+               netdev->hw_features |= NETIF_F_RXALL;
+
        netdev->features          = netdev->hw_features;
        if (!priv->params.lro_en)
                netdev->features  &= ~NETIF_F_LRO;
 
+       if (fcs_enabled)
+               netdev->features  &= ~NETIF_F_RXALL;
+
 #define FT_CAP(f) MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_receive.f)
        if (FT_CAP(flow_modify_en) &&
            FT_CAP(modify_root) &&
            FT_CAP(identified_miss_table_mode) &&
-           FT_CAP(flow_table_modify))
-               priv->netdev->hw_features      |= NETIF_F_HW_TC;
+           FT_CAP(flow_table_modify)) {
+               netdev->hw_features      |= NETIF_F_HW_TC;
+#ifdef CONFIG_RFS_ACCEL
+               netdev->hw_features      |= NETIF_F_NTUPLE;
+#endif
+       }
 
        netdev->features         |= NETIF_F_HIGHDMA;
 
@@ -2712,10 +2967,14 @@ static void *mlx5e_create_netdev(struct mlx5_core_dev *mdev)
 
        priv = netdev_priv(netdev);
 
+       priv->wq = create_singlethread_workqueue("mlx5e");
+       if (!priv->wq)
+               goto err_free_netdev;
+
        err = mlx5_alloc_map_uar(mdev, &priv->cq_uar, false);
        if (err) {
                mlx5_core_err(mdev, "alloc_map uar failed, %d\n", err);
-               goto err_free_netdev;
+               goto err_destroy_wq;
        }
 
        err = mlx5_core_alloc_pd(mdev, &priv->pdn);
@@ -2754,33 +3013,27 @@ static void *mlx5e_create_netdev(struct mlx5_core_dev *mdev)
                goto err_destroy_tises;
        }
 
-       err = mlx5e_create_rqt(priv, MLX5E_INDIRECTION_RQT);
+       err = mlx5e_create_rqts(priv);
        if (err) {
-               mlx5_core_warn(mdev, "create rqt(INDIR) failed, %d\n", err);
+               mlx5_core_warn(mdev, "create rqts failed, %d\n", err);
                goto err_close_drop_rq;
        }
 
-       err = mlx5e_create_rqt(priv, MLX5E_SINGLE_RQ_RQT);
-       if (err) {
-               mlx5_core_warn(mdev, "create rqt(SINGLE) failed, %d\n", err);
-               goto err_destroy_rqt_indir;
-       }
-
        err = mlx5e_create_tirs(priv);
        if (err) {
                mlx5_core_warn(mdev, "create tirs failed, %d\n", err);
-               goto err_destroy_rqt_single;
+               goto err_destroy_rqts;
        }
 
-       err = mlx5e_create_flow_tables(priv);
+       err = mlx5e_create_flow_steering(priv);
        if (err) {
-               mlx5_core_warn(mdev, "create flow tables failed, %d\n", err);
+               mlx5_core_warn(mdev, "create flow steering failed, %d\n", err);
                goto err_destroy_tirs;
        }
 
        mlx5e_create_q_counter(priv);
 
-       mlx5e_init_eth_addr(priv);
+       mlx5e_init_l2_addr(priv);
 
        mlx5e_vxlan_init(priv);
 
@@ -2798,11 +3051,14 @@ static void *mlx5e_create_netdev(struct mlx5_core_dev *mdev)
                goto err_tc_cleanup;
        }
 
-       if (mlx5e_vxlan_allowed(mdev))
+       if (mlx5e_vxlan_allowed(mdev)) {
+               rtnl_lock();
                vxlan_get_rx_port(netdev);
+               rtnl_unlock();
+       }
 
        mlx5e_enable_async_events(priv);
-       schedule_work(&priv->set_rx_mode_work);
+       queue_work(priv->wq, &priv->set_rx_mode_work);
 
        return priv;
 
@@ -2811,16 +3067,13 @@ err_tc_cleanup:
 
 err_dealloc_q_counters:
        mlx5e_destroy_q_counter(priv);
-       mlx5e_destroy_flow_tables(priv);
+       mlx5e_destroy_flow_steering(priv);
 
 err_destroy_tirs:
        mlx5e_destroy_tirs(priv);
 
-err_destroy_rqt_single:
-       mlx5e_destroy_rqt(priv, MLX5E_SINGLE_RQ_RQT);
-
-err_destroy_rqt_indir:
-       mlx5e_destroy_rqt(priv, MLX5E_INDIRECTION_RQT);
+err_destroy_rqts:
+       mlx5e_destroy_rqts(priv);
 
 err_close_drop_rq:
        mlx5e_close_drop_rq(priv);
@@ -2843,6 +3096,9 @@ err_dealloc_pd:
 err_unmap_free_uar:
        mlx5_unmap_free_uar(mdev, &priv->cq_uar);
 
+err_destroy_wq:
+       destroy_workqueue(priv->wq);
+
 err_free_netdev:
        free_netdev(netdev);
 
@@ -2856,17 +3112,25 @@ static void mlx5e_destroy_netdev(struct mlx5_core_dev *mdev, void *vpriv)
 
        set_bit(MLX5E_STATE_DESTROYING, &priv->state);
 
-       schedule_work(&priv->set_rx_mode_work);
+       queue_work(priv->wq, &priv->set_rx_mode_work);
        mlx5e_disable_async_events(priv);
-       flush_scheduled_work();
-       unregister_netdev(netdev);
+       flush_workqueue(priv->wq);
+       if (test_bit(MLX5_INTERFACE_STATE_SHUTDOWN, &mdev->intf_state)) {
+               netif_device_detach(netdev);
+               mutex_lock(&priv->state_lock);
+               if (test_bit(MLX5E_STATE_OPENED, &priv->state))
+                       mlx5e_close_locked(netdev);
+               mutex_unlock(&priv->state_lock);
+       } else {
+               unregister_netdev(netdev);
+       }
+
        mlx5e_tc_cleanup(priv);
        mlx5e_vxlan_cleanup(priv);
        mlx5e_destroy_q_counter(priv);
-       mlx5e_destroy_flow_tables(priv);
+       mlx5e_destroy_flow_steering(priv);
        mlx5e_destroy_tirs(priv);
-       mlx5e_destroy_rqt(priv, MLX5E_SINGLE_RQ_RQT);
-       mlx5e_destroy_rqt(priv, MLX5E_INDIRECTION_RQT);
+       mlx5e_destroy_rqts(priv);
        mlx5e_close_drop_rq(priv);
        mlx5e_destroy_tises(priv);
        mlx5_core_destroy_mkey(priv->mdev, &priv->umr_mkey);
@@ -2874,7 +3138,11 @@ static void mlx5e_destroy_netdev(struct mlx5_core_dev *mdev, void *vpriv)
        mlx5_core_dealloc_transport_domain(priv->mdev, priv->tdn);
        mlx5_core_dealloc_pd(priv->mdev, priv->pdn);
        mlx5_unmap_free_uar(priv->mdev, &priv->cq_uar);
-       free_netdev(netdev);
+       cancel_delayed_work_sync(&priv->update_stats_work);
+       destroy_workqueue(priv->wq);
+
+       if (!test_bit(MLX5_INTERFACE_STATE_SHUTDOWN, &mdev->intf_state))
+               free_netdev(netdev);
 }
 
 static void *mlx5e_get_netdev(void *vpriv)
index 918b7c7..23adfe2 100644
@@ -543,16 +543,26 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
 
        if (lro) {
                skb->ip_summed = CHECKSUM_UNNECESSARY;
-       } else if (likely(is_first_ethertype_ip(skb))) {
+               return;
+       }
+
+       if (is_first_ethertype_ip(skb)) {
                skb->ip_summed = CHECKSUM_COMPLETE;
                skb->csum = csum_unfold((__force __sum16)cqe->check_sum);
                rq->stats.csum_sw++;
-       } else {
-               goto csum_none;
+               return;
        }
 
-       return;
-
+       if (likely((cqe->hds_ip_ext & CQE_L3_OK) &&
+                  (cqe->hds_ip_ext & CQE_L4_OK))) {
+               skb->ip_summed = CHECKSUM_UNNECESSARY;
+               if (cqe_is_tunneled(cqe)) {
+                       skb->csum_level = 1;
+                       skb->encapsulation = 1;
+                       rq->stats.csum_inner++;
+               }
+               return;
+       }
 csum_none:
        skb->ip_summed = CHECKSUM_NONE;
        rq->stats.csum_none++;
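The early returns introduced above encode a precedence: LRO frames are always CHECKSUM_UNNECESSARY, plain first-ethertype IP gets the stronger CHECKSUM_COMPLETE with the hardware checksum, and only then do the CQE L3_OK/L4_OK bits grant CHECKSUM_UNNECESSARY, with csum_level = 1 telling the stack the inner (encapsulated) checksum was verified too. Schematically:

    /* Precedence encoded by the early returns above:
     *
     *     LRO                    -> CHECKSUM_UNNECESSARY
     *     first ethertype is IP  -> CHECKSUM_COMPLETE (+ cqe->check_sum)
     *     CQE L3_OK && L4_OK     -> CHECKSUM_UNNECESSARY
     *        (tunneled: csum_level = 1, inner checksum also verified)
     *     otherwise              -> CHECKSUM_NONE
     */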
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
new file mode 100644
index 0000000..115752b
--- /dev/null
@@ -0,0 +1,359 @@
+/*
+ * Copyright (c) 2015-2016, Mellanox Technologies. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+#ifndef __MLX5_EN_STATS_H__
+#define __MLX5_EN_STATS_H__
+
+#define MLX5E_READ_CTR64_CPU(ptr, dsc, i) \
+       (*(u64 *)((char *)ptr + dsc[i].offset))
+#define MLX5E_READ_CTR64_BE(ptr, dsc, i) \
+       be64_to_cpu(*(__be64 *)((char *)ptr + dsc[i].offset))
+#define MLX5E_READ_CTR32_CPU(ptr, dsc, i) \
+       (*(u32 *)((char *)ptr + dsc[i].offset))
+#define MLX5E_READ_CTR32_BE(ptr, dsc, i) \
+       be32_to_cpu(*(__be32 *)((char *)ptr + dsc[i].offset))
+
+#define MLX5E_DECLARE_STAT(type, fld) #fld, offsetof(type, fld)
+
+struct counter_desc {
+       char            name[ETH_GSTRING_LEN];
+       int             offset; /* Byte offset */
+};
+
+struct mlx5e_sw_stats {
+       u64 rx_packets;
+       u64 rx_bytes;
+       u64 tx_packets;
+       u64 tx_bytes;
+       u64 tso_packets;
+       u64 tso_bytes;
+       u64 tso_inner_packets;
+       u64 tso_inner_bytes;
+       u64 lro_packets;
+       u64 lro_bytes;
+       u64 rx_csum_good;
+       u64 rx_csum_none;
+       u64 rx_csum_sw;
+       u64 rx_csum_inner;
+       u64 tx_csum_offload;
+       u64 tx_csum_inner;
+       u64 tx_queue_stopped;
+       u64 tx_queue_wake;
+       u64 tx_queue_dropped;
+       u64 rx_wqe_err;
+       u64 rx_mpwqe_filler;
+       u64 rx_mpwqe_frag;
+       u64 rx_buff_alloc_err;
+
+       /* Special handling counters */
+       u64 link_down_events;
+};
+
+static const struct counter_desc sw_stats_desc[] = {
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_packets) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_bytes) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_packets) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_bytes) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tso_packets) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tso_bytes) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tso_inner_packets) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tso_inner_bytes) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, lro_packets) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, lro_bytes) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_good) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_none) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_sw) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_inner) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_csum_offload) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_csum_inner) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_queue_stopped) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_queue_wake) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_queue_dropped) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_wqe_err) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_mpwqe_filler) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_mpwqe_frag) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_buff_alloc_err) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, link_down_events) },
+};
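Each descriptor pairs an ethtool string with a byte offset into the stats struct, so the strings pass and the values pass iterate one shared table and cannot drift out of sync. The same pattern as a standalone miniature (types reduced from the driver's):

    #include <stddef.h>
    #include <stdio.h>

    struct counter_desc {
        const char *name;
        int         offset;  /* byte offset into the stats struct */
    };

    struct sw_stats { unsigned long long rx_packets, rx_bytes; };

    #define DECLARE_STAT(type, fld) { #fld, offsetof(type, fld) }

    static const struct counter_desc desc[] = {
        DECLARE_STAT(struct sw_stats, rx_packets),
        DECLARE_STAT(struct sw_stats, rx_bytes),
    };

    int main(void)
    {
        struct sw_stats s = { .rx_packets = 7, .rx_bytes = 4242 };
        size_t i;

        /* Same access as MLX5E_READ_CTR64_CPU: base pointer + offset. */
        for (i = 0; i < sizeof(desc) / sizeof(desc[0]); i++)
            printf("%s = %llu\n", desc[i].name,
                   *(unsigned long long *)((char *)&s + desc[i].offset));
        return 0;
    }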
+
+struct mlx5e_qcounter_stats {
+       u32 rx_out_of_buffer;
+};
+
+static const struct counter_desc q_stats_desc[] = {
+       { MLX5E_DECLARE_STAT(struct mlx5e_qcounter_stats, rx_out_of_buffer) },
+};
+
+#define VPORT_COUNTER_OFF(c) MLX5_BYTE_OFF(query_vport_counter_out, c)
+#define VPORT_COUNTER_GET(vstats, c) MLX5_GET64(query_vport_counter_out, \
+                                               vstats->query_vport_out, c)
+
+struct mlx5e_vport_stats {
+       __be64 query_vport_out[MLX5_ST_SZ_QW(query_vport_counter_out)];
+};
+
+static const struct counter_desc vport_stats_desc[] = {
+       { "rx_vport_error_packets",
+               VPORT_COUNTER_OFF(received_errors.packets) },
+       { "rx_vport_error_bytes", VPORT_COUNTER_OFF(received_errors.octets) },
+       { "tx_vport_error_packets",
+               VPORT_COUNTER_OFF(transmit_errors.packets) },
+       { "tx_vport_error_bytes", VPORT_COUNTER_OFF(transmit_errors.octets) },
+       { "rx_vport_unicast_packets",
+               VPORT_COUNTER_OFF(received_eth_unicast.packets) },
+       { "rx_vport_unicast_bytes",
+               VPORT_COUNTER_OFF(received_eth_unicast.octets) },
+       { "tx_vport_unicast_packets",
+               VPORT_COUNTER_OFF(transmitted_eth_unicast.packets) },
+       { "tx_vport_unicast_bytes",
+               VPORT_COUNTER_OFF(transmitted_eth_unicast.octets) },
+       { "rx_vport_multicast_packets",
+               VPORT_COUNTER_OFF(received_eth_multicast.packets) },
+       { "rx_vport_multicast_bytes",
+               VPORT_COUNTER_OFF(received_eth_multicast.octets) },
+       { "tx_vport_multicast_packets",
+               VPORT_COUNTER_OFF(transmitted_eth_multicast.packets) },
+       { "tx_vport_multicast_bytes",
+               VPORT_COUNTER_OFF(transmitted_eth_multicast.octets) },
+       { "rx_vport_broadcast_packets",
+               VPORT_COUNTER_OFF(received_eth_broadcast.packets) },
+       { "rx_vport_broadcast_bytes",
+               VPORT_COUNTER_OFF(received_eth_broadcast.octets) },
+       { "tx_vport_broadcast_packets",
+               VPORT_COUNTER_OFF(transmitted_eth_broadcast.packets) },
+       { "tx_vport_broadcast_bytes",
+               VPORT_COUNTER_OFF(transmitted_eth_broadcast.octets) },
+};
+
+#define PPORT_802_3_OFF(c) \
+       MLX5_BYTE_OFF(ppcnt_reg, \
+                     counter_set.eth_802_3_cntrs_grp_data_layout.c##_high)
+#define PPORT_802_3_GET(pstats, c) \
+       MLX5_GET64(ppcnt_reg, pstats->IEEE_802_3_counters, \
+                  counter_set.eth_802_3_cntrs_grp_data_layout.c##_high)
+#define PPORT_2863_OFF(c) \
+       MLX5_BYTE_OFF(ppcnt_reg, \
+                     counter_set.eth_2863_cntrs_grp_data_layout.c##_high)
+#define PPORT_2863_GET(pstats, c) \
+       MLX5_GET64(ppcnt_reg, pstats->RFC_2863_counters, \
+                  counter_set.eth_2863_cntrs_grp_data_layout.c##_high)
+#define PPORT_2819_OFF(c) \
+       MLX5_BYTE_OFF(ppcnt_reg, \
+                     counter_set.eth_2819_cntrs_grp_data_layout.c##_high)
+#define PPORT_2819_GET(pstats, c) \
+       MLX5_GET64(ppcnt_reg, pstats->RFC_2819_counters, \
+                  counter_set.eth_2819_cntrs_grp_data_layout.c##_high)
+#define PPORT_PER_PRIO_OFF(c) \
+       MLX5_BYTE_OFF(ppcnt_reg, \
+                     counter_set.eth_per_prio_grp_data_layout.c##_high)
+#define PPORT_PER_PRIO_GET(pstats, prio, c) \
+       MLX5_GET64(ppcnt_reg, pstats->per_prio_counters[prio], \
+                  counter_set.eth_per_prio_grp_data_layout.c##_high)
+#define NUM_PPORT_PRIO                         8
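The c##_high token pasting relies on a register layout convention that this hunk only implies: each 64-bit ppcnt counter is assumed to be stored as a <name>_high/<name>_low pair of adjacent big-endian 32-bit fields, so MLX5_GET64 starting at the _high word reads the whole counter:

    /* Assumed ppcnt field layout behind the c##_high pasting:
     *
     *     __be32 a_frames_transmitted_ok_high;
     *     __be32 a_frames_transmitted_ok_low;
     *
     * MLX5_GET64 at the _high offset spans both words as one __be64.
     */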
+
+struct mlx5e_pport_stats {
+       __be64 IEEE_802_3_counters[MLX5_ST_SZ_QW(ppcnt_reg)];
+       __be64 RFC_2863_counters[MLX5_ST_SZ_QW(ppcnt_reg)];
+       __be64 RFC_2819_counters[MLX5_ST_SZ_QW(ppcnt_reg)];
+       __be64 per_prio_counters[NUM_PPORT_PRIO][MLX5_ST_SZ_QW(ppcnt_reg)];
+       __be64 phy_counters[MLX5_ST_SZ_QW(ppcnt_reg)];
+};
+
+static const struct counter_desc pport_802_3_stats_desc[] = {
+       { "frames_tx", PPORT_802_3_OFF(a_frames_transmitted_ok) },
+       { "frames_rx", PPORT_802_3_OFF(a_frames_received_ok) },
+       { "check_seq_err", PPORT_802_3_OFF(a_frame_check_sequence_errors) },
+       { "alignment_err", PPORT_802_3_OFF(a_alignment_errors) },
+       { "octets_tx", PPORT_802_3_OFF(a_octets_transmitted_ok) },
+       { "octets_received", PPORT_802_3_OFF(a_octets_received_ok) },
+       { "multicast_xmitted", PPORT_802_3_OFF(a_multicast_frames_xmitted_ok) },
+       { "broadcast_xmitted", PPORT_802_3_OFF(a_broadcast_frames_xmitted_ok) },
+       { "multicast_rx", PPORT_802_3_OFF(a_multicast_frames_received_ok) },
+       { "broadcast_rx", PPORT_802_3_OFF(a_broadcast_frames_received_ok) },
+       { "in_range_len_errors", PPORT_802_3_OFF(a_in_range_length_errors) },
+       { "out_of_range_len", PPORT_802_3_OFF(a_out_of_range_length_field) },
+       { "too_long_errors", PPORT_802_3_OFF(a_frame_too_long_errors) },
+       { "symbol_err", PPORT_802_3_OFF(a_symbol_error_during_carrier) },
+       { "mac_control_tx", PPORT_802_3_OFF(a_mac_control_frames_transmitted) },
+       { "mac_control_rx", PPORT_802_3_OFF(a_mac_control_frames_received) },
+       { "unsupported_op_rx",
+               PPORT_802_3_OFF(a_unsupported_opcodes_received) },
+       { "pause_ctrl_rx", PPORT_802_3_OFF(a_pause_mac_ctrl_frames_received) },
+       { "pause_ctrl_tx",
+               PPORT_802_3_OFF(a_pause_mac_ctrl_frames_transmitted) },
+};
+
+static const struct counter_desc pport_2863_stats_desc[] = {
+       { "in_octets", PPORT_2863_OFF(if_in_octets) },
+       { "in_ucast_pkts", PPORT_2863_OFF(if_in_ucast_pkts) },
+       { "in_discards", PPORT_2863_OFF(if_in_discards) },
+       { "in_errors", PPORT_2863_OFF(if_in_errors) },
+       { "in_unknown_protos", PPORT_2863_OFF(if_in_unknown_protos) },
+       { "out_octets", PPORT_2863_OFF(if_out_octets) },
+       { "out_ucast_pkts", PPORT_2863_OFF(if_out_ucast_pkts) },
+       { "out_discards", PPORT_2863_OFF(if_out_discards) },
+       { "out_errors", PPORT_2863_OFF(if_out_errors) },
+       { "in_multicast_pkts", PPORT_2863_OFF(if_in_multicast_pkts) },
+       { "in_broadcast_pkts", PPORT_2863_OFF(if_in_broadcast_pkts) },
+       { "out_multicast_pkts", PPORT_2863_OFF(if_out_multicast_pkts) },
+       { "out_broadcast_pkts", PPORT_2863_OFF(if_out_broadcast_pkts) },
+};
+
+static const struct counter_desc pport_2819_stats_desc[] = {
+       { "drop_events", PPORT_2819_OFF(ether_stats_drop_events) },
+       { "octets", PPORT_2819_OFF(ether_stats_octets) },
+       { "pkts", PPORT_2819_OFF(ether_stats_pkts) },
+       { "broadcast_pkts", PPORT_2819_OFF(ether_stats_broadcast_pkts) },
+       { "multicast_pkts", PPORT_2819_OFF(ether_stats_multicast_pkts) },
+       { "crc_align_errors", PPORT_2819_OFF(ether_stats_crc_align_errors) },
+       { "undersize_pkts", PPORT_2819_OFF(ether_stats_undersize_pkts) },
+       { "oversize_pkts", PPORT_2819_OFF(ether_stats_oversize_pkts) },
+       { "fragments", PPORT_2819_OFF(ether_stats_fragments) },
+       { "jabbers", PPORT_2819_OFF(ether_stats_jabbers) },
+       { "collisions", PPORT_2819_OFF(ether_stats_collisions) },
+       { "p64octets", PPORT_2819_OFF(ether_stats_pkts64octets) },
+       { "p65to127octets", PPORT_2819_OFF(ether_stats_pkts65to127octets) },
+       { "p128to255octets", PPORT_2819_OFF(ether_stats_pkts128to255octets) },
+       { "p256to511octets", PPORT_2819_OFF(ether_stats_pkts256to511octets) },
+       { "p512to1023octets", PPORT_2819_OFF(ether_stats_pkts512to1023octets) },
+       { "p1024to1518octets",
+               PPORT_2819_OFF(ether_stats_pkts1024to1518octets) },
+       { "p1519to2047octets",
+               PPORT_2819_OFF(ether_stats_pkts1519to2047octets) },
+       { "p2048to4095octets",
+               PPORT_2819_OFF(ether_stats_pkts2048to4095octets) },
+       { "p4096to8191octets",
+               PPORT_2819_OFF(ether_stats_pkts4096to8191octets) },
+       { "p8192to10239octets",
+               PPORT_2819_OFF(ether_stats_pkts8192to10239octets) },
+};
+
+static const struct counter_desc pport_per_prio_traffic_stats_desc[] = {
+       { "rx_octets", PPORT_PER_PRIO_OFF(rx_octets) },
+       { "rx_frames", PPORT_PER_PRIO_OFF(rx_frames) },
+       { "tx_octets", PPORT_PER_PRIO_OFF(tx_octets) },
+       { "tx_frames", PPORT_PER_PRIO_OFF(tx_frames) },
+};
+
+static const struct counter_desc pport_per_prio_pfc_stats_desc[] = {
+       { "rx_pause", PPORT_PER_PRIO_OFF(rx_pause) },
+       { "rx_pause_duration", PPORT_PER_PRIO_OFF(rx_pause_duration) },
+       { "tx_pause", PPORT_PER_PRIO_OFF(tx_pause) },
+       { "tx_pause_duration", PPORT_PER_PRIO_OFF(tx_pause_duration) },
+       { "rx_pause_transition", PPORT_PER_PRIO_OFF(rx_pause_transition) },
+};
+
+struct mlx5e_rq_stats {
+       u64 packets;
+       u64 bytes;
+       u64 csum_sw;
+       u64 csum_inner;
+       u64 csum_none;
+       u64 lro_packets;
+       u64 lro_bytes;
+       u64 wqe_err;
+       u64 mpwqe_filler;
+       u64 mpwqe_frag;
+       u64 buff_alloc_err;
+};
+
+static const struct counter_desc rq_stats_desc[] = {
+       { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, packets) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, bytes) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, csum_sw) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, csum_inner) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, csum_none) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, lro_packets) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, lro_bytes) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, wqe_err) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, mpwqe_filler) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, mpwqe_frag) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, buff_alloc_err) },
+};
+
+struct mlx5e_sq_stats {
+       /* commonly accessed in data path */
+       u64 packets;
+       u64 bytes;
+       u64 tso_packets;
+       u64 tso_bytes;
+       u64 tso_inner_packets;
+       u64 tso_inner_bytes;
+       u64 csum_offload_inner;
+       u64 nop;
+       /* less likely accessed in data path */
+       u64 csum_offload_none;
+       u64 stopped;
+       u64 wake;
+       u64 dropped;
+};
+
+static const struct counter_desc sq_stats_desc[] = {
+       { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, packets) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, bytes) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, tso_packets) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, tso_bytes) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, tso_inner_packets) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, tso_inner_bytes) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, csum_offload_inner) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, nop) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, csum_offload_none) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, stopped) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, wake) },
+       { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, dropped) },
+};
+
+#define NUM_SW_COUNTERS                        ARRAY_SIZE(sw_stats_desc)
+#define NUM_Q_COUNTERS                 ARRAY_SIZE(q_stats_desc)
+#define NUM_VPORT_COUNTERS             ARRAY_SIZE(vport_stats_desc)
+#define NUM_PPORT_802_3_COUNTERS       ARRAY_SIZE(pport_802_3_stats_desc)
+#define NUM_PPORT_2863_COUNTERS                ARRAY_SIZE(pport_2863_stats_desc)
+#define NUM_PPORT_2819_COUNTERS                ARRAY_SIZE(pport_2819_stats_desc)
+#define NUM_PPORT_PER_PRIO_TRAFFIC_COUNTERS \
+       ARRAY_SIZE(pport_per_prio_traffic_stats_desc)
+#define NUM_PPORT_PER_PRIO_PFC_COUNTERS \
+       ARRAY_SIZE(pport_per_prio_pfc_stats_desc)
+#define NUM_PPORT_COUNTERS             (NUM_PPORT_802_3_COUNTERS + \
+                                        NUM_PPORT_2863_COUNTERS  + \
+                                        NUM_PPORT_2819_COUNTERS  + \
+                                        NUM_PPORT_PER_PRIO_TRAFFIC_COUNTERS * \
+                                        NUM_PPORT_PRIO)
+#define NUM_RQ_STATS                   ARRAY_SIZE(rq_stats_desc)
+#define NUM_SQ_STATS                   ARRAY_SIZE(sq_stats_desc)
+
+struct mlx5e_stats {
+       struct mlx5e_sw_stats sw;
+       struct mlx5e_qcounter_stats qcnt;
+       struct mlx5e_vport_stats vport;
+       struct mlx5e_pport_stats pport;
+};
+
+#endif /* __MLX5_EN_STATS_H__ */
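These NUM_* totals keep ethtool sizing in one place; the presumed consumer is en_ethtool.c (not part of this hunk), which would sum them for get_sset_count() and the strings/values fills along these lines:

    /* Presumed use in en_ethtool.c -- an assumption, not shown here:
     *
     *     count = NUM_SW_COUNTERS + NUM_Q_COUNTERS +
     *             NUM_VPORT_COUNTERS + NUM_PPORT_COUNTERS +
     *             num_channels * NUM_RQ_STATS +
     *             num_channels * num_tc * NUM_SQ_STATS;
     */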
index b3de09f..ef017c0 100644
@@ -46,8 +46,8 @@ struct mlx5e_tc_flow {
        struct mlx5_flow_rule   *rule;
 };
 
-#define MLX5E_TC_FLOW_TABLE_NUM_ENTRIES 1024
-#define MLX5E_TC_FLOW_TABLE_NUM_GROUPS 4
+#define MLX5E_TC_TABLE_NUM_ENTRIES 1024
+#define MLX5E_TC_TABLE_NUM_GROUPS 4
 
 static struct mlx5_flow_rule *mlx5e_tc_add_flow(struct mlx5e_priv *priv,
                                                u32 *match_c, u32 *match_v,
@@ -55,33 +55,35 @@ static struct mlx5_flow_rule *mlx5e_tc_add_flow(struct mlx5e_priv *priv,
 {
        struct mlx5_flow_destination dest = {
                .type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE,
-               {.ft = priv->fts.vlan.t},
+               {.ft = priv->fs.vlan.ft.t},
        };
        struct mlx5_flow_rule *rule;
        bool table_created = false;
 
-       if (IS_ERR_OR_NULL(priv->fts.tc.t)) {
-               priv->fts.tc.t =
-                       mlx5_create_auto_grouped_flow_table(priv->fts.ns, 0,
-                                                           MLX5E_TC_FLOW_TABLE_NUM_ENTRIES,
-                                                           MLX5E_TC_FLOW_TABLE_NUM_GROUPS);
-               if (IS_ERR(priv->fts.tc.t)) {
+       if (IS_ERR_OR_NULL(priv->fs.tc.t)) {
+               priv->fs.tc.t =
+                       mlx5_create_auto_grouped_flow_table(priv->fs.ns,
+                                                           MLX5E_TC_PRIO,
+                                                           MLX5E_TC_TABLE_NUM_ENTRIES,
+                                                           MLX5E_TC_TABLE_NUM_GROUPS,
+                                                           0);
+               if (IS_ERR(priv->fs.tc.t)) {
                        netdev_err(priv->netdev,
                                   "Failed to create tc offload table\n");
-                       return ERR_CAST(priv->fts.tc.t);
+                       return ERR_CAST(priv->fs.tc.t);
                }
 
                table_created = true;
        }
 
-       rule = mlx5_add_flow_rule(priv->fts.tc.t, MLX5_MATCH_OUTER_HEADERS,
+       rule = mlx5_add_flow_rule(priv->fs.tc.t, MLX5_MATCH_OUTER_HEADERS,
                                  match_c, match_v,
                                  action, flow_tag,
                                  action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST ? &dest : NULL);
 
        if (IS_ERR(rule) && table_created) {
-               mlx5_destroy_flow_table(priv->fts.tc.t);
-               priv->fts.tc.t = NULL;
+               mlx5_destroy_flow_table(priv->fs.tc.t);
+               priv->fs.tc.t = NULL;
        }
 
        return rule;
@@ -93,8 +95,8 @@ static void mlx5e_tc_del_flow(struct mlx5e_priv *priv,
        mlx5_del_flow_rule(rule);
 
        if (!mlx5e_tc_num_filters(priv)) {
-               mlx5_destroy_flow_table(priv->fts.tc.t);
-               priv->fts.tc.t = NULL;
+               mlx5_destroy_flow_table(priv->fs.tc.t);
+               priv->fs.tc.t = NULL;
        }
 }
 
@@ -310,7 +312,7 @@ static int parse_tc_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
 int mlx5e_configure_flower(struct mlx5e_priv *priv, __be16 protocol,
                           struct tc_cls_flower_offload *f)
 {
-       struct mlx5e_tc_flow_table *tc = &priv->fts.tc;
+       struct mlx5e_tc_table *tc = &priv->fs.tc;
        u32 *match_c;
        u32 *match_v;
        int err = 0;
@@ -376,7 +378,7 @@ int mlx5e_delete_flower(struct mlx5e_priv *priv,
                        struct tc_cls_flower_offload *f)
 {
        struct mlx5e_tc_flow *flow;
-       struct mlx5e_tc_flow_table *tc = &priv->fts.tc;
+       struct mlx5e_tc_table *tc = &priv->fs.tc;
 
        flow = rhashtable_lookup_fast(&tc->ht, &f->cookie,
                                      tc->ht_params);
@@ -401,7 +403,7 @@ static const struct rhashtable_params mlx5e_tc_flow_ht_params = {
 
 int mlx5e_tc_init(struct mlx5e_priv *priv)
 {
-       struct mlx5e_tc_flow_table *tc = &priv->fts.tc;
+       struct mlx5e_tc_table *tc = &priv->fs.tc;
 
        tc->ht_params = mlx5e_tc_flow_ht_params;
        return rhashtable_init(&tc->ht, &tc->ht_params);
@@ -418,12 +420,12 @@ static void _mlx5e_tc_del_flow(void *ptr, void *arg)
 
 void mlx5e_tc_cleanup(struct mlx5e_priv *priv)
 {
-       struct mlx5e_tc_flow_table *tc = &priv->fts.tc;
+       struct mlx5e_tc_table *tc = &priv->fs.tc;
 
        rhashtable_free_and_destroy(&tc->ht, _mlx5e_tc_del_flow, priv);
 
-       if (!IS_ERR_OR_NULL(priv->fts.tc.t)) {
-               mlx5_destroy_flow_table(priv->fts.tc.t);
-               priv->fts.tc.t = NULL;
+       if (!IS_ERR_OR_NULL(tc->t)) {
+               mlx5_destroy_flow_table(tc->t);
+               tc->t = NULL;
        }
 }
index d677428..a4f17b9 100644
@@ -45,7 +45,7 @@ int mlx5e_delete_flower(struct mlx5e_priv *priv,
 
 static inline int mlx5e_tc_num_filters(struct mlx5e_priv *priv)
 {
-       return atomic_read(&priv->fts.tc.ht.nelems);
+       return atomic_read(&priv->fs.tc.ht.nelems);
 }
 
 #endif /* __MLX5_EN_TC_H__ */
index bc3d9f8..b84a691 100644
@@ -77,16 +77,20 @@ struct vport_addr {
        u8                     action;
        u32                    vport;
        struct mlx5_flow_rule *flow_rule; /* SRIOV only */
+       /* A flag indicating that mac was added due to mc promiscuous vport */
+       bool mc_promisc;
 };
 
 enum {
        UC_ADDR_CHANGE = BIT(0),
        MC_ADDR_CHANGE = BIT(1),
+       PROMISC_CHANGE = BIT(3),
 };
 
 /* Vport context events */
 #define SRIOV_VPORT_EVENTS (UC_ADDR_CHANGE | \
-                           MC_ADDR_CHANGE)
+                           MC_ADDR_CHANGE | \
+                           PROMISC_CHANGE)
 
 static int arm_vport_context_events_cmd(struct mlx5_core_dev *dev, u16 vport,
                                        u32 events_mask)
@@ -116,6 +120,9 @@ static int arm_vport_context_events_cmd(struct mlx5_core_dev *dev, u16 vport,
        if (events_mask & MC_ADDR_CHANGE)
                MLX5_SET(nic_vport_context, nic_vport_ctx,
                         event_on_mc_address_change, 1);
+       if (events_mask & PROMISC_CHANGE)
+               MLX5_SET(nic_vport_context, nic_vport_ctx,
+                        event_on_promisc_change, 1);
 
        err = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
        if (err)
@@ -323,30 +330,45 @@ static void del_l2_table_entry(struct mlx5_core_dev *dev, u32 index)
 
 /* E-Switch FDB */
 static struct mlx5_flow_rule *
-esw_fdb_set_vport_rule(struct mlx5_eswitch *esw, u8 mac[ETH_ALEN], u32 vport)
+__esw_fdb_set_vport_rule(struct mlx5_eswitch *esw, u32 vport, bool rx_rule,
+                        u8 mac_c[ETH_ALEN], u8 mac_v[ETH_ALEN])
 {
-       int match_header = MLX5_MATCH_OUTER_HEADERS;
-       struct mlx5_flow_destination dest;
+       int match_header = (is_zero_ether_addr(mac_c) ? 0 :
+                           MLX5_MATCH_OUTER_HEADERS);
        struct mlx5_flow_rule *flow_rule = NULL;
+       struct mlx5_flow_destination dest;
+       void *mv_misc = NULL;
+       void *mc_misc = NULL;
+       u8 *dmac_v = NULL;
+       u8 *dmac_c = NULL;
        u32 *match_v;
        u32 *match_c;
-       u8 *dmac_v;
-       u8 *dmac_c;
 
+       if (rx_rule)
+               match_header |= MLX5_MATCH_MISC_PARAMETERS;
        match_v = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
        match_c = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
        if (!match_v || !match_c) {
                pr_warn("FDB: Failed to alloc match parameters\n");
                goto out;
        }
+
        dmac_v = MLX5_ADDR_OF(fte_match_param, match_v,
                              outer_headers.dmac_47_16);
        dmac_c = MLX5_ADDR_OF(fte_match_param, match_c,
                              outer_headers.dmac_47_16);
 
-       ether_addr_copy(dmac_v, mac);
-       /* Match criteria mask */
-       memset(dmac_c, 0xff, 6);
+       if (match_header & MLX5_MATCH_OUTER_HEADERS) {
+               ether_addr_copy(dmac_v, mac_v);
+               ether_addr_copy(dmac_c, mac_c);
+       }
+
+       if (match_header & MLX5_MATCH_MISC_PARAMETERS) {
+               mv_misc  = MLX5_ADDR_OF(fte_match_param, match_v, misc_parameters);
+               mc_misc  = MLX5_ADDR_OF(fte_match_param, match_c, misc_parameters);
+               MLX5_SET(fte_match_set_misc, mv_misc, source_port, UPLINK_VPORT);
+               MLX5_SET_TO_ONES(fte_match_set_misc, mc_misc, source_port);
+       }
 
        dest.type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
        dest.vport_num = vport;
@@ -373,6 +395,39 @@ out:
        return flow_rule;
 }
 
+static struct mlx5_flow_rule *
+esw_fdb_set_vport_rule(struct mlx5_eswitch *esw, u8 mac[ETH_ALEN], u32 vport)
+{
+       u8 mac_c[ETH_ALEN];
+
+       eth_broadcast_addr(mac_c);
+       return __esw_fdb_set_vport_rule(esw, vport, false, mac_c, mac);
+}
+
+static struct mlx5_flow_rule *
+esw_fdb_set_vport_allmulti_rule(struct mlx5_eswitch *esw, u32 vport)
+{
+       u8 mac_c[ETH_ALEN];
+       u8 mac_v[ETH_ALEN];
+
+       eth_zero_addr(mac_c);
+       eth_zero_addr(mac_v);
+       mac_c[0] = 0x01;
+       mac_v[0] = 0x01;
+       return __esw_fdb_set_vport_rule(esw, vport, false, mac_c, mac_v);
+}
+
+static struct mlx5_flow_rule *
+esw_fdb_set_vport_promisc_rule(struct mlx5_eswitch *esw, u32 vport)
+{
+       u8 mac_c[ETH_ALEN];
+       u8 mac_v[ETH_ALEN];
+
+       eth_zero_addr(mac_c);
+       eth_zero_addr(mac_v);
+       return __esw_fdb_set_vport_rule(esw, vport, true, mac_c, mac_v);
+}
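The three wrappers differ only in their match: the unicast rule fully matches the destination MAC, the allmulti rule masks just the group bit (mac_c = mac_v = 01:00:00:00:00:00), which selects every multicast and broadcast frame and no unicast frame, and the promisc rule matches no header bytes at all, keying instead on the uplink source_port. The group-bit trick in isolation:

    #include <stdio.h>

    /* (dmac[0] & 0x01) is the IEEE group bit: set for multicast and
     * broadcast destinations, clear for unicast ones.
     */
    static int allmulti_match(unsigned char dmac0)
    {
        return (dmac0 & 0x01) == 0x01;
    }

    int main(void)
    {
        printf("01:00:5e:.. -> %d\n", allmulti_match(0x01)); /* mcast: 1 */
        printf("ff:ff:ff:.. -> %d\n", allmulti_match(0xff)); /* bcast: 1 */
        printf("00:11:22:.. -> %d\n", allmulti_match(0x00)); /* ucast: 0 */
        return 0;
    }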
+
 static int esw_create_fdb_table(struct mlx5_eswitch *esw, int nvports)
 {
        int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
@@ -401,34 +456,80 @@ static int esw_create_fdb_table(struct mlx5_eswitch *esw, int nvports)
        memset(flow_group_in, 0, inlen);
 
        table_size = BIT(MLX5_CAP_ESW_FLOWTABLE_FDB(dev, log_max_ft_size));
-       fdb = mlx5_create_flow_table(root_ns, 0, table_size);
+       fdb = mlx5_create_flow_table(root_ns, 0, table_size, 0);
        if (IS_ERR_OR_NULL(fdb)) {
                err = PTR_ERR(fdb);
                esw_warn(dev, "Failed to create FDB Table err %d\n", err);
                goto out;
        }
+       esw->fdb_table.fdb = fdb;
 
+       /* Addresses group : Full match unicast/multicast addresses */
        MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
                 MLX5_MATCH_OUTER_HEADERS);
        match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
        dmac = MLX5_ADDR_OF(fte_match_param, match_criteria, outer_headers.dmac_47_16);
        MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
-       MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, table_size - 1);
+       /* Preserve 2 entries for allmulti and promisc rules */
+       MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, table_size - 3);
        eth_broadcast_addr(dmac);
-
        g = mlx5_create_flow_group(fdb, flow_group_in);
        if (IS_ERR_OR_NULL(g)) {
                err = PTR_ERR(g);
                esw_warn(dev, "Failed to create flow group err(%d)\n", err);
                goto out;
        }
-
        esw->fdb_table.addr_grp = g;
-       esw->fdb_table.fdb = fdb;
+
+       /* Allmulti group : One rule that forwards any mcast traffic */
+       MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
+                MLX5_MATCH_OUTER_HEADERS);
+       MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, table_size - 2);
+       MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, table_size - 2);
+       eth_zero_addr(dmac);
+       dmac[0] = 0x01;
+       g = mlx5_create_flow_group(fdb, flow_group_in);
+       if (IS_ERR_OR_NULL(g)) {
+               err = PTR_ERR(g);
+               esw_warn(dev, "Failed to create allmulti flow group err(%d)\n", err);
+               goto out;
+       }
+       esw->fdb_table.allmulti_grp = g;
+
+       /* Promiscuous group :
+        * One rule that forwards all unmatched traffic from previous groups
+        */
+       eth_zero_addr(dmac);
+       MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
+                MLX5_MATCH_MISC_PARAMETERS);
+       MLX5_SET_TO_ONES(fte_match_param, match_criteria, misc_parameters.source_port);
+       MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, table_size - 1);
+       MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, table_size - 1);
+       g = mlx5_create_flow_group(fdb, flow_group_in);
+       if (IS_ERR_OR_NULL(g)) {
+               err = PTR_ERR(g);
+               esw_warn(dev, "Failed to create promisc flow group err(%d)\n", err);
+               goto out;
+       }
+       esw->fdb_table.promisc_grp = g;
+
 out:
+       if (err) {
+               if (!IS_ERR_OR_NULL(esw->fdb_table.allmulti_grp)) {
+                       mlx5_destroy_flow_group(esw->fdb_table.allmulti_grp);
+                       esw->fdb_table.allmulti_grp = NULL;
+               }
+               if (!IS_ERR_OR_NULL(esw->fdb_table.addr_grp)) {
+                       mlx5_destroy_flow_group(esw->fdb_table.addr_grp);
+                       esw->fdb_table.addr_grp = NULL;
+               }
+               if (!IS_ERR_OR_NULL(esw->fdb_table.fdb)) {
+                       mlx5_destroy_flow_table(esw->fdb_table.fdb);
+                       esw->fdb_table.fdb = NULL;
+               }
+       }
+
        kfree(flow_group_in);
-       if (err && !IS_ERR_OR_NULL(fdb))
-               mlx5_destroy_flow_table(fdb);
        return err;
 }
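The resulting FDB reserves its last two entries, as the comments above spell out; schematically:

    /* FDB layout after this hunk (table_size = 2^log_max_ft_size):
     *
     *     index 0 .. table_size-3 : full-match UC/MC address rules
     *     index table_size-2      : allmulti rule (dmac group bit only)
     *     index table_size-1      : promisc rule  (source_port only)
     *
     * Traffic not claimed by an address rule can still hit the allmulti
     * or promisc rules, per the "unmatched traffic" comment above.
     */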
 
@@ -438,10 +539,14 @@ static void esw_destroy_fdb_table(struct mlx5_eswitch *esw)
                return;
 
        esw_debug(esw->dev, "Destroy FDB Table\n");
+       mlx5_destroy_flow_group(esw->fdb_table.promisc_grp);
+       mlx5_destroy_flow_group(esw->fdb_table.allmulti_grp);
        mlx5_destroy_flow_group(esw->fdb_table.addr_grp);
        mlx5_destroy_flow_table(esw->fdb_table.fdb);
        esw->fdb_table.fdb = NULL;
        esw->fdb_table.addr_grp = NULL;
+       esw->fdb_table.allmulti_grp = NULL;
+       esw->fdb_table.promisc_grp = NULL;
 }
 
 /* E-Switch vport UC/MC lists management */
@@ -511,6 +616,52 @@ static int esw_del_uc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
        return 0;
 }
 
+static void update_allmulti_vports(struct mlx5_eswitch *esw,
+                                  struct vport_addr *vaddr,
+                                  struct esw_mc_addr *esw_mc)
+{
+       u8 *mac = vaddr->node.addr;
+       u32 vport_idx = 0;
+
+       for (vport_idx = 0; vport_idx < esw->total_vports; vport_idx++) {
+               struct mlx5_vport *vport = &esw->vports[vport_idx];
+               struct hlist_head *vport_hash = vport->mc_list;
+               struct vport_addr *iter_vaddr =
+                                       l2addr_hash_find(vport_hash,
+                                                        mac,
+                                                        struct vport_addr);
+               if (IS_ERR_OR_NULL(vport->allmulti_rule) ||
+                   vaddr->vport == vport_idx)
+                       continue;
+               switch (vaddr->action) {
+               case MLX5_ACTION_ADD:
+                       if (iter_vaddr)
+                               continue;
+                       iter_vaddr = l2addr_hash_add(vport_hash, mac,
+                                                    struct vport_addr,
+                                                    GFP_KERNEL);
+                       if (!iter_vaddr) {
+                               esw_warn(esw->dev,
+                                        "ALL-MULTI: Failed to add MAC(%pM) to vport[%d] DB\n",
+                                        mac, vport_idx);
+                               continue;
+                       }
+                       iter_vaddr->vport = vport_idx;
+                       iter_vaddr->flow_rule =
+                                       esw_fdb_set_vport_rule(esw,
+                                                              mac,
+                                                              vport_idx);
+                       break;
+               case MLX5_ACTION_DEL:
+                       if (!iter_vaddr)
+                               continue;
+                       mlx5_del_flow_rule(iter_vaddr->flow_rule);
+                       l2addr_hash_del(iter_vaddr);
+                       break;
+               }
+       }
+}
+
 static int esw_add_mc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
 {
        struct hlist_head *hash = esw->mc_table;
@@ -531,8 +682,17 @@ static int esw_add_mc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
 
        esw_mc->uplink_rule = /* Forward MC MAC to Uplink */
                esw_fdb_set_vport_rule(esw, mac, UPLINK_VPORT);
+
+       /* Add this multicast mac to all the mc promiscuous vports */
+       update_allmulti_vports(esw, vaddr, esw_mc);
+
 add:
-       esw_mc->refcnt++;
+       /* If the multicast mac was added only because of an mc promiscuous
+        * vport, don't increment the multicast ref count.
+        */
+       if (!vaddr->mc_promisc)
+               esw_mc->refcnt++;
+
        /* Forward MC MAC to vport */
        vaddr->flow_rule = esw_fdb_set_vport_rule(esw, mac, vport);
        esw_debug(esw->dev,
@@ -568,9 +728,15 @@ static int esw_del_mc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
                mlx5_del_flow_rule(vaddr->flow_rule);
        vaddr->flow_rule = NULL;
 
-       if (--esw_mc->refcnt)
+       /* If the multicast mac was added only because of an mc promiscuous
+        * vport, don't decrement the multicast ref count.
+        */
+       if (vaddr->mc_promisc || (--esw_mc->refcnt > 0))
                return 0;
 
+       /* Remove this multicast mac from all the mc promiscuous vports */
+       update_allmulti_vports(esw, vaddr, esw_mc);
+
        if (esw_mc->uplink_rule)
                mlx5_del_flow_rule(esw_mc->uplink_rule);
 
@@ -643,10 +809,13 @@ static void esw_update_vport_addr_list(struct mlx5_eswitch *esw,
                addr->action = MLX5_ACTION_DEL;
        }
 
+       if (!vport->enabled)
+               goto out;
+
        err = mlx5_query_nic_vport_mac_list(esw->dev, vport_num, list_type,
                                            mac_list, &size);
        if (err)
-               return;
+               goto out;
        esw_debug(esw->dev, "vport[%d] context update %s list size (%d)\n",
                  vport_num, is_uc ? "UC" : "MC", size);
 
@@ -660,6 +829,24 @@ static void esw_update_vport_addr_list(struct mlx5_eswitch *esw,
                addr = l2addr_hash_find(hash, mac_list[i], struct vport_addr);
                if (addr) {
                        addr->action = MLX5_ACTION_NONE;
+                       /* If this mac was previously added because of allmulti
+                        * promiscuous rx mode, it is now converted to a
+                        * regular vport mac.
+                        */
+                       if (addr->mc_promisc) {
+                               struct esw_mc_addr *esw_mc =
+                                       l2addr_hash_find(esw->mc_table,
+                                                        mac_list[i],
+                                                        struct esw_mc_addr);
+                               if (!esw_mc) {
+                                       esw_warn(esw->dev,
+                                                "Failed to find MAC(%pM) in mcast DB\n",
+                                                mac_list[i]);
+                                       continue;
+                               }
+                               esw_mc->refcnt++;
+                               addr->mc_promisc = false;
+                       }
                        continue;
                }
 
@@ -674,13 +861,121 @@ static void esw_update_vport_addr_list(struct mlx5_eswitch *esw,
                addr->vport = vport_num;
                addr->action = MLX5_ACTION_ADD;
        }
+out:
        kfree(mac_list);
 }
 
-static void esw_vport_change_handler(struct work_struct *work)
+/* Sync the vport MC list with the e-switch multicast table, adding entries
+ * that exist only because of allmulti promiscuous rx mode (mc_promisc).
+ * Must be called after esw_update_vport_addr_list.
+ */
+static void esw_update_vport_mc_promisc(struct mlx5_eswitch *esw, u32 vport_num)
+{
+       struct mlx5_vport *vport = &esw->vports[vport_num];
+       struct l2addr_node *node;
+       struct vport_addr *addr;
+       struct hlist_head *hash;
+       struct hlist_node *tmp;
+       int hi;
+
+       hash = vport->mc_list;
+
+       for_each_l2hash_node(node, tmp, esw->mc_table, hi) {
+               u8 *mac = node->addr;
+
+               addr = l2addr_hash_find(hash, mac, struct vport_addr);
+               if (addr) {
+                       if (addr->action == MLX5_ACTION_DEL)
+                               addr->action = MLX5_ACTION_NONE;
+                       continue;
+               }
+               addr = l2addr_hash_add(hash, mac, struct vport_addr,
+                                      GFP_KERNEL);
+               if (!addr) {
+                       esw_warn(esw->dev,
+                                "Failed to add allmulti MAC(%pM) to vport[%d] DB\n",
+                                mac, vport_num);
+                       continue;
+               }
+               addr->vport = vport_num;
+               addr->action = MLX5_ACTION_ADD;
+               addr->mc_promisc = true;
+       }
+}
+
+/* Apply vport rx mode to HW FDB table */
+static void esw_apply_vport_rx_mode(struct mlx5_eswitch *esw, u32 vport_num,
+                                   bool promisc, bool mc_promisc)
+{
+       struct esw_mc_addr *allmulti_addr = esw->mc_promisc;
+       struct mlx5_vport *vport = &esw->vports[vport_num];
+
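+       /* Skip the allmulti stage when the current state already matches
+        * the request: a rule is considered installed when IS_ERR_OR_NULL()
+        * is false.
+        */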
+       if (IS_ERR_OR_NULL(vport->allmulti_rule) != mc_promisc)
+               goto promisc;
+
+       if (mc_promisc) {
+               vport->allmulti_rule =
+                               esw_fdb_set_vport_allmulti_rule(esw, vport_num);
+               if (!allmulti_addr->uplink_rule)
+                       allmulti_addr->uplink_rule =
+                               esw_fdb_set_vport_allmulti_rule(esw,
+                                                               UPLINK_VPORT);
+               allmulti_addr->refcnt++;
+       } else if (vport->allmulti_rule) {
+               mlx5_del_flow_rule(vport->allmulti_rule);
+               vport->allmulti_rule = NULL;
+
+               if (--allmulti_addr->refcnt > 0)
+                       goto promisc;
+
+               if (allmulti_addr->uplink_rule)
+                       mlx5_del_flow_rule(allmulti_addr->uplink_rule);
+               allmulti_addr->uplink_rule = NULL;
+       }
+
+promisc:
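+       /* Same idempotency check for the unicast promisc rule */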
+       if (IS_ERR_OR_NULL(vport->promisc_rule) != promisc)
+               return;
+
+       if (promisc) {
+               vport->promisc_rule = esw_fdb_set_vport_promisc_rule(esw,
+                                                                    vport_num);
+       } else if (vport->promisc_rule) {
+               mlx5_del_flow_rule(vport->promisc_rule);
+               vport->promisc_rule = NULL;
+       }
+}
+
+/* Sync vport rx mode from vport context */
+static void esw_update_vport_rx_mode(struct mlx5_eswitch *esw, u32 vport_num)
+{
+       struct mlx5_vport *vport = &esw->vports[vport_num];
+       int promisc_all = 0;
+       int promisc_uc = 0;
+       int promisc_mc = 0;
+       int err;
+
+       err = mlx5_query_nic_vport_promisc(esw->dev,
+                                          vport_num,
+                                          &promisc_uc,
+                                          &promisc_mc,
+                                          &promisc_all);
+       if (err)
+               return;
+       esw_debug(esw->dev, "vport[%d] context update rx mode promisc_all=%d, all_multi=%d\n",
+                 vport_num, promisc_all, promisc_mc);
+
+       if (!vport->trusted || !vport->enabled) {
+               promisc_uc = 0;
+               promisc_mc = 0;
+               promisc_all = 0;
+       }
+
+       esw_apply_vport_rx_mode(esw, vport_num, promisc_all,
+                               (promisc_all || promisc_mc));
+}
+
+static void esw_vport_change_handle_locked(struct mlx5_vport *vport)
 {
-       struct mlx5_vport *vport =
-               container_of(work, struct mlx5_vport, vport_change_handler);
        struct mlx5_core_dev *dev = vport->dev;
        struct mlx5_eswitch *esw = dev->priv.eswitch;
        u8 mac[ETH_ALEN];
@@ -699,6 +994,15 @@ static void esw_vport_change_handler(struct work_struct *work)
        if (vport->enabled_events & MC_ADDR_CHANGE) {
                esw_update_vport_addr_list(esw, vport->vport,
                                           MLX5_NVPRT_LIST_TYPE_MC);
+       }
+
+       if (vport->enabled_events & PROMISC_CHANGE) {
+               esw_update_vport_rx_mode(esw, vport->vport);
+               if (!IS_ERR_OR_NULL(vport->allmulti_rule))
+                       esw_update_vport_mc_promisc(esw, vport->vport);
+       }
+
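+       /* Both MC list changes and promisc changes touch the MC FDB
+        * rules, so apply the MC list in either case.
+        */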
+       if (vport->enabled_events & (PROMISC_CHANGE | MC_ADDR_CHANGE)) {
                esw_apply_vport_addr_list(esw, vport->vport,
                                          MLX5_NVPRT_LIST_TYPE_MC);
        }
@@ -709,15 +1013,477 @@ static void esw_vport_change_handler(struct work_struct *work)
                                             vport->enabled_events);
 }
 
+static void esw_vport_change_handler(struct work_struct *work)
+{
+       struct mlx5_vport *vport =
+               container_of(work, struct mlx5_vport, vport_change_handler);
+       struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
+
+       mutex_lock(&esw->state_lock);
+       esw_vport_change_handle_locked(vport);
+       mutex_unlock(&esw->state_lock);
+}
+
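+/* Create the per-vport egress ACL table with two flow groups: one matching
+ * the allowed VST vlan and a catch-all drop group. The actual flow rules
+ * are installed later by esw_vport_egress_config().
+ */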
+static void esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
+                                       struct mlx5_vport *vport)
+{
+       int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+       struct mlx5_flow_group *vlan_grp = NULL;
+       struct mlx5_flow_group *drop_grp = NULL;
+       struct mlx5_core_dev *dev = esw->dev;
+       struct mlx5_flow_namespace *root_ns;
+       struct mlx5_flow_table *acl;
+       void *match_criteria;
+       u32 *flow_group_in;
+       /* The egress acl table contains 2 rules:
+        * 1) Allow traffic with vlan_tag=vst_vlan_id
+        * 2) Drop all other traffic.
+        */
+       int table_size = 2;
+       int err = 0;
+
+       if (!MLX5_CAP_ESW_EGRESS_ACL(dev, ft_support) ||
+           !IS_ERR_OR_NULL(vport->egress.acl))
+               return;
+
+       esw_debug(dev, "Create vport[%d] egress ACL log_max_size(%d)\n",
+                 vport->vport, MLX5_CAP_ESW_EGRESS_ACL(dev, log_max_ft_size));
+
+       root_ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_EGRESS);
+       if (!root_ns) {
+               esw_warn(dev, "Failed to get E-Switch egress flow namespace\n");
+               return;
+       }
+
+       flow_group_in = mlx5_vzalloc(inlen);
+       if (!flow_group_in)
+               return;
+
+       acl = mlx5_create_vport_flow_table(root_ns, 0, table_size, 0, vport->vport);
+       if (IS_ERR_OR_NULL(acl)) {
+               err = PTR_ERR(acl);
+               esw_warn(dev, "Failed to create E-Switch vport[%d] egress flow Table, err(%d)\n",
+                        vport->vport, err);
+               goto out;
+       }
+
+       MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+       match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
+       MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.vlan_tag);
+       MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.first_vid);
+       MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
+       MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 0);
+
+       vlan_grp = mlx5_create_flow_group(acl, flow_group_in);
+       if (IS_ERR_OR_NULL(vlan_grp)) {
+               err = PTR_ERR(vlan_grp);
+               esw_warn(dev, "Failed to create E-Switch vport[%d] egress allowed vlans flow group, err(%d)\n",
+                        vport->vport, err);
+               goto out;
+       }
+
+       memset(flow_group_in, 0, inlen);
+       MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 1);
+       MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 1);
+       drop_grp = mlx5_create_flow_group(acl, flow_group_in);
+       if (IS_ERR_OR_NULL(drop_grp)) {
+               err = PTR_ERR(drop_grp);
+               esw_warn(dev, "Failed to create E-Switch vport[%d] egress drop flow group, err(%d)\n",
+                        vport->vport, err);
+               goto out;
+       }
+
+       vport->egress.acl = acl;
+       vport->egress.drop_grp = drop_grp;
+       vport->egress.allowed_vlans_grp = vlan_grp;
+out:
+       kfree(flow_group_in);
+       if (err && !IS_ERR_OR_NULL(vlan_grp))
+               mlx5_destroy_flow_group(vlan_grp);
+       if (err && !IS_ERR_OR_NULL(acl))
+               mlx5_destroy_flow_table(acl);
+}
+
+static void esw_vport_cleanup_egress_rules(struct mlx5_eswitch *esw,
+                                          struct mlx5_vport *vport)
+{
+       if (!IS_ERR_OR_NULL(vport->egress.allowed_vlan))
+               mlx5_del_flow_rule(vport->egress.allowed_vlan);
+
+       if (!IS_ERR_OR_NULL(vport->egress.drop_rule))
+               mlx5_del_flow_rule(vport->egress.drop_rule);
+
+       vport->egress.allowed_vlan = NULL;
+       vport->egress.drop_rule = NULL;
+}
+
+static void esw_vport_disable_egress_acl(struct mlx5_eswitch *esw,
+                                        struct mlx5_vport *vport)
+{
+       if (IS_ERR_OR_NULL(vport->egress.acl))
+               return;
+
+       esw_debug(esw->dev, "Destroy vport[%d] E-Switch egress ACL\n", vport->vport);
+
+       esw_vport_cleanup_egress_rules(esw, vport);
+       mlx5_destroy_flow_group(vport->egress.allowed_vlans_grp);
+       mlx5_destroy_flow_group(vport->egress.drop_grp);
+       mlx5_destroy_flow_table(vport->egress.acl);
+       vport->egress.allowed_vlans_grp = NULL;
+       vport->egress.drop_grp = NULL;
+       vport->egress.acl = NULL;
+}
+
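+/* Create the per-vport ingress ACL table and its four flow groups as
+ * described below. The actual flow rules are installed later by
+ * esw_vport_ingress_config().
+ */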
+static void esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
+                                        struct mlx5_vport *vport)
+{
+       int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+       struct mlx5_core_dev *dev = esw->dev;
+       struct mlx5_flow_namespace *root_ns;
+       struct mlx5_flow_table *acl;
+       struct mlx5_flow_group *g;
+       void *match_criteria;
+       u32 *flow_group_in;
+       /* The ingress acl table contains 4 groups
+        * (at most 2 rules are active at the same time:
+        *      1 allow rule from one of the first 3 groups and
+        *      1 drop rule from the last group):
+        * 1) Allow untagged traffic with smac=original mac.
+        * 2) Allow untagged traffic.
+        * 3) Allow traffic with smac=original mac.
+        * 4) Drop all other traffic.
+        */
+       int table_size = 4;
+       int err = 0;
+
+       if (!MLX5_CAP_ESW_INGRESS_ACL(dev, ft_support) ||
+           !IS_ERR_OR_NULL(vport->ingress.acl))
+               return;
+
+       esw_debug(dev, "Create vport[%d] ingress ACL log_max_size(%d)\n",
+                 vport->vport, MLX5_CAP_ESW_INGRESS_ACL(dev, log_max_ft_size));
+
+       root_ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS);
+       if (!root_ns) {
+               esw_warn(dev, "Failed to get E-Switch ingress flow namespace\n");
+               return;
+       }
+
+       flow_group_in = mlx5_vzalloc(inlen);
+       if (!flow_group_in)
+               return;
+
+       acl = mlx5_create_vport_flow_table(root_ns, 0, table_size, 0, vport->vport);
+       if (IS_ERR_OR_NULL(acl)) {
+               err = PTR_ERR(acl);
+               esw_warn(dev, "Failed to create E-Switch vport[%d] ingress flow Table, err(%d)\n",
+                        vport->vport, err);
+               goto out;
+       }
+       vport->ingress.acl = acl;
+
+       match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
+
+       MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+       MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.vlan_tag);
+       MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_47_16);
+       MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_15_0);
+       MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
+       MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 0);
+
+       g = mlx5_create_flow_group(acl, flow_group_in);
+       if (IS_ERR_OR_NULL(g)) {
+               err = PTR_ERR(g);
+               esw_warn(dev, "Failed to create E-Switch vport[%d] ingress untagged spoofchk flow group, err(%d)\n",
+                        vport->vport, err);
+               goto out;
+       }
+       vport->ingress.allow_untagged_spoofchk_grp = g;
+
+       memset(flow_group_in, 0, inlen);
+       MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+       MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.vlan_tag);
+       MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 1);
+       MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 1);
+
+       g = mlx5_create_flow_group(acl, flow_group_in);
+       if (IS_ERR_OR_NULL(g)) {
+               err = PTR_ERR(g);
+               esw_warn(dev, "Failed to create E-Switch vport[%d] ingress untagged flow group, err(%d)\n",
+                        vport->vport, err);
+               goto out;
+       }
+       vport->ingress.allow_untagged_only_grp = g;
+
+       memset(flow_group_in, 0, inlen);
+       MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+       MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_47_16);
+       MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_15_0);
+       MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 2);
+       MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 2);
+
+       g = mlx5_create_flow_group(acl, flow_group_in);
+       if (IS_ERR_OR_NULL(g)) {
+               err = PTR_ERR(g);
+               esw_warn(dev, "Failed to create E-Switch vport[%d] ingress spoofchk flow group, err(%d)\n",
+                        vport->vport, err);
+               goto out;
+       }
+       vport->ingress.allow_spoofchk_only_grp = g;
+
+       memset(flow_group_in, 0, inlen);
+       MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 3);
+       MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 3);
+
+       g = mlx5_create_flow_group(acl, flow_group_in);
+       if (IS_ERR_OR_NULL(g)) {
+               err = PTR_ERR(g);
+               esw_warn(dev, "Failed to create E-Switch vport[%d] ingress drop flow group, err(%d)\n",
+                        vport->vport, err);
+               goto out;
+       }
+       vport->ingress.drop_grp = g;
+
+out:
+       if (err) {
+               if (!IS_ERR_OR_NULL(vport->ingress.allow_spoofchk_only_grp))
+                       mlx5_destroy_flow_group(
+                                       vport->ingress.allow_spoofchk_only_grp);
+               if (!IS_ERR_OR_NULL(vport->ingress.allow_untagged_only_grp))
+                       mlx5_destroy_flow_group(
+                                       vport->ingress.allow_untagged_only_grp);
+               if (!IS_ERR_OR_NULL(vport->ingress.allow_untagged_spoofchk_grp))
+                       mlx5_destroy_flow_group(
+                               vport->ingress.allow_untagged_spoofchk_grp);
+               if (!IS_ERR_OR_NULL(vport->ingress.acl))
+                       mlx5_destroy_flow_table(vport->ingress.acl);
+       }
+
+       kfree(flow_group_in);
+}
+
+static void esw_vport_cleanup_ingress_rules(struct mlx5_eswitch *esw,
+                                           struct mlx5_vport *vport)
+{
+       if (!IS_ERR_OR_NULL(vport->ingress.drop_rule))
+               mlx5_del_flow_rule(vport->ingress.drop_rule);
+
+       if (!IS_ERR_OR_NULL(vport->ingress.allow_rule))
+               mlx5_del_flow_rule(vport->ingress.allow_rule);
+
+       vport->ingress.drop_rule = NULL;
+       vport->ingress.allow_rule = NULL;
+}
+
+static void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
+                                         struct mlx5_vport *vport)
+{
+       if (IS_ERR_OR_NULL(vport->ingress.acl))
+               return;
+
+       esw_debug(esw->dev, "Destroy vport[%d] E-Switch ingress ACL\n", vport->vport);
+
+       esw_vport_cleanup_ingress_rules(esw, vport);
+       mlx5_destroy_flow_group(vport->ingress.allow_spoofchk_only_grp);
+       mlx5_destroy_flow_group(vport->ingress.allow_untagged_only_grp);
+       mlx5_destroy_flow_group(vport->ingress.allow_untagged_spoofchk_grp);
+       mlx5_destroy_flow_group(vport->ingress.drop_grp);
+       mlx5_destroy_flow_table(vport->ingress.acl);
+       vport->ingress.acl = NULL;
+       vport->ingress.drop_grp = NULL;
+       vport->ingress.allow_spoofchk_only_grp = NULL;
+       vport->ingress.allow_untagged_only_grp = NULL;
+       vport->ingress.allow_untagged_spoofchk_grp = NULL;
+}
+
+static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
+                                   struct mlx5_vport *vport)
+{
+       u8 smac[ETH_ALEN];
+       u32 *match_v;
+       u32 *match_c;
+       int err = 0;
+       u8 *smac_v;
+
+       if (vport->spoofchk) {
+               err = mlx5_query_nic_vport_mac_address(esw->dev, vport->vport, smac);
+               if (err) {
+                       esw_warn(esw->dev,
+                                "vport[%d] configure ingress rules failed, query smac failed, err(%d)\n",
+                                vport->vport, err);
+                       return err;
+               }
+
+               if (!is_valid_ether_addr(smac)) {
+                       mlx5_core_warn(esw->dev,
+                                      "vport[%d] configure ingress rules failed, illegal mac with spoofchk\n",
+                                      vport->vport);
+                       return -EPERM;
+               }
+       }
+
+       esw_vport_cleanup_ingress_rules(esw, vport);
+
+       if (!vport->vlan && !vport->qos && !vport->spoofchk) {
+               esw_vport_disable_ingress_acl(esw, vport);
+               return 0;
+       }
+
+       esw_vport_enable_ingress_acl(esw, vport);
+
+       esw_debug(esw->dev,
+                 "vport[%d] configure ingress rules, vlan(%d) qos(%d)\n",
+                 vport->vport, vport->vlan, vport->qos);
+
+       match_v = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
+       match_c = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
+       if (!match_v || !match_c) {
+               err = -ENOMEM;
+               esw_warn(esw->dev, "vport[%d] configure ingress rules failed, err(%d)\n",
+                        vport->vport, err);
+               goto out;
+       }
+
+       if (vport->vlan || vport->qos)
+               MLX5_SET_TO_ONES(fte_match_param, match_c, outer_headers.vlan_tag);
+
+       if (vport->spoofchk) {
+               MLX5_SET_TO_ONES(fte_match_param, match_c, outer_headers.smac_47_16);
+               MLX5_SET_TO_ONES(fte_match_param, match_c, outer_headers.smac_15_0);
+               smac_v = MLX5_ADDR_OF(fte_match_param,
+                                     match_v,
+                                     outer_headers.smac_47_16);
+               ether_addr_copy(smac_v, smac);
+       }
+
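+       /* Allow rule: with match_v mostly zeroed this matches untagged
+        * frames (vlan_tag == 0) and/or frames whose smac equals the vport
+        * permanent MAC, depending on the criteria enabled above.
+        */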
+       vport->ingress.allow_rule =
+               mlx5_add_flow_rule(vport->ingress.acl,
+                                  MLX5_MATCH_OUTER_HEADERS,
+                                  match_c,
+                                  match_v,
+                                  MLX5_FLOW_CONTEXT_ACTION_ALLOW,
+                                  0, NULL);
+       if (IS_ERR_OR_NULL(vport->ingress.allow_rule)) {
+               err = PTR_ERR(vport->ingress.allow_rule);
+               pr_warn("vport[%d] configure ingress allow rule failed, err(%d)\n",
+                       vport->vport, err);
+               vport->ingress.allow_rule = NULL;
+               goto out;
+       }
+
+       memset(match_c, 0, MLX5_ST_SZ_BYTES(fte_match_param));
+       memset(match_v, 0, MLX5_ST_SZ_BYTES(fte_match_param));
+       vport->ingress.drop_rule =
+               mlx5_add_flow_rule(vport->ingress.acl,
+                                  0,
+                                  match_c,
+                                  match_v,
+                                  MLX5_FLOW_CONTEXT_ACTION_DROP,
+                                  0, NULL);
+       if (IS_ERR_OR_NULL(vport->ingress.drop_rule)) {
+               err = PTR_ERR(vport->ingress.drop_rule);
+               pr_warn("vport[%d] configure ingress drop rule failed, err(%d)\n",
+                       vport->vport, err);
+               vport->ingress.drop_rule = NULL;
+               goto out;
+       }
+
+out:
+       if (err)
+               esw_vport_cleanup_ingress_rules(esw, vport);
+
+       kfree(match_v);
+       kfree(match_c);
+       return err;
+}
+
+static int esw_vport_egress_config(struct mlx5_eswitch *esw,
+                                  struct mlx5_vport *vport)
+{
+       u32 *match_v;
+       u32 *match_c;
+       int err = 0;
+
+       esw_vport_cleanup_egress_rules(esw, vport);
+
+       if (!vport->vlan && !vport->qos) {
+               esw_vport_disable_egress_acl(esw, vport);
+               return 0;
+       }
+
+       esw_vport_enable_egress_acl(esw, vport);
+
+       esw_debug(esw->dev,
+                 "vport[%d] configure egress rules, vlan(%d) qos(%d)\n",
+                 vport->vport, vport->vlan, vport->qos);
+
+       match_v = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
+       match_c = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
+       if (!match_v || !match_c) {
+               err = -ENOMEM;
+               esw_warn(esw->dev, "vport[%d] configure egress rules failed, err(%d)\n",
+                        vport->vport, err);
+               goto out;
+       }
+
+       /* Allowed vlan rule */
+       MLX5_SET_TO_ONES(fte_match_param, match_c, outer_headers.vlan_tag);
+       MLX5_SET_TO_ONES(fte_match_param, match_v, outer_headers.vlan_tag);
+       MLX5_SET_TO_ONES(fte_match_param, match_c, outer_headers.first_vid);
+       MLX5_SET(fte_match_param, match_v, outer_headers.first_vid, vport->vlan);
+
+       vport->egress.allowed_vlan =
+               mlx5_add_flow_rule(vport->egress.acl,
+                                  MLX5_MATCH_OUTER_HEADERS,
+                                  match_c,
+                                  match_v,
+                                  MLX5_FLOW_CONTEXT_ACTION_ALLOW,
+                                  0, NULL);
+       if (IS_ERR_OR_NULL(vport->egress.allowed_vlan)) {
+               err = PTR_ERR(vport->egress.allowed_vlan);
+               pr_warn("vport[%d] configure egress allowed vlan rule failed, err(%d)\n",
+                       vport->vport, err);
+               vport->egress.allowed_vlan = NULL;
+               goto out;
+       }
+
+       /* Drop others rule (star rule) */
+       memset(match_c, 0, MLX5_ST_SZ_BYTES(fte_match_param));
+       memset(match_v, 0, MLX5_ST_SZ_BYTES(fte_match_param));
+       vport->egress.drop_rule =
+               mlx5_add_flow_rule(vport->egress.acl,
+                                  0,
+                                  match_c,
+                                  match_v,
+                                  MLX5_FLOW_CONTEXT_ACTION_DROP,
+                                  0, NULL);
+       if (IS_ERR_OR_NULL(vport->egress.drop_rule)) {
+               err = PTR_ERR(vport->egress.drop_rule);
+               pr_warn("vport[%d] configure egress drop rule failed, err(%d)\n",
+                       vport->vport, err);
+               vport->egress.drop_rule = NULL;
+       }
+out:
+       kfree(match_v);
+       kfree(match_c);
+       return err;
+}
+
 static void esw_enable_vport(struct mlx5_eswitch *esw, int vport_num,
                             int enable_events)
 {
        struct mlx5_vport *vport = &esw->vports[vport_num];
-       unsigned long flags;
 
+       mutex_lock(&esw->state_lock);
        WARN_ON(vport->enabled);
 
        esw_debug(esw->dev, "Enabling VPORT(%d)\n", vport_num);
+
+       if (vport_num) { /* Only VFs need ACLs for VST and spoofchk filtering */
+               esw_vport_ingress_config(esw, vport);
+               esw_vport_egress_config(esw, vport);
+       }
+
        mlx5_modify_vport_admin_state(esw->dev,
                                      MLX5_QUERY_VPORT_STATE_IN_OP_MOD_ESW_VPORT,
                                      vport_num,
@@ -725,53 +1491,32 @@ static void esw_enable_vport(struct mlx5_eswitch *esw, int vport_num,
 
        /* Sync with current vport context */
        vport->enabled_events = enable_events;
-       esw_vport_change_handler(&vport->vport_change_handler);
+       esw_vport_change_handle_locked(vport);
 
-       spin_lock_irqsave(&vport->lock, flags);
        vport->enabled = true;
-       spin_unlock_irqrestore(&vport->lock, flags);
+
+       /* only PF is trusted by default */
+       vport->trusted = !vport_num;
 
        arm_vport_context_events_cmd(esw->dev, vport_num, enable_events);
 
        esw->enabled_vports++;
        esw_debug(esw->dev, "Enabled VPORT(%d)\n", vport_num);
-}
-
-static void esw_cleanup_vport(struct mlx5_eswitch *esw, u16 vport_num)
-{
-       struct mlx5_vport *vport = &esw->vports[vport_num];
-       struct l2addr_node *node;
-       struct vport_addr *addr;
-       struct hlist_node *tmp;
-       int hi;
-
-       for_each_l2hash_node(node, tmp, vport->uc_list, hi) {
-               addr = container_of(node, struct vport_addr, node);
-               addr->action = MLX5_ACTION_DEL;
-       }
-       esw_apply_vport_addr_list(esw, vport_num, MLX5_NVPRT_LIST_TYPE_UC);
-
-       for_each_l2hash_node(node, tmp, vport->mc_list, hi) {
-               addr = container_of(node, struct vport_addr, node);
-               addr->action = MLX5_ACTION_DEL;
-       }
-       esw_apply_vport_addr_list(esw, vport_num, MLX5_NVPRT_LIST_TYPE_MC);
+       mutex_unlock(&esw->state_lock);
 }
 
 static void esw_disable_vport(struct mlx5_eswitch *esw, int vport_num)
 {
        struct mlx5_vport *vport = &esw->vports[vport_num];
-       unsigned long flags;
 
        if (!vport->enabled)
                return;
 
        esw_debug(esw->dev, "Disabling vport(%d)\n", vport_num);
        /* Mark this vport as disabled to discard new events */
-       spin_lock_irqsave(&vport->lock, flags);
        vport->enabled = false;
-       vport->enabled_events = 0;
-       spin_unlock_irqrestore(&vport->lock, flags);
+
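+       /* Make sure any in-flight async EQ handler observes
+        * enabled == false before teardown continues, since
+        * mlx5_eswitch_vport_event now checks it without a lock.
+        */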
+       synchronize_irq(mlx5_get_msix_vec(esw->dev, MLX5_EQ_VEC_ASYNC));
 
        mlx5_modify_vport_admin_state(esw->dev,
                                      MLX5_QUERY_VPORT_STATE_IN_OP_MOD_ESW_VPORT,
@@ -781,9 +1526,19 @@ static void esw_disable_vport(struct mlx5_eswitch *esw, int vport_num)
        flush_workqueue(esw->work_queue);
        /* Disable events from this vport */
        arm_vport_context_events_cmd(esw->dev, vport->vport, 0);
-       /* We don't assume VFs will cleanup after themselves */
-       esw_cleanup_vport(esw, vport_num);
+       mutex_lock(&esw->state_lock);
+       /* We don't assume VFs will clean up after themselves.
+        * Calling the vport change handler while the vport is disabled will
+        * clean up the vport resources.
+        */
+       esw_vport_change_handle_locked(vport);
+       vport->enabled_events = 0;
+       if (vport_num) {
+               esw_vport_disable_egress_acl(esw, vport);
+               esw_vport_disable_ingress_acl(esw, vport);
+       }
        esw->enabled_vports--;
+       mutex_unlock(&esw->state_lock);
 }
 
 /* Public E-Switch API */
@@ -802,6 +1557,12 @@ int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs)
                return -ENOTSUPP;
        }
 
+       if (!MLX5_CAP_ESW_INGRESS_ACL(esw->dev, ft_support))
+               esw_warn(esw->dev, "E-Switch ingress ACL is not supported by FW\n");
+
+       if (!MLX5_CAP_ESW_EGRESS_ACL(esw->dev, ft_support))
+               esw_warn(esw->dev, "E-Switch egress ACL is not supported by FW\n");
+
        esw_info(esw->dev, "E-Switch enable SRIOV: nvfs(%d)\n", nvfs);
 
        esw_disable_vport(esw, 0);
@@ -824,6 +1585,7 @@ abort:
 
 void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw)
 {
+       struct esw_mc_addr *mc_promisc;
        int i;
 
        if (!esw || !MLX5_CAP_GEN(esw->dev, vport_group_manager) ||
@@ -833,9 +1595,14 @@ void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw)
        esw_info(esw->dev, "disable SRIOV: active vports(%d)\n",
                 esw->enabled_vports);
 
+       mc_promisc = esw->mc_promisc;
+
        for (i = 0; i < esw->total_vports; i++)
                esw_disable_vport(esw, i);
 
+       if (mc_promisc && mc_promisc->uplink_rule)
+               mlx5_del_flow_rule(mc_promisc->uplink_rule);
+
        esw_destroy_fdb_table(esw);
 
        /* VPORT 0 (PF) must be enabled back with non-sriov configuration */
@@ -845,7 +1612,8 @@ void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw)
 int mlx5_eswitch_init(struct mlx5_core_dev *dev)
 {
        int l2_table_size = 1 << MLX5_CAP_GEN(dev, log_max_l2_table);
-       int total_vports = 1 + pci_sriov_get_totalvfs(dev->pdev);
+       int total_vports = MLX5_TOTAL_VPORTS(dev);
+       struct esw_mc_addr *mc_promisc;
        struct mlx5_eswitch *esw;
        int vport_num;
        int err;
@@ -874,6 +1642,13 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
        }
        esw->l2_table.size = l2_table_size;
 
+       mc_promisc = kzalloc(sizeof(*mc_promisc), GFP_KERNEL);
+       if (!mc_promisc) {
+               err = -ENOMEM;
+               goto abort;
+       }
+       esw->mc_promisc = mc_promisc;
+
        esw->work_queue = create_singlethread_workqueue("mlx5_esw_wq");
        if (!esw->work_queue) {
                err = -ENOMEM;
@@ -887,6 +1662,8 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
                goto abort;
        }
 
+       mutex_init(&esw->state_lock);
+
        for (vport_num = 0; vport_num < total_vports; vport_num++) {
                struct mlx5_vport *vport = &esw->vports[vport_num];
 
@@ -894,7 +1671,6 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
                vport->dev = dev;
                INIT_WORK(&vport->vport_change_handler,
                          esw_vport_change_handler);
-               spin_lock_init(&vport->lock);
        }
 
        esw->total_vports = total_vports;
@@ -925,6 +1701,7 @@ void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw)
        esw->dev->priv.eswitch = NULL;
        destroy_workqueue(esw->work_queue);
        kfree(esw->l2_table.bitmap);
+       kfree(esw->mc_promisc);
        kfree(esw->vports);
        kfree(esw);
 }
@@ -942,10 +1719,8 @@ void mlx5_eswitch_vport_event(struct mlx5_eswitch *esw, struct mlx5_eqe *eqe)
        }
 
        vport = &esw->vports[vport_num];
-       spin_lock(&vport->lock);
        if (vport->enabled)
                queue_work(esw->work_queue, &vport->vport_change_handler);
-       spin_unlock(&vport->lock);
 }
 
 /* Vport Administration */
@@ -957,12 +1732,22 @@ int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
                               int vport, u8 mac[ETH_ALEN])
 {
        int err = 0;
+       struct mlx5_vport *evport;
 
        if (!ESW_ALLOWED(esw))
                return -EPERM;
        if (!LEGAL_VPORT(esw, vport))
                return -EINVAL;
 
+       evport = &esw->vports[vport];
+
+       if (evport->spoofchk && !is_valid_ether_addr(mac)) {
+               mlx5_core_warn(esw->dev,
+                              "MAC invalidation is not allowed when spoofchk is on, vport(%d)\n",
+                              vport);
+               return -EPERM;
+       }
+
        err = mlx5_modify_nic_vport_mac_address(esw->dev, vport, mac);
        if (err) {
                mlx5_core_warn(esw->dev,
@@ -971,6 +1756,11 @@ int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
                return err;
        }
 
+       mutex_lock(&esw->state_lock);
+       if (evport->enabled)
+               err = esw_vport_ingress_config(esw, evport);
+       mutex_unlock(&esw->state_lock);
+
        return err;
 }
 
@@ -990,6 +1780,7 @@ int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw,
 int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
                                  int vport, struct ifla_vf_info *ivi)
 {
+       struct mlx5_vport *evport;
        u16 vlan;
        u8 qos;
 
@@ -998,6 +1789,8 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
        if (!LEGAL_VPORT(esw, vport))
                return -EINVAL;
 
+       evport = &esw->vports[vport];
+
        memset(ivi, 0, sizeof(*ivi));
        ivi->vf = vport - 1;
 
@@ -1008,7 +1801,7 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
        query_esw_vport_cvlan(esw->dev, vport, &vlan, &qos);
        ivi->vlan = vlan;
        ivi->qos = qos;
-       ivi->spoofchk = 0;
+       ivi->spoofchk = evport->spoofchk;
 
        return 0;
 }
@@ -1016,6 +1809,8 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
 int mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
                                int vport, u16 vlan, u8 qos)
 {
+       struct mlx5_vport *evport;
+       int err = 0;
        int set = 0;
 
        if (!ESW_ALLOWED(esw))
@@ -1026,7 +1821,72 @@ int mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
        if (vlan || qos)
                set = 1;
 
-       return modify_esw_vport_cvlan(esw->dev, vport, vlan, qos, set);
+       evport = &esw->vports[vport];
+
+       err = modify_esw_vport_cvlan(esw->dev, vport, vlan, qos, set);
+       if (err)
+               return err;
+
+       mutex_lock(&esw->state_lock);
+       evport->vlan = vlan;
+       evport->qos = qos;
+       if (evport->enabled) {
+               err = esw_vport_ingress_config(esw, evport);
+               if (err)
+                       goto out;
+               err = esw_vport_egress_config(esw, evport);
+       }
+
+out:
+       mutex_unlock(&esw->state_lock);
+       return err;
+}
+
+int mlx5_eswitch_set_vport_spoofchk(struct mlx5_eswitch *esw,
+                                   int vport, bool spoofchk)
+{
+       struct mlx5_vport *evport;
+       bool pschk;
+       int err = 0;
+
+       if (!ESW_ALLOWED(esw))
+               return -EPERM;
+       if (!LEGAL_VPORT(esw, vport))
+               return -EINVAL;
+
+       evport = &esw->vports[vport];
+
+       mutex_lock(&esw->state_lock);
+       pschk = evport->spoofchk;
+       evport->spoofchk = spoofchk;
+       if (evport->enabled)
+               err = esw_vport_ingress_config(esw, evport);
+       if (err)
+               evport->spoofchk = pschk;
+       mutex_unlock(&esw->state_lock);
+
+       return err;
+}
+
+int mlx5_eswitch_set_vport_trust(struct mlx5_eswitch *esw,
+                                int vport, bool setting)
+{
+       struct mlx5_vport *evport;
+
+       if (!ESW_ALLOWED(esw))
+               return -EPERM;
+       if (!LEGAL_VPORT(esw, vport))
+               return -EINVAL;
+
+       evport = &esw->vports[vport];
+
+       mutex_lock(&esw->state_lock);
+       evport->trusted = setting;
+       if (evport->enabled)
+               esw_vport_change_handle_locked(evport);
+       mutex_unlock(&esw->state_lock);
+
+       return 0;
 }
 
 int mlx5_eswitch_get_vport_stats(struct mlx5_eswitch *esw,
index 3416a42..fd68002 100644 (file)
@@ -88,18 +88,40 @@ struct l2addr_node {
        kfree(ptr);                                         \
 })
 
+struct vport_ingress {
+       struct mlx5_flow_table *acl;
+       struct mlx5_flow_group *allow_untagged_spoofchk_grp;
+       struct mlx5_flow_group *allow_spoofchk_only_grp;
+       struct mlx5_flow_group *allow_untagged_only_grp;
+       struct mlx5_flow_group *drop_grp;
+       struct mlx5_flow_rule  *allow_rule;
+       struct mlx5_flow_rule  *drop_rule;
+};
+
+struct vport_egress {
+       struct mlx5_flow_table *acl;
+       struct mlx5_flow_group *allowed_vlans_grp;
+       struct mlx5_flow_group *drop_grp;
+       struct mlx5_flow_rule  *allowed_vlan;
+       struct mlx5_flow_rule  *drop_rule;
+};
+
 struct mlx5_vport {
        struct mlx5_core_dev    *dev;
        int                     vport;
        struct hlist_head       uc_list[MLX5_L2_ADDR_HASH_SIZE];
        struct hlist_head       mc_list[MLX5_L2_ADDR_HASH_SIZE];
+       struct mlx5_flow_rule   *promisc_rule;
+       struct mlx5_flow_rule   *allmulti_rule;
        struct work_struct      vport_change_handler;
 
-       /* This spinlock protects access to vport data, between
-        * "esw_vport_disable" and ongoing interrupt "mlx5_eswitch_vport_event"
-        * once vport marked as disabled new interrupts are discarded.
-        */
-       spinlock_t              lock; /* vport events sync */
+       struct vport_ingress    ingress;
+       struct vport_egress     egress;
+
+       u16                     vlan;
+       u8                      qos;
+       bool                    spoofchk;
+       bool                    trusted;
        bool                    enabled;
        u16                     enabled_events;
 };
@@ -113,6 +135,8 @@ struct mlx5_l2_table {
 struct mlx5_eswitch_fdb {
        void *fdb;
        struct mlx5_flow_group *addr_grp;
+       struct mlx5_flow_group *allmulti_grp;
+       struct mlx5_flow_group *promisc_grp;
 };
 
 struct mlx5_eswitch {
@@ -124,6 +148,11 @@ struct mlx5_eswitch {
        struct mlx5_vport       *vports;
        int                     total_vports;
        int                     enabled_vports;
+       /* Synchronize between vport change events
+        * and async SRIOV admin state changes
+        */
+       struct mutex            state_lock;
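+       /* Uplink allmulti rule shared by all promisc-MC vports (refcounted) */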
+       struct esw_mc_addr      *mc_promisc;
 };
 
 /* E-Switch API */
@@ -138,6 +167,10 @@ int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw,
                                 int vport, int link_state);
 int mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
                                int vport, u16 vlan, u8 qos);
+int mlx5_eswitch_set_vport_spoofchk(struct mlx5_eswitch *esw,
+                                   int vport, bool spoofchk);
+int mlx5_eswitch_set_vport_trust(struct mlx5_eswitch *esw,
+                                int vport_num, bool setting);
 int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
                                  int vport, struct ifla_vf_info *ivi);
 int mlx5_eswitch_get_vport_stats(struct mlx5_eswitch *esw,
index f46f1db..9797768 100644 (file)
@@ -50,6 +50,10 @@ int mlx5_cmd_update_root_ft(struct mlx5_core_dev *dev,
                 MLX5_CMD_OP_SET_FLOW_TABLE_ROOT);
        MLX5_SET(set_flow_table_root_in, in, table_type, ft->type);
        MLX5_SET(set_flow_table_root_in, in, table_id, ft->id);
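+       /* A non-zero ft->vport means the table belongs to another vport's
+        * ACL namespace; set other_vport so FW executes the command on
+        * that vport (the same pattern repeats in the commands below).
+        */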
+       if (ft->vport) {
+               MLX5_SET(set_flow_table_root_in, in, vport_number, ft->vport);
+               MLX5_SET(set_flow_table_root_in, in, other_vport, 1);
+       }
 
        memset(out, 0, sizeof(out));
        return mlx5_cmd_exec_check_status(dev, in, sizeof(in), out,
@@ -57,6 +61,7 @@ int mlx5_cmd_update_root_ft(struct mlx5_core_dev *dev,
 }
 
 int mlx5_cmd_create_flow_table(struct mlx5_core_dev *dev,
+                              u16 vport,
                               enum fs_flow_table_type type, unsigned int level,
                               unsigned int log_size, struct mlx5_flow_table
                               *next_ft, unsigned int *table_id)
@@ -77,6 +82,10 @@ int mlx5_cmd_create_flow_table(struct mlx5_core_dev *dev,
        MLX5_SET(create_flow_table_in, in, table_type, type);
        MLX5_SET(create_flow_table_in, in, level, level);
        MLX5_SET(create_flow_table_in, in, log_size, log_size);
+       if (vport) {
+               MLX5_SET(create_flow_table_in, in, vport_number, vport);
+               MLX5_SET(create_flow_table_in, in, other_vport, 1);
+       }
 
        memset(out, 0, sizeof(out));
        err = mlx5_cmd_exec_check_status(dev, in, sizeof(in), out,
@@ -101,6 +110,10 @@ int mlx5_cmd_destroy_flow_table(struct mlx5_core_dev *dev,
                 MLX5_CMD_OP_DESTROY_FLOW_TABLE);
        MLX5_SET(destroy_flow_table_in, in, table_type, ft->type);
        MLX5_SET(destroy_flow_table_in, in, table_id, ft->id);
+       if (ft->vport) {
+               MLX5_SET(destroy_flow_table_in, in, vport_number, ft->vport);
+               MLX5_SET(destroy_flow_table_in, in, other_vport, 1);
+       }
 
        return mlx5_cmd_exec_check_status(dev, in, sizeof(in), out,
                                          sizeof(out));
@@ -120,6 +133,10 @@ int mlx5_cmd_modify_flow_table(struct mlx5_core_dev *dev,
                 MLX5_CMD_OP_MODIFY_FLOW_TABLE);
        MLX5_SET(modify_flow_table_in, in, table_type, ft->type);
        MLX5_SET(modify_flow_table_in, in, table_id, ft->id);
+       if (ft->vport) {
+               MLX5_SET(modify_flow_table_in, in, vport_number, ft->vport);
+               MLX5_SET(modify_flow_table_in, in, other_vport, 1);
+       }
        MLX5_SET(modify_flow_table_in, in, modify_field_select,
                 MLX5_MODIFY_FLOW_TABLE_MISS_TABLE_ID);
        if (next_ft) {
@@ -148,6 +165,10 @@ int mlx5_cmd_create_flow_group(struct mlx5_core_dev *dev,
                 MLX5_CMD_OP_CREATE_FLOW_GROUP);
        MLX5_SET(create_flow_group_in, in, table_type, ft->type);
        MLX5_SET(create_flow_group_in, in, table_id, ft->id);
+       if (ft->vport) {
+               MLX5_SET(create_flow_group_in, in, vport_number, ft->vport);
+               MLX5_SET(create_flow_group_in, in, other_vport, 1);
+       }
 
        err = mlx5_cmd_exec_check_status(dev, in,
                                         inlen, out,
@@ -174,6 +195,10 @@ int mlx5_cmd_destroy_flow_group(struct mlx5_core_dev *dev,
        MLX5_SET(destroy_flow_group_in, in, table_type, ft->type);
        MLX5_SET(destroy_flow_group_in, in, table_id, ft->id);
        MLX5_SET(destroy_flow_group_in, in, group_id, group_id);
+       if (ft->vport) {
+               MLX5_SET(destroy_flow_group_in, in, vport_number, ft->vport);
+               MLX5_SET(destroy_flow_group_in, in, other_vport, 1);
+       }
 
        return mlx5_cmd_exec_check_status(dev, in, sizeof(in), out,
                                          sizeof(out));
@@ -207,6 +232,10 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev,
        MLX5_SET(set_fte_in, in, table_type, ft->type);
        MLX5_SET(set_fte_in, in, table_id,   ft->id);
        MLX5_SET(set_fte_in, in, flow_index, fte->index);
+       if (ft->vport) {
+               MLX5_SET(set_fte_in, in, vport_number, ft->vport);
+               MLX5_SET(set_fte_in, in, other_vport, 1);
+       }
 
        in_flow_context = MLX5_ADDR_OF(set_fte_in, in, flow_context);
        MLX5_SET(flow_context, in_flow_context, group_id, group_id);
@@ -285,6 +314,10 @@ int mlx5_cmd_delete_fte(struct mlx5_core_dev *dev,
        MLX5_SET(delete_fte_in, in, table_type, ft->type);
        MLX5_SET(delete_fte_in, in, table_id, ft->id);
        MLX5_SET(delete_fte_in, in, flow_index, index);
+       if (ft->vport) {
+               MLX5_SET(delete_fte_in, in, vport_number, ft->vport);
+               MLX5_SET(delete_fte_in, in, other_vport, 1);
+       }
 
        err =  mlx5_cmd_exec_check_status(dev, in, sizeof(in), out, sizeof(out));
 
index 9814d47..c97b4a0 100644 (file)
@@ -34,6 +34,7 @@
 #define _MLX5_FS_CMD_
 
 int mlx5_cmd_create_flow_table(struct mlx5_core_dev *dev,
+                              u16 vport,
                               enum fs_flow_table_type type, unsigned int level,
                               unsigned int log_size, struct mlx5_flow_table
                               *next_ft, unsigned int *table_id);
index 5121be4..659a698 100644 (file)
 #define INIT_TREE_NODE_ARRAY_SIZE(...) (sizeof((struct init_tree_node[]){__VA_ARGS__}) /\
                                         sizeof(struct init_tree_node))
 
-#define ADD_PRIO(num_prios_val, min_level_val, max_ft_val, caps_val,\
+#define ADD_PRIO(num_prios_val, min_level_val, num_levels_val, caps_val,\
                 ...) {.type = FS_TYPE_PRIO,\
        .min_ft_level = min_level_val,\
-       .max_ft = max_ft_val,\
+       .num_levels = num_levels_val,\
        .num_leaf_prios = num_prios_val,\
        .caps = caps_val,\
        .children = (struct init_tree_node[]) {__VA_ARGS__},\
        .ar_size = INIT_TREE_NODE_ARRAY_SIZE(__VA_ARGS__) \
 }
 
-#define ADD_MULTIPLE_PRIO(num_prios_val, max_ft_val, ...)\
-       ADD_PRIO(num_prios_val, 0, max_ft_val, {},\
+#define ADD_MULTIPLE_PRIO(num_prios_val, num_levels_val, ...)\
+       ADD_PRIO(num_prios_val, 0, num_levels_val, {},\
                 __VA_ARGS__)\
 
 #define ADD_NS(...) {.type = FS_TYPE_NAMESPACE,\
 #define FS_REQUIRED_CAPS(...) {.arr_sz = INIT_CAPS_ARRAY_SIZE(__VA_ARGS__), \
                               .caps = (long[]) {__VA_ARGS__} }
 
-#define LEFTOVERS_MAX_FT 1
+#define LEFTOVERS_NUM_LEVELS 1
 #define LEFTOVERS_NUM_PRIOS 1
-#define BY_PASS_PRIO_MAX_FT 1
-#define BY_PASS_MIN_LEVEL (KENREL_MIN_LEVEL + MLX5_BY_PASS_NUM_PRIOS +\
-                          LEFTOVERS_MAX_FT)
 
-#define KERNEL_MAX_FT 3
-#define KERNEL_NUM_PRIOS 2
-#define KENREL_MIN_LEVEL 2
+#define BY_PASS_PRIO_NUM_LEVELS 1
+#define BY_PASS_MIN_LEVEL (KERNEL_MIN_LEVEL + MLX5_BY_PASS_NUM_PRIOS +\
+                          LEFTOVERS_NUM_PRIOS)
 
-#define ANCHOR_MAX_FT 1
+/* Vlan, mac, ttc, aRFS */
+#define KERNEL_NIC_PRIO_NUM_LEVELS 4
+#define KERNEL_NIC_NUM_PRIOS 1
+/* One more level for tc */
+#define KERNEL_MIN_LEVEL (KERNEL_NIC_PRIO_NUM_LEVELS + 1)
+
+#define ANCHOR_NUM_LEVELS 1
 #define ANCHOR_NUM_PRIOS 1
 #define ANCHOR_MIN_LEVEL (BY_PASS_MIN_LEVEL + 1)
 struct node_caps {
@@ -92,7 +95,7 @@ static struct init_tree_node {
        int min_ft_level;
        int num_leaf_prios;
        int prio;
-       int max_ft;
+       int num_levels;
 } root_fs = {
        .type = FS_TYPE_NAMESPACE,
        .ar_size = 4,
@@ -102,17 +105,20 @@ static struct init_tree_node {
                                          FS_CAP(flow_table_properties_nic_receive.modify_root),
                                          FS_CAP(flow_table_properties_nic_receive.identified_miss_table_mode),
                                          FS_CAP(flow_table_properties_nic_receive.flow_table_modify)),
-                        ADD_NS(ADD_MULTIPLE_PRIO(MLX5_BY_PASS_NUM_PRIOS, BY_PASS_PRIO_MAX_FT))),
-               ADD_PRIO(0, KENREL_MIN_LEVEL, 0, {},
-                        ADD_NS(ADD_MULTIPLE_PRIO(KERNEL_NUM_PRIOS, KERNEL_MAX_FT))),
+                        ADD_NS(ADD_MULTIPLE_PRIO(MLX5_BY_PASS_NUM_PRIOS,
+                                                 BY_PASS_PRIO_NUM_LEVELS))),
+               ADD_PRIO(0, KERNEL_MIN_LEVEL, 0, {},
+                        ADD_NS(ADD_MULTIPLE_PRIO(1, 1),
+                               ADD_MULTIPLE_PRIO(KERNEL_NIC_NUM_PRIOS,
+                                                 KERNEL_NIC_PRIO_NUM_LEVELS))),
                ADD_PRIO(0, BY_PASS_MIN_LEVEL, 0,
                         FS_REQUIRED_CAPS(FS_CAP(flow_table_properties_nic_receive.flow_modify_en),
                                          FS_CAP(flow_table_properties_nic_receive.modify_root),
                                          FS_CAP(flow_table_properties_nic_receive.identified_miss_table_mode),
                                          FS_CAP(flow_table_properties_nic_receive.flow_table_modify)),
-                        ADD_NS(ADD_MULTIPLE_PRIO(LEFTOVERS_NUM_PRIOS, LEFTOVERS_MAX_FT))),
+                        ADD_NS(ADD_MULTIPLE_PRIO(LEFTOVERS_NUM_PRIOS, LEFTOVERS_NUM_LEVELS))),
                ADD_PRIO(0, ANCHOR_MIN_LEVEL, 0, {},
-                        ADD_NS(ADD_MULTIPLE_PRIO(ANCHOR_NUM_PRIOS, ANCHOR_MAX_FT))),
+                        ADD_NS(ADD_MULTIPLE_PRIO(ANCHOR_NUM_PRIOS, ANCHOR_NUM_LEVELS))),
        }
 };
 
@@ -222,19 +228,6 @@ static struct fs_prio *find_prio(struct mlx5_flow_namespace *ns,
        return NULL;
 }
 
-static unsigned int find_next_free_level(struct fs_prio *prio)
-{
-       if (!list_empty(&prio->node.children)) {
-               struct mlx5_flow_table *ft;
-
-               ft = list_last_entry(&prio->node.children,
-                                    struct mlx5_flow_table,
-                                    node.list);
-               return ft->level + 1;
-       }
-       return prio->start_level;
-}
-
 static bool masked_memcmp(void *mask, void *val1, void *val2, size_t size)
 {
        unsigned int i;
@@ -464,7 +457,7 @@ static struct mlx5_flow_group *alloc_flow_group(u32 *create_fg_in)
        return fg;
 }
 
-static struct mlx5_flow_table *alloc_flow_table(int level, int max_fte,
+static struct mlx5_flow_table *alloc_flow_table(int level, u16 vport, int max_fte,
                                                enum fs_flow_table_type table_type)
 {
        struct mlx5_flow_table *ft;
@@ -476,6 +469,7 @@ static struct mlx5_flow_table *alloc_flow_table(int level, int max_fte,
        ft->level = level;
        ft->node.type = FS_TYPE_FLOW_TABLE;
        ft->type = table_type;
+       ft->vport = vport;
        ft->max_fte = max_fte;
        INIT_LIST_HEAD(&ft->fwd_rules);
        mutex_init(&ft->lock);
@@ -615,8 +609,8 @@ static int update_root_ft_create(struct mlx5_flow_table *ft, struct fs_prio
        return err;
 }
 
-static int mlx5_modify_rule_destination(struct mlx5_flow_rule *rule,
-                                       struct mlx5_flow_destination *dest)
+int mlx5_modify_rule_destination(struct mlx5_flow_rule *rule,
+                                struct mlx5_flow_destination *dest)
 {
        struct mlx5_flow_table *ft;
        struct mlx5_flow_group *fg;
@@ -693,9 +687,23 @@ static int connect_flow_table(struct mlx5_core_dev *dev, struct mlx5_flow_table
        return err;
 }
 
-struct mlx5_flow_table *mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
-                                              int prio,
-                                              int max_fte)
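+/* Keep the priority's flow tables sorted by level so that
+ * find_next_chained_ft() and the connect logic can rely on list order.
+ */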
+static void list_add_flow_table(struct mlx5_flow_table *ft,
+                               struct fs_prio *prio)
+{
+       struct list_head *prev = &prio->node.children;
+       struct mlx5_flow_table *iter;
+
+       fs_for_each_ft(iter, prio) {
+               if (iter->level > ft->level)
+                       break;
+               prev = &iter->node.list;
+       }
+       list_add(&ft->node.list, prev);
+}
+
+static struct mlx5_flow_table *__mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
+                                                       u16 vport, int prio,
+                                                       int max_fte, u32 level)
 {
        struct mlx5_flow_table *next_ft = NULL;
        struct mlx5_flow_table *ft;
@@ -716,12 +724,16 @@ struct mlx5_flow_table *mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
                err = -EINVAL;
                goto unlock_root;
        }
-       if (fs_prio->num_ft == fs_prio->max_ft) {
+       if (level >= fs_prio->num_levels) {
                err = -ENOSPC;
                goto unlock_root;
        }
-
-       ft = alloc_flow_table(find_next_free_level(fs_prio),
+       /* The requested level is relative to the priority's level range;
+        * convert it to an absolute level.
+        */
+       level += fs_prio->start_level;
+       ft = alloc_flow_table(level,
+                             vport,
                              roundup_pow_of_two(max_fte),
                              root->table_type);
        if (!ft) {
@@ -732,7 +744,7 @@ struct mlx5_flow_table *mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
        tree_init_node(&ft->node, 1, del_flow_table);
        log_table_sz = ilog2(ft->max_fte);
        next_ft = find_next_chained_ft(fs_prio);
-       err = mlx5_cmd_create_flow_table(root->dev, ft->type, ft->level,
+       err = mlx5_cmd_create_flow_table(root->dev, ft->vport, ft->type, ft->level,
                                         log_table_sz, next_ft, &ft->id);
        if (err)
                goto free_ft;
@@ -742,7 +754,7 @@ struct mlx5_flow_table *mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
                goto destroy_ft;
        lock_ref_node(&fs_prio->node);
        tree_add_node(&ft->node, &fs_prio->node);
-       list_add_tail(&ft->node.list, &fs_prio->node.children);
+       list_add_flow_table(ft, fs_prio);
        fs_prio->num_ft++;
        unlock_ref_node(&fs_prio->node);
        mutex_unlock(&root->chain_lock);
@@ -756,17 +768,32 @@ unlock_root:
        return ERR_PTR(err);
 }
 
+struct mlx5_flow_table *mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
+                                              int prio, int max_fte,
+                                              u32 level)
+{
+       return __mlx5_create_flow_table(ns, 0, prio, max_fte, level);
+}
+
+struct mlx5_flow_table *mlx5_create_vport_flow_table(struct mlx5_flow_namespace *ns,
+                                                    int prio, int max_fte,
+                                                    u32 level, u16 vport)
+{
+       return __mlx5_create_flow_table(ns, vport, prio, max_fte, level);
+}
+
 struct mlx5_flow_table *mlx5_create_auto_grouped_flow_table(struct mlx5_flow_namespace *ns,
                                                            int prio,
                                                            int num_flow_table_entries,
-                                                           int max_num_groups)
+                                                           int max_num_groups,
+                                                           u32 level)
 {
        struct mlx5_flow_table *ft;
 
        if (max_num_groups > num_flow_table_entries)
                return ERR_PTR(-EINVAL);
 
-       ft = mlx5_create_flow_table(ns, prio, num_flow_table_entries);
+       ft = mlx5_create_flow_table(ns, prio, num_flow_table_entries, level);
        if (IS_ERR(ft))
                return ft;
 
@@ -1065,31 +1092,18 @@ unlock_fg:
        return rule;
 }
 
-static struct mlx5_flow_rule *add_rule_to_auto_fg(struct mlx5_flow_table *ft,
-                                                 u8 match_criteria_enable,
-                                                 u32 *match_criteria,
-                                                 u32 *match_value,
-                                                 u8 action,
-                                                 u32 flow_tag,
-                                                 struct mlx5_flow_destination *dest)
+static bool dest_is_valid(struct mlx5_flow_destination *dest,
+                         u32 action,
+                         struct mlx5_flow_table *ft)
 {
-       struct mlx5_flow_rule *rule;
-       struct mlx5_flow_group *g;
+       if (!(action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST))
+               return true;
 
-       g = create_autogroup(ft, match_criteria_enable, match_criteria);
-       if (IS_ERR(g))
-               return (void *)g;
-
-       rule = add_rule_fg(g, match_value,
-                          action, flow_tag, dest);
-       if (IS_ERR(rule)) {
-               /* Remove assumes refcount > 0 and autogroup creates a group
-                * with a refcount = 0.
-                */
-               tree_get_node(&g->node);
-               tree_remove_node(&g->node);
-       }
-       return rule;
+       if (!dest || (dest->type == MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE &&
+                     dest->ft->level <= ft->level))
+               return false;
+       return true;
 }
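dest_is_valid() centralizes the check that a forward action carries a destination and, when that destination is another flow table, that it sits at a strictly deeper level - the property that keeps the steering graph loop-free. A standalone model with simplified stand-in types:

#include <stdbool.h>
#include <stdio.h>

enum { ACT_FWD_DEST = 1 << 0 };
enum { DEST_TYPE_FLOW_TABLE = 1 };

struct tbl { int level; };
struct dst { int type; struct tbl *ft; };

static bool dest_is_valid(const struct dst *dest, unsigned int action,
			  const struct tbl *ft)
{
	if (!(action & ACT_FWD_DEST))
		return true;		/* no forward: nothing to check */
	if (!dest)
		return false;		/* a forward action needs a dest */
	if (dest->type == DEST_TYPE_FLOW_TABLE && dest->ft->level <= ft->level)
		return false;		/* may only forward to deeper tables */
	return true;
}

int main(void)
{
	struct tbl src = { 2 }, same = { 2 }, deep = { 5 };
	struct dst to_same = { DEST_TYPE_FLOW_TABLE, &same };
	struct dst to_deep = { DEST_TYPE_FLOW_TABLE, &deep };

	printf("%d %d\n", dest_is_valid(&to_same, ACT_FWD_DEST, &src),	/* 0 */
	       dest_is_valid(&to_deep, ACT_FWD_DEST, &src));		/* 1 */
	return 0;
}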
 
 static struct mlx5_flow_rule *
@@ -1104,7 +1118,7 @@ _mlx5_add_flow_rule(struct mlx5_flow_table *ft,
        struct mlx5_flow_group *g;
        struct mlx5_flow_rule *rule;
 
-       if ((action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) && !dest)
+       if (!dest_is_valid(dest, action, ft))
                return ERR_PTR(-EINVAL);
 
        nested_lock_ref_node(&ft->node, FS_MUTEX_GRANDPARENT);
@@ -1119,8 +1133,23 @@ _mlx5_add_flow_rule(struct mlx5_flow_table *ft,
                                goto unlock;
                }
 
-       rule = add_rule_to_auto_fg(ft, match_criteria_enable, match_criteria,
-                                  match_value, action, flow_tag, dest);
+       g = create_autogroup(ft, match_criteria_enable, match_criteria);
+       if (IS_ERR(g)) {
+               rule = (void *)g;
+               goto unlock;
+       }
+
+       rule = add_rule_fg(g, match_value,
+                          action, flow_tag, dest);
+       if (IS_ERR(rule)) {
+       /* tree_remove_node() assumes refcount > 0, while autogroup
+        * creates the group with refcount = 0, so take a reference first.
+        */
+               unlock_ref_node(&ft->node);
+               tree_get_node(&g->node);
+               tree_remove_node(&g->node);
+               return rule;
+       }
 unlock:
        unlock_ref_node(&ft->node);
        return rule;
@@ -1288,7 +1317,7 @@ struct mlx5_flow_namespace *mlx5_get_flow_namespace(struct mlx5_core_dev *dev,
 {
        struct mlx5_flow_root_namespace *root_ns = dev->priv.root_ns;
        int prio;
-       static struct fs_prio *fs_prio;
+       struct fs_prio *fs_prio;
        struct mlx5_flow_namespace *ns;
 
        if (!root_ns)
@@ -1306,6 +1335,16 @@ struct mlx5_flow_namespace *mlx5_get_flow_namespace(struct mlx5_core_dev *dev,
                        return &dev->priv.fdb_root_ns->ns;
                else
                        return NULL;
+       case MLX5_FLOW_NAMESPACE_ESW_EGRESS:
+               if (dev->priv.esw_egress_root_ns)
+                       return &dev->priv.esw_egress_root_ns->ns;
+               else
+                       return NULL;
+       case MLX5_FLOW_NAMESPACE_ESW_INGRESS:
+               if (dev->priv.esw_ingress_root_ns)
+                       return &dev->priv.esw_ingress_root_ns->ns;
+               else
+                       return NULL;
        default:
                return NULL;
        }
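The two new cases expose the per-vport ACL root namespaces through the same getter. A hedged sketch of how eswitch code might resolve one and create a per-vport table in it (the prio/size/level values are illustrative, not the driver's):

	struct mlx5_flow_namespace *ns;
	struct mlx5_flow_table *acl;

	ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS);
	if (!ns)
		return -EOPNOTSUPP;

	acl = mlx5_create_vport_flow_table(ns, 0, 16, 0, vport);
	if (IS_ERR(acl))
		return PTR_ERR(acl);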
@@ -1323,7 +1362,7 @@ struct mlx5_flow_namespace *mlx5_get_flow_namespace(struct mlx5_core_dev *dev,
 EXPORT_SYMBOL(mlx5_get_flow_namespace);
 
 static struct fs_prio *fs_create_prio(struct mlx5_flow_namespace *ns,
-                                     unsigned prio, int max_ft)
+                                     unsigned int prio, int num_levels)
 {
        struct fs_prio *fs_prio;
 
@@ -1334,7 +1373,7 @@ static struct fs_prio *fs_create_prio(struct mlx5_flow_namespace *ns,
        fs_prio->node.type = FS_TYPE_PRIO;
        tree_init_node(&fs_prio->node, 1, NULL);
        tree_add_node(&fs_prio->node, &ns->node);
-       fs_prio->max_ft = max_ft;
+       fs_prio->num_levels = num_levels;
        fs_prio->prio = prio;
        list_add_tail(&fs_prio->node.list, &ns->node.children);
 
@@ -1365,14 +1404,14 @@ static struct mlx5_flow_namespace *fs_create_namespace(struct fs_prio *prio)
        return ns;
 }
 
-static int create_leaf_prios(struct mlx5_flow_namespace *ns, struct init_tree_node
-                            *prio_metadata)
+static int create_leaf_prios(struct mlx5_flow_namespace *ns, int prio,
+                            struct init_tree_node *prio_metadata)
 {
        struct fs_prio *fs_prio;
        int i;
 
        for (i = 0; i < prio_metadata->num_leaf_prios; i++) {
-               fs_prio = fs_create_prio(ns, i, prio_metadata->max_ft);
+               fs_prio = fs_create_prio(ns, prio++, prio_metadata->num_levels);
                if (IS_ERR(fs_prio))
                        return PTR_ERR(fs_prio);
        }
@@ -1399,7 +1438,7 @@ static int init_root_tree_recursive(struct mlx5_core_dev *dev,
                                    struct init_tree_node *init_node,
                                    struct fs_node *fs_parent_node,
                                    struct init_tree_node *init_parent_node,
-                                   int index)
+                                   int prio)
 {
        int max_ft_level = MLX5_CAP_FLOWTABLE(dev,
                                              flow_table_properties_nic_receive.
@@ -1417,8 +1456,8 @@ static int init_root_tree_recursive(struct mlx5_core_dev *dev,
 
                fs_get_obj(fs_ns, fs_parent_node);
                if (init_node->num_leaf_prios)
-                       return create_leaf_prios(fs_ns, init_node);
-               fs_prio = fs_create_prio(fs_ns, index, init_node->max_ft);
+                       return create_leaf_prios(fs_ns, prio, init_node);
+               fs_prio = fs_create_prio(fs_ns, prio, init_node->num_levels);
                if (IS_ERR(fs_prio))
                        return PTR_ERR(fs_prio);
                base = &fs_prio->node;
@@ -1431,11 +1470,16 @@ static int init_root_tree_recursive(struct mlx5_core_dev *dev,
        } else {
                return -EINVAL;
        }
+       prio = 0;
        for (i = 0; i < init_node->ar_size; i++) {
                err = init_root_tree_recursive(dev, &init_node->children[i],
-                                              base, init_node, i);
+                                              base, init_node, prio);
                if (err)
                        return err;
+               if (init_node->children[i].type == FS_TYPE_PRIO &&
+                   init_node->children[i].num_leaf_prios) {
+                       prio += init_node->children[i].num_leaf_prios;
+               }
        }
 
        return 0;
@@ -1491,9 +1535,9 @@ static int set_prio_attrs_in_ns(struct mlx5_flow_namespace *ns, int acc_level)
        struct fs_prio *prio;
 
        fs_for_each_prio(prio, ns) {
-                /* This updates prio start_level and max_ft */
+                /* This updates prio start_level and num_levels */
                set_prio_attrs_in_prio(prio, acc_level);
-               acc_level += prio->max_ft;
+               acc_level += prio->num_levels;
        }
        return acc_level;
 }
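Concretely: if a namespace holds three priorities spanning 2, 4 and 2 levels, the walk above assigns them start_level 0, 2 and 6 and returns acc_level 8, so a sibling priority that follows in the parent starts at level 8.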
@@ -1505,11 +1549,11 @@ static void set_prio_attrs_in_prio(struct fs_prio *prio, int acc_level)
 
        prio->start_level = acc_level;
        fs_for_each_ns(ns, prio)
-               /* This updates start_level and max_ft of ns's priority descendants */
+               /* This updates start_level and num_levels of ns's priority descendants */
                acc_level_ns = set_prio_attrs_in_ns(ns, acc_level);
-       if (!prio->max_ft)
-               prio->max_ft = acc_level_ns - prio->start_level;
-       WARN_ON(prio->max_ft < acc_level_ns - prio->start_level);
+       if (!prio->num_levels)
+               prio->num_levels = acc_level_ns - prio->start_level;
+       WARN_ON(prio->num_levels < acc_level_ns - prio->start_level);
 }
 
 static void set_prio_attrs(struct mlx5_flow_root_namespace *root_ns)
@@ -1520,12 +1564,13 @@ static void set_prio_attrs(struct mlx5_flow_root_namespace *root_ns)
 
        fs_for_each_prio(prio, ns) {
                set_prio_attrs_in_prio(prio, start_level);
-               start_level += prio->max_ft;
+               start_level += prio->num_levels;
        }
 }
 
 #define ANCHOR_PRIO 0
 #define ANCHOR_SIZE 1
+#define ANCHOR_LEVEL 0
 static int create_anchor_flow_table(struct mlx5_core_dev
                                                        *dev)
 {
@@ -1535,7 +1580,7 @@ static int create_anchor_flow_table(struct mlx5_core_dev
        ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_ANCHOR);
        if (!ns)
                return -EINVAL;
-       ft = mlx5_create_flow_table(ns, ANCHOR_PRIO, ANCHOR_SIZE);
+       ft = mlx5_create_flow_table(ns, ANCHOR_PRIO, ANCHOR_SIZE, ANCHOR_LEVEL);
        if (IS_ERR(ft)) {
                mlx5_core_err(dev, "Failed to create last anchor flow table");
                return PTR_ERR(ft);
@@ -1680,6 +1725,8 @@ void mlx5_cleanup_fs(struct mlx5_core_dev *dev)
 {
        cleanup_root_ns(dev);
        cleanup_single_prio_root_ns(dev, dev->priv.fdb_root_ns);
+       cleanup_single_prio_root_ns(dev, dev->priv.esw_egress_root_ns);
+       cleanup_single_prio_root_ns(dev, dev->priv.esw_ingress_root_ns);
 }
 
 static int init_fdb_root_ns(struct mlx5_core_dev *dev)
@@ -1700,6 +1747,38 @@ static int init_fdb_root_ns(struct mlx5_core_dev *dev)
        }
 }
 
+static int init_egress_acl_root_ns(struct mlx5_core_dev *dev)
+{
+       struct fs_prio *prio;
+
+       dev->priv.esw_egress_root_ns = create_root_ns(dev, FS_FT_ESW_EGRESS_ACL);
+       if (!dev->priv.esw_egress_root_ns)
+               return -ENOMEM;
+
+       /* create a single prio */
+       prio = fs_create_prio(&dev->priv.esw_egress_root_ns->ns, 0, MLX5_TOTAL_VPORTS(dev));
+       if (IS_ERR(prio))
+               return PTR_ERR(prio);
+       else
+               return 0;
+}
+
+static int init_ingress_acl_root_ns(struct mlx5_core_dev *dev)
+{
+       struct fs_prio *prio;
+
+       dev->priv.esw_ingress_root_ns = create_root_ns(dev, FS_FT_ESW_INGRESS_ACL);
+       if (!dev->priv.esw_ingress_root_ns)
+               return -ENOMEM;
+
+       /* create a single prio */
+       prio = fs_create_prio(&dev->priv.esw_ingress_root_ns->ns, 0, MLX5_TOTAL_VPORTS(dev));
+       if (IS_ERR(prio))
+               return PTR_ERR(prio);
+       else
+               return 0;
+}
+
 int mlx5_init_fs(struct mlx5_core_dev *dev)
 {
        int err = 0;
@@ -1712,8 +1791,21 @@ int mlx5_init_fs(struct mlx5_core_dev *dev)
        if (MLX5_CAP_GEN(dev, eswitch_flow_table)) {
                err = init_fdb_root_ns(dev);
                if (err)
-                       cleanup_root_ns(dev);
+                       goto err;
+       }
+       if (MLX5_CAP_ESW_EGRESS_ACL(dev, ft_support)) {
+               err = init_egress_acl_root_ns(dev);
+               if (err)
+                       goto err;
+       }
+       if (MLX5_CAP_ESW_INGRESS_ACL(dev, ft_support)) {
+               err = init_ingress_acl_root_ns(dev);
+               if (err)
+                       goto err;
        }
 
+       return 0;
+err:
+       mlx5_cleanup_fs(dev);
        return err;
 }
index f37a624..8e76cc5 100644
@@ -45,8 +45,10 @@ enum fs_node_type {
 };
 
 enum fs_flow_table_type {
-       FS_FT_NIC_RX     = 0x0,
-       FS_FT_FDB        = 0X4,
+       FS_FT_NIC_RX          = 0x0,
+       FS_FT_ESW_EGRESS_ACL  = 0x2,
+       FS_FT_ESW_INGRESS_ACL = 0x3,
+       FS_FT_FDB             = 0X4,
 };
 
 enum fs_fte_status {
@@ -79,6 +81,7 @@ struct mlx5_flow_rule {
 struct mlx5_flow_table {
        struct fs_node                  node;
        u32                             id;
+       u16                             vport;
        unsigned int                    max_fte;
        unsigned int                    level;
        enum fs_flow_table_type         type;
@@ -107,7 +110,7 @@ struct fs_fte {
 /* Type of children is mlx5_flow_table/namespace */
 struct fs_prio {
        struct fs_node                  node;
-       unsigned int                    max_ft;
+       unsigned int                    num_levels;
        unsigned int                    start_level;
        unsigned int                    prio;
        unsigned int                    num_ft;
index 3f3b2fa..6feef7f 100644
@@ -48,6 +48,9 @@
 #include <linux/kmod.h>
 #include <linux/delay.h>
 #include <linux/mlx5/mlx5_ifc.h>
+#ifdef CONFIG_RFS_ACCEL
+#include <linux/cpu_rmap.h>
+#endif
 #include "mlx5_core.h"
 #include "fs_core.h"
 #ifdef CONFIG_MLX5_CORE_EN
@@ -665,6 +668,12 @@ static void free_comp_eqs(struct mlx5_core_dev *dev)
        struct mlx5_eq_table *table = &dev->priv.eq_table;
        struct mlx5_eq *eq, *n;
 
+#ifdef CONFIG_RFS_ACCEL
+       if (dev->rmap) {
+               free_irq_cpu_rmap(dev->rmap);
+               dev->rmap = NULL;
+       }
+#endif
        spin_lock(&table->lock);
        list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) {
                list_del(&eq->list);
@@ -691,6 +700,11 @@ static int alloc_comp_eqs(struct mlx5_core_dev *dev)
        INIT_LIST_HEAD(&table->comp_eqs_list);
        ncomp_vec = table->num_comp_vectors;
        nent = MLX5_COMP_EQ_SIZE;
+#ifdef CONFIG_RFS_ACCEL
+       dev->rmap = alloc_irq_cpu_rmap(ncomp_vec);
+       if (!dev->rmap)
+               return -ENOMEM;
+#endif
        for (i = 0; i < ncomp_vec; i++) {
                eq = kzalloc(sizeof(*eq), GFP_KERNEL);
                if (!eq) {
@@ -698,6 +712,10 @@ static int alloc_comp_eqs(struct mlx5_core_dev *dev)
                        goto clean;
                }
 
+#ifdef CONFIG_RFS_ACCEL
+               irq_cpu_rmap_add(dev->rmap,
+                                dev->priv.msix_arr[i + MLX5_EQ_VEC_COMP_BASE].vector);
+#endif
                snprintf(name, MLX5_MAX_IRQ_NAME, "mlx5_comp%d", i);
                err = mlx5_create_map_eq(dev, eq,
                                         i + MLX5_EQ_VEC_COMP_BASE, nent, 0,
@@ -966,7 +984,7 @@ static int mlx5_load_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
        int err;
 
        mutex_lock(&dev->intf_state_mutex);
-       if (dev->interface_state == MLX5_INTERFACE_STATE_UP) {
+       if (test_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state)) {
                dev_warn(&dev->pdev->dev, "%s: interface is up, NOP\n",
                         __func__);
                goto out;
@@ -1133,7 +1151,8 @@ static int mlx5_load_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
        if (err)
                pr_info("failed request module on %s\n", MLX5_IB_MOD);
 
-       dev->interface_state = MLX5_INTERFACE_STATE_UP;
+       clear_bit(MLX5_INTERFACE_STATE_DOWN, &dev->intf_state);
+       set_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state);
 out:
        mutex_unlock(&dev->intf_state_mutex);
 
@@ -1207,7 +1226,7 @@ static int mlx5_unload_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
        }
 
        mutex_lock(&dev->intf_state_mutex);
-       if (dev->interface_state == MLX5_INTERFACE_STATE_DOWN) {
+       if (test_bit(MLX5_INTERFACE_STATE_DOWN, &dev->intf_state)) {
                dev_warn(&dev->pdev->dev, "%s: interface is down, NOP\n",
                         __func__);
                goto out;
@@ -1241,7 +1260,8 @@ static int mlx5_unload_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
        mlx5_cmd_cleanup(dev);
 
 out:
-       dev->interface_state = MLX5_INTERFACE_STATE_DOWN;
+       clear_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state);
+       set_bit(MLX5_INTERFACE_STATE_DOWN, &dev->intf_state);
        mutex_unlock(&dev->intf_state_mutex);
        return err;
 }
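mlx5_load_one()/mlx5_unload_one() stop treating the interface state as a single value; with set_bit()/test_bit() the UP/DOWN transitions and the SHUTDOWN flag added below can coexist on one bitmap. A sketch of the assumed flag layout (the bit indices are an assumption, not copied from the header):

	enum {
		MLX5_INTERFACE_STATE_DOWN,	/* assumed bit 0 */
		MLX5_INTERFACE_STATE_UP,	/* assumed bit 1 */
		MLX5_INTERFACE_STATE_SHUTDOWN,	/* assumed bit 2 */
	};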
@@ -1452,6 +1472,18 @@ static const struct pci_error_handlers mlx5_err_handler = {
        .resume         = mlx5_pci_resume
 };
 
+static void shutdown(struct pci_dev *pdev)
+{
+       struct mlx5_core_dev *dev  = pci_get_drvdata(pdev);
+       struct mlx5_priv *priv = &dev->priv;
+
+       dev_info(&pdev->dev, "Shutdown was called\n");
+       /* Notify mlx5 clients that the kernel is being shut down */
+       set_bit(MLX5_INTERFACE_STATE_SHUTDOWN, &dev->intf_state);
+       mlx5_unload_one(dev, priv);
+       mlx5_pci_disable_device(dev);
+}
+
 static const struct pci_device_id mlx5_core_pci_table[] = {
        { PCI_VDEVICE(MELLANOX, 0x1011) },                      /* Connect-IB */
        { PCI_VDEVICE(MELLANOX, 0x1012), MLX5_PCI_DEV_IS_VF},   /* Connect-IB VF */
@@ -1459,6 +1491,8 @@ static const struct pci_device_id mlx5_core_pci_table[] = {
        { PCI_VDEVICE(MELLANOX, 0x1014), MLX5_PCI_DEV_IS_VF},   /* ConnectX-4 VF */
        { PCI_VDEVICE(MELLANOX, 0x1015) },                      /* ConnectX-4LX */
        { PCI_VDEVICE(MELLANOX, 0x1016), MLX5_PCI_DEV_IS_VF},   /* ConnectX-4LX VF */
+       { PCI_VDEVICE(MELLANOX, 0x1017) },                      /* ConnectX-5 */
+       { PCI_VDEVICE(MELLANOX, 0x1018), MLX5_PCI_DEV_IS_VF},   /* ConnectX-5 VF */
        { 0, }
 };
 
@@ -1469,6 +1503,7 @@ static struct pci_driver mlx5_core_driver = {
        .id_table       = mlx5_core_pci_table,
        .probe          = init_one,
        .remove         = remove_one,
+       .shutdown       = shutdown,
        .err_handler    = &mlx5_err_handler,
        .sriov_configure   = mlx5_core_sriov_configure,
 };
index 0b0b226..482604b 100644
@@ -42,6 +42,8 @@
 #define DRIVER_VERSION "3.0-1"
 #define DRIVER_RELDATE  "January 2015"
 
+#define MLX5_TOTAL_VPORTS(mdev) (1 + pci_sriov_get_totalvfs(mdev->pdev))
+
 extern int mlx5_core_debug_mask;
 
 #define mlx5_core_dbg(__dev, format, ...)                              \
index ae378c5..3e35611 100644
@@ -115,6 +115,19 @@ int mlx5_query_port_ptys(struct mlx5_core_dev *dev, u32 *ptys,
 }
 EXPORT_SYMBOL_GPL(mlx5_query_port_ptys);
 
+int mlx5_set_port_beacon(struct mlx5_core_dev *dev, u16 beacon_duration)
+{
+       u32 out[MLX5_ST_SZ_DW(mlcr_reg)];
+       u32 in[MLX5_ST_SZ_DW(mlcr_reg)];
+
+       memset(in, 0, sizeof(in));
+       MLX5_SET(mlcr_reg, in, local_port, 1);
+       MLX5_SET(mlcr_reg, in, beacon_duration, beacon_duration);
+
+       return mlx5_core_access_reg(dev, in, sizeof(in), out,
+                                   sizeof(out), MLX5_REG_MLCR, 0, 1);
+}
+
 int mlx5_query_port_proto_cap(struct mlx5_core_dev *dev,
                              u32 *proto_cap, int proto_mask)
 {
@@ -247,8 +260,8 @@ int mlx5_query_port_admin_status(struct mlx5_core_dev *dev,
 }
 EXPORT_SYMBOL_GPL(mlx5_query_port_admin_status);
 
-static void mlx5_query_port_mtu(struct mlx5_core_dev *dev, int *admin_mtu,
-                               int *max_mtu, int *oper_mtu, u8 port)
+static void mlx5_query_port_mtu(struct mlx5_core_dev *dev, u16 *admin_mtu,
+                               u16 *max_mtu, u16 *oper_mtu, u8 port)
 {
        u32 in[MLX5_ST_SZ_DW(pmtu_reg)];
        u32 out[MLX5_ST_SZ_DW(pmtu_reg)];
@@ -268,7 +281,7 @@ static void mlx5_query_port_mtu(struct mlx5_core_dev *dev, int *admin_mtu,
                *admin_mtu = MLX5_GET(pmtu_reg, out, admin_mtu);
 }
 
-int mlx5_set_port_mtu(struct mlx5_core_dev *dev, int mtu, u8 port)
+int mlx5_set_port_mtu(struct mlx5_core_dev *dev, u16 mtu, u8 port)
 {
        u32 in[MLX5_ST_SZ_DW(pmtu_reg)];
        u32 out[MLX5_ST_SZ_DW(pmtu_reg)];
@@ -283,20 +296,96 @@ int mlx5_set_port_mtu(struct mlx5_core_dev *dev, int mtu, u8 port)
 }
 EXPORT_SYMBOL_GPL(mlx5_set_port_mtu);
 
-void mlx5_query_port_max_mtu(struct mlx5_core_dev *dev, int *max_mtu,
+void mlx5_query_port_max_mtu(struct mlx5_core_dev *dev, u16 *max_mtu,
                             u8 port)
 {
        mlx5_query_port_mtu(dev, NULL, max_mtu, NULL, port);
 }
 EXPORT_SYMBOL_GPL(mlx5_query_port_max_mtu);
 
-void mlx5_query_port_oper_mtu(struct mlx5_core_dev *dev, int *oper_mtu,
+void mlx5_query_port_oper_mtu(struct mlx5_core_dev *dev, u16 *oper_mtu,
                              u8 port)
 {
        mlx5_query_port_mtu(dev, NULL, NULL, oper_mtu, port);
 }
 EXPORT_SYMBOL_GPL(mlx5_query_port_oper_mtu);
 
+static int mlx5_query_module_num(struct mlx5_core_dev *dev, int *module_num)
+{
+       u32 out[MLX5_ST_SZ_DW(pmlp_reg)];
+       u32 in[MLX5_ST_SZ_DW(pmlp_reg)];
+       int module_mapping;
+       int err;
+
+       memset(in, 0, sizeof(in));
+
+       MLX5_SET(pmlp_reg, in, local_port, 1);
+
+       err = mlx5_core_access_reg(dev, in, sizeof(in), out, sizeof(out),
+                                  MLX5_REG_PMLP, 0, 0);
+       if (err)
+               return err;
+
+       module_mapping = MLX5_GET(pmlp_reg, out, lane0_module_mapping);
+       *module_num = module_mapping & MLX5_EEPROM_IDENTIFIER_BYTE_MASK;
+
+       return 0;
+}
+
+int mlx5_query_module_eeprom(struct mlx5_core_dev *dev,
+                            u16 offset, u16 size, u8 *data)
+{
+       u32 out[MLX5_ST_SZ_DW(mcia_reg)];
+       u32 in[MLX5_ST_SZ_DW(mcia_reg)];
+       int module_num;
+       u16 i2c_addr;
+       int status;
+       int err;
+       void *ptr = MLX5_ADDR_OF(mcia_reg, out, dword_0);
+
+       err = mlx5_query_module_num(dev, &module_num);
+       if (err)
+               return err;
+
+       memset(in, 0, sizeof(in));
+       size = min_t(int, size, MLX5_EEPROM_MAX_BYTES);
+
+       if (offset < MLX5_EEPROM_PAGE_LENGTH &&
+           offset + size > MLX5_EEPROM_PAGE_LENGTH)
+               /* Cross-page read: clamp to the end of the low page */
+               size -= offset + size - MLX5_EEPROM_PAGE_LENGTH;
+
+       i2c_addr = MLX5_I2C_ADDR_LOW;
+       if (offset >= MLX5_EEPROM_PAGE_LENGTH) {
+               i2c_addr = MLX5_I2C_ADDR_HIGH;
+               offset -= MLX5_EEPROM_PAGE_LENGTH;
+       }
+
+       MLX5_SET(mcia_reg, in, l, 0);
+       MLX5_SET(mcia_reg, in, module, module_num);
+       MLX5_SET(mcia_reg, in, i2c_device_address, i2c_addr);
+       MLX5_SET(mcia_reg, in, page_number, 0);
+       MLX5_SET(mcia_reg, in, device_address, offset);
+       MLX5_SET(mcia_reg, in, size, size);
+
+       err = mlx5_core_access_reg(dev, in, sizeof(in), out,
+                                  sizeof(out), MLX5_REG_MCIA, 0, 0);
+       if (err)
+               return err;
+
+       status = MLX5_GET(mcia_reg, out, status);
+       if (status) {
+               mlx5_core_err(dev, "query_mcia_reg failed: status: 0x%x\n",
+                             status);
+               return -EIO;
+       }
+
+       memcpy(data, ptr, size);
+
+       return size;
+}
+EXPORT_SYMBOL_GPL(mlx5_query_module_eeprom);
+
 static int mlx5_query_port_pvlc(struct mlx5_core_dev *dev, u32 *pvlc,
                                int pvlc_size,  u8 local_port)
 {
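The clamping in mlx5_query_module_eeprom() never lets a read cross from the low EEPROM page into the high one, and reads into the high page are rebased onto the second I2C address. A standalone model of that offset/size planning (the constants mirror the SFF-style two-page layout the code assumes; the exact driver values are assumptions):

#include <stdio.h>

#define PAGE_LEN   256
#define MAX_BYTES   48		/* assumed MLX5_EEPROM_MAX_BYTES value */
#define ADDR_LOW  0x50		/* assumed MLX5_I2C_ADDR_LOW */
#define ADDR_HIGH 0x51		/* assumed MLX5_I2C_ADDR_HIGH */

static void plan_read(unsigned int offset, unsigned int size)
{
	unsigned int i2c = ADDR_LOW;

	if (size > MAX_BYTES)
		size = MAX_BYTES;
	if (offset < PAGE_LEN && offset + size > PAGE_LEN)
		size = PAGE_LEN - offset;	/* stop at the page boundary */
	if (offset >= PAGE_LEN) {
		i2c = ADDR_HIGH;
		offset -= PAGE_LEN;		/* rebase into the high page */
	}
	printf("i2c 0x%x offset %u size %u\n", i2c, offset, size);
}

int main(void)
{
	plan_read(250, 16);	/* clamped to 6 bytes on the low page */
	plan_read(260, 16);	/* high page, offset rebased to 4 */
	return 0;
}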
@@ -607,3 +696,52 @@ int mlx5_query_port_wol(struct mlx5_core_dev *mdev, u8 *wol_mode)
        return err;
 }
 EXPORT_SYMBOL_GPL(mlx5_query_port_wol);
+
+static int mlx5_query_ports_check(struct mlx5_core_dev *mdev, u32 *out,
+                                 int outlen)
+{
+       u32 in[MLX5_ST_SZ_DW(pcmr_reg)];
+
+       memset(in, 0, sizeof(in));
+       MLX5_SET(pcmr_reg, in, local_port, 1);
+
+       return mlx5_core_access_reg(mdev, in, sizeof(in), out,
+                                   outlen, MLX5_REG_PCMR, 0, 0);
+}
+
+static int mlx5_set_ports_check(struct mlx5_core_dev *mdev, u32 *in, int inlen)
+{
+       u32 out[MLX5_ST_SZ_DW(pcmr_reg)];
+
+       return mlx5_core_access_reg(mdev, in, inlen, out,
+                                   sizeof(out), MLX5_REG_PCMR, 0, 1);
+}
+
+int mlx5_set_port_fcs(struct mlx5_core_dev *mdev, u8 enable)
+{
+       u32 in[MLX5_ST_SZ_DW(pcmr_reg)];
+
+       memset(in, 0, sizeof(in));
+       MLX5_SET(pcmr_reg, in, local_port, 1);
+       MLX5_SET(pcmr_reg, in, fcs_chk, enable);
+
+       return mlx5_set_ports_check(mdev, in, sizeof(in));
+}
+
+void mlx5_query_port_fcs(struct mlx5_core_dev *mdev, bool *supported,
+                        bool *enabled)
+{
+       u32 out[MLX5_ST_SZ_DW(pcmr_reg)];
+       /* Default values for FW that does not support MLX5_REG_PCMR */
+       *supported = false;
+       *enabled = true;
+
+       if (!MLX5_CAP_GEN(mdev, ports_check))
+               return;
+
+       if (mlx5_query_ports_check(mdev, out, sizeof(out)))
+               return;
+
+       *supported = !!(MLX5_GET(pcmr_reg, out, fcs_cap));
+       *enabled = !!(MLX5_GET(pcmr_reg, out, fcs_chk));
+}
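A hedged sketch of the consumer side: a driver can use the pair above to decide whether to expose FCS-related features, with old firmware falling back to the "unsupported, currently checking" defaults (the feature flag below is an illustrative policy, not taken from this patch):

	bool fcs_supported, fcs_enabled;

	mlx5_query_port_fcs(mdev, &fcs_supported, &fcs_enabled);
	if (fcs_supported)
		netdev->hw_features |= NETIF_F_RXFCS;	/* illustrative */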
index 8ba080e..5ff8af4 100644
@@ -269,8 +269,10 @@ EXPORT_SYMBOL(mlx5_alloc_map_uar);
 
 void mlx5_unmap_free_uar(struct mlx5_core_dev *mdev, struct mlx5_uar *uar)
 {
-       iounmap(uar->map);
-       iounmap(uar->bf_map);
+       if (uar->map)
+               iounmap(uar->map);
+       else
+               iounmap(uar->bf_map);
        mlx5_cmd_free_uar(mdev, uar->index);
 }
 EXPORT_SYMBOL(mlx5_unmap_free_uar);
index bd51840..b69dadc 100644
@@ -196,6 +196,46 @@ int mlx5_modify_nic_vport_mac_address(struct mlx5_core_dev *mdev,
 }
 EXPORT_SYMBOL_GPL(mlx5_modify_nic_vport_mac_address);
 
+int mlx5_query_nic_vport_mtu(struct mlx5_core_dev *mdev, u16 *mtu)
+{
+       int outlen = MLX5_ST_SZ_BYTES(query_nic_vport_context_out);
+       u32 *out;
+       int err;
+
+       out = mlx5_vzalloc(outlen);
+       if (!out)
+               return -ENOMEM;
+
+       err = mlx5_query_nic_vport_context(mdev, 0, out, outlen);
+       if (!err)
+               *mtu = MLX5_GET(query_nic_vport_context_out, out,
+                               nic_vport_context.mtu);
+
+       kvfree(out);
+       return err;
+}
+EXPORT_SYMBOL_GPL(mlx5_query_nic_vport_mtu);
+
+int mlx5_modify_nic_vport_mtu(struct mlx5_core_dev *mdev, u16 mtu)
+{
+       int inlen = MLX5_ST_SZ_BYTES(modify_nic_vport_context_in);
+       void *in;
+       int err;
+
+       in = mlx5_vzalloc(inlen);
+       if (!in)
+               return -ENOMEM;
+
+       MLX5_SET(modify_nic_vport_context_in, in, field_select.mtu, 1);
+       MLX5_SET(modify_nic_vport_context_in, in, nic_vport_context.mtu, mtu);
+
+       err = mlx5_modify_nic_vport_context(mdev, in, inlen);
+
+       kvfree(in);
+       return err;
+}
+EXPORT_SYMBOL_GPL(mlx5_modify_nic_vport_mtu);
+
 int mlx5_query_nic_vport_mac_list(struct mlx5_core_dev *dev,
                                  u32 vport,
                                  enum mlx5_list_type list_type,
index 9f10df2..f2fd1ef 100644
@@ -95,21 +95,22 @@ struct mlx5e_vxlan *mlx5e_vxlan_lookup_port(struct mlx5e_priv *priv, u16 port)
        return vxlan;
 }
 
-int mlx5e_vxlan_add_port(struct mlx5e_priv *priv, u16 port)
+static void mlx5e_vxlan_add_port(struct work_struct *work)
 {
+       struct mlx5e_vxlan_work *vxlan_work =
+               container_of(work, struct mlx5e_vxlan_work, work);
+       struct mlx5e_priv *priv = vxlan_work->priv;
        struct mlx5e_vxlan_db *vxlan_db = &priv->vxlan;
+       u16 port = vxlan_work->port;
        struct mlx5e_vxlan *vxlan;
        int err;
 
-       err = mlx5e_vxlan_core_add_port_cmd(priv->mdev, port);
-       if (err)
-               return err;
+       if (mlx5e_vxlan_core_add_port_cmd(priv->mdev, port))
+               goto free_work;
 
        vxlan = kzalloc(sizeof(*vxlan), GFP_KERNEL);
-       if (!vxlan) {
-               err = -ENOMEM;
+       if (!vxlan)
                goto err_delete_port;
-       }
 
        vxlan->udp_port = port;
 
@@ -119,13 +120,14 @@ int mlx5e_vxlan_add_port(struct mlx5e_priv *priv, u16 port)
        if (err)
                goto err_free;
 
-       return 0;
+       goto free_work;
 
 err_free:
        kfree(vxlan);
 err_delete_port:
        mlx5e_vxlan_core_del_port_cmd(priv->mdev, port);
-       return err;
+free_work:
+       kfree(vxlan_work);
 }
 
 static void __mlx5e_vxlan_core_del_port(struct mlx5e_priv *priv, u16 port)
@@ -145,12 +147,36 @@ static void __mlx5e_vxlan_core_del_port(struct mlx5e_priv *priv, u16 port)
        kfree(vxlan);
 }
 
-void mlx5e_vxlan_del_port(struct mlx5e_priv *priv, u16 port)
+static void mlx5e_vxlan_del_port(struct work_struct *work)
 {
-       if (!mlx5e_vxlan_lookup_port(priv, port))
-               return;
+       struct mlx5e_vxlan_work *vxlan_work =
+               container_of(work, struct mlx5e_vxlan_work, work);
+       struct mlx5e_priv *priv = vxlan_work->priv;
+       u16 port = vxlan_work->port;
 
        __mlx5e_vxlan_core_del_port(priv, port);
+
+       kfree(vxlan_work);
+}
+
+void mlx5e_vxlan_queue_work(struct mlx5e_priv *priv, sa_family_t sa_family,
+                           u16 port, int add)
+{
+       struct mlx5e_vxlan_work *vxlan_work;
+
+       vxlan_work = kmalloc(sizeof(*vxlan_work), GFP_ATOMIC);
+       if (!vxlan_work)
+               return;
+
+       if (add)
+               INIT_WORK(&vxlan_work->work, mlx5e_vxlan_add_port);
+       else
+               INIT_WORK(&vxlan_work->work, mlx5e_vxlan_del_port);
+
+       vxlan_work->priv = priv;
+       vxlan_work->port = port;
+       vxlan_work->sa_family = sa_family;
+       queue_work(priv->wq, &vxlan_work->work);
 }
 
 void mlx5e_vxlan_cleanup(struct mlx5e_priv *priv)
index a016850..129f352 100644
@@ -39,6 +39,13 @@ struct mlx5e_vxlan {
        u16 udp_port;
 };
 
+struct mlx5e_vxlan_work {
+       struct work_struct      work;
+       struct mlx5e_priv       *priv;
+       sa_family_t             sa_family;
+       u16                     port;
+};
+
 static inline bool mlx5e_vxlan_allowed(struct mlx5_core_dev *mdev)
 {
        return (MLX5_CAP_ETH(mdev, tunnel_stateless_vxlan) &&
@@ -46,8 +53,8 @@ static inline bool mlx5e_vxlan_allowed(struct mlx5_core_dev *mdev)
 }
 
 void mlx5e_vxlan_init(struct mlx5e_priv *priv);
-int  mlx5e_vxlan_add_port(struct mlx5e_priv *priv, u16 port);
-void mlx5e_vxlan_del_port(struct mlx5e_priv *priv, u16 port);
+void mlx5e_vxlan_queue_work(struct mlx5e_priv *priv, sa_family_t sa_family,
+                           u16 port, int add);
 struct mlx5e_vxlan *mlx5e_vxlan_lookup_port(struct mlx5e_priv *priv, u16 port);
 void mlx5e_vxlan_cleanup(struct mlx5e_priv *priv);
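The add/del paths become workqueue items because the VXLAN port notifiers can fire in atomic context, which is also why the work struct above is allocated with GFP_ATOMIC. A hedged sketch of the netdev-callback side (the real wiring lives in the ethernet main file, outside this hunk):

static void mlx5e_add_vxlan_port(struct net_device *netdev,
				 sa_family_t sa_family, __be16 port)
{
	struct mlx5e_priv *priv = netdev_priv(netdev);

	if (!mlx5e_vxlan_allowed(priv->mdev))
		return;

	mlx5e_vxlan_queue_work(priv, sa_family, be16_to_cpu(port), 1);
}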
 
index 681afe1..79cdd81 100644
@@ -2449,8 +2449,8 @@ static void mlxsw_sp_fini(struct mlxsw_core *mlxsw_core)
 {
        struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
 
-       mlxsw_sp_buffers_fini(mlxsw_sp);
        mlxsw_sp_switchdev_fini(mlxsw_sp);
+       mlxsw_sp_buffers_fini(mlxsw_sp);
        mlxsw_sp_traps_fini(mlxsw_sp);
        mlxsw_sp_event_unregister(mlxsw_sp, MLXSW_TRAP_ID_PUDE);
        mlxsw_sp_ports_remove(mlxsw_sp);
index 75dc46c..280e761 100644
@@ -4790,7 +4790,7 @@ static void transmit_cleanup(struct dev_info *hw_priv, int normal)
 
        /* Notify the network subsystem that the packet has been sent. */
        if (dev)
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
 }
 
 /**
@@ -4965,7 +4965,7 @@ static void netdev_tx_timeout(struct net_device *dev)
                hw_ena_intr(hw);
        }
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        netif_wake_queue(dev);
 }
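The dev->trans_start writes in this and the following drivers are converted to netif_trans_update(). A sketch of the helper's assumed shape in this kernel (the per-device field moves to the per-queue trans_start, with dev_trans_start() as the matching read side used further below):

static inline void netif_trans_update(struct net_device *dev)
{
	struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

	/* avoid dirtying the cache line when the stamp is unchanged */
	if (txq->trans_start != jiffies)
		txq->trans_start = jiffies;
}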
 
index 86ea17e..7066954 100644
 #include <linux/skbuff.h>
 #include <linux/delay.h>
 #include <linux/spi/spi.h>
+#include <linux/of_net.h>
 
 #include "enc28j60_hw.h"
 
 #define DRV_NAME       "enc28j60"
-#define DRV_VERSION    "1.01"
+#define DRV_VERSION    "1.02"
 
 #define SPI_OPLEN      1
 
@@ -89,22 +90,26 @@ spi_read_buf(struct enc28j60_net *priv, int len, u8 *data)
 {
        u8 *rx_buf = priv->spi_transfer_buf + 4;
        u8 *tx_buf = priv->spi_transfer_buf;
-       struct spi_transfer t = {
+       struct spi_transfer tx = {
                .tx_buf = tx_buf,
+               .len = SPI_OPLEN,
+       };
+       struct spi_transfer rx = {
                .rx_buf = rx_buf,
-               .len = SPI_OPLEN + len,
+               .len = len,
        };
        struct spi_message msg;
        int ret;
 
        tx_buf[0] = ENC28J60_READ_BUF_MEM;
-       tx_buf[1] = tx_buf[2] = tx_buf[3] = 0;  /* don't care */
 
        spi_message_init(&msg);
-       spi_message_add_tail(&t, &msg);
+       spi_message_add_tail(&tx, &msg);
+       spi_message_add_tail(&rx, &msg);
+
        ret = spi_sync(priv->spi, &msg);
        if (ret == 0) {
-               memcpy(data, &rx_buf[SPI_OPLEN], len);
+               memcpy(data, rx_buf, len);
                ret = msg.status;
        }
        if (ret && netif_msg_drv(priv))
@@ -1544,6 +1549,7 @@ static int enc28j60_probe(struct spi_device *spi)
 {
        struct net_device *dev;
        struct enc28j60_net *priv;
+       const void *macaddr;
        int ret = 0;
 
        if (netif_msg_drv(&debug))
@@ -1575,7 +1581,12 @@ static int enc28j60_probe(struct spi_device *spi)
                ret = -EIO;
                goto error_irq;
        }
-       eth_hw_addr_random(dev);
+
+       macaddr = of_get_mac_address(spi->dev.of_node);
+       if (macaddr)
+               ether_addr_copy(dev->dev_addr, macaddr);
+       else
+               eth_hw_addr_random(dev);
        enc28j60_set_hw_macaddr(dev);
 
        /* Board setup must set the relevant edge trigger type;
@@ -1630,9 +1641,16 @@ static int enc28j60_remove(struct spi_device *spi)
        return 0;
 }
 
+static const struct of_device_id enc28j60_dt_ids[] = {
+       { .compatible = "microchip,enc28j60" },
+       { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, enc28j60_dt_ids);
+
 static struct spi_driver enc28j60_driver = {
        .driver = {
-                  .name = DRV_NAME,
+               .name = DRV_NAME,
+               .of_match_table = enc28j60_dt_ids,
         },
        .probe = enc28j60_probe,
        .remove = enc28j60_remove,
index 7df3183..42e3407 100644
@@ -874,7 +874,7 @@ static netdev_tx_t encx24j600_tx(struct sk_buff *skb, struct net_device *dev)
        netif_stop_queue(dev);
 
        /* save the timestamp */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        /* Remember the skb for deferred processing */
        priv->tx_skb = skb;
@@ -890,7 +890,7 @@ static void encx24j600_tx_timeout(struct net_device *dev)
        struct encx24j600_priv *priv = netdev_priv(dev);
 
        netif_err(priv, tx_err, dev, "TX timeout at %ld, latency %ld\n",
-                 jiffies, jiffies - dev->trans_start);
+                 jiffies, jiffies - dev_trans_start(dev));
 
        dev->stats.tx_errors++;
        netif_wake_queue(dev);
index 3e67f45..4367dd6 100644
@@ -376,7 +376,7 @@ static int moxart_mac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 
        priv->tx_head = TX_NEXT(tx_head);
 
-       ndev->trans_start = jiffies;
+       netif_trans_update(ndev);
        ret = NETDEV_TX_OK;
 out_unlock:
        spin_unlock_irq(&priv->txlock);
index 270c9ee..6d1a956 100644
@@ -2668,9 +2668,9 @@ static int myri10ge_close(struct net_device *dev)
 
        del_timer_sync(&mgp->watchdog_timer);
        mgp->running = MYRI10GE_ETH_STOPPING;
-       local_bh_disable(); /* myri10ge_ss_lock_napi needs bh disabled */
        for (i = 0; i < mgp->num_slices; i++) {
                napi_disable(&mgp->ss[i].napi);
+               local_bh_disable(); /* myri10ge_ss_lock_napi needs this */
                /* Lock the slice to prevent the busy_poll handler from
                 * accessing it.  Later when we bring the NIC up, myri10ge_open
                 * resets the slice including this lock.
@@ -2679,8 +2679,8 @@ static int myri10ge_close(struct net_device *dev)
                        pr_info("Slice %d locked\n", i);
                        mdelay(1);
                }
+               local_bh_enable();
        }
-       local_bh_enable();
        netif_carrier_off(dev);
 
        netif_tx_stop_all_queues(dev);
index 122c2ee..ed89029 100644
@@ -1904,7 +1904,7 @@ static void ns_tx_timeout(struct net_device *dev)
        spin_unlock_irq(&np->lock);
        enable_irq(irq);
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        dev->stats.tx_errors++;
        netif_wake_queue(dev);
 }
index 1bd419d..612c7a4 100644
@@ -174,7 +174,7 @@ static void sonic_tx_timeout(struct net_device *dev)
        /* Try to restart the adaptor. */
        sonic_init(dev);
        lp->stats.tx_errors++;
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
 }
 
index 9ba9758..2874dff 100644
@@ -4021,7 +4021,6 @@ static netdev_tx_t s2io_xmit(struct sk_buff *skb, struct net_device *dev)
        unsigned long flags = 0;
        u16 vlan_tag = 0;
        struct fifo_info *fifo = NULL;
-       int do_spin_lock = 1;
        int offload_type;
        int enable_per_list_interrupt = 0;
        struct config_param *config = &sp->config;
@@ -4074,7 +4073,6 @@ static netdev_tx_t s2io_xmit(struct sk_buff *skb, struct net_device *dev)
                                        queue += sp->udp_fifo_idx;
                                        if (skb->len > 1024)
                                                enable_per_list_interrupt = 1;
-                                       do_spin_lock = 0;
                                }
                        }
                }
@@ -4084,12 +4082,7 @@ static netdev_tx_t s2io_xmit(struct sk_buff *skb, struct net_device *dev)
                        [skb->priority & (MAX_TX_FIFOS - 1)];
        fifo = &mac_control->fifos[queue];
 
-       if (do_spin_lock)
-               spin_lock_irqsave(&fifo->tx_lock, flags);
-       else {
-               if (unlikely(!spin_trylock_irqsave(&fifo->tx_lock, flags)))
-                       return NETDEV_TX_LOCKED;
-       }
+       spin_lock_irqsave(&fifo->tx_lock, flags);
 
        if (sp->config.multiq) {
                if (__netif_subqueue_stopped(dev, fifo->fifo_no)) {
index 52d9a94..87b7b81 100644
@@ -476,7 +476,7 @@ static void w90p910_reset_mac(struct net_device *dev)
 
        w90p910_init_desc(dev);
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        ether->cur_tx = 0x0;
        ether->finish_tx = 0x0;
        ether->cur_rx = 0x0;
@@ -490,7 +490,7 @@ static void w90p910_reset_mac(struct net_device *dev)
        w90p910_trigger_tx(dev);
        w90p910_trigger_rx(dev);
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
 
        if (netif_queue_stopped(dev))
                netif_wake_queue(dev);
index 2a55d6d..8d710a3 100644
@@ -481,7 +481,6 @@ struct pch_gbe_buffer {
 
 /**
  * struct pch_gbe_tx_ring - tx ring information
- * @tx_lock:   spinlock structs
  * @desc:      pointer to the descriptor ring memory
  * @dma:       physical address of the descriptor ring
  * @size:      length of descriptor ring in bytes
@@ -491,7 +490,6 @@ struct pch_gbe_buffer {
  * @buffer_info:       array of buffer information structs
  */
 struct pch_gbe_tx_ring {
-       spinlock_t tx_lock;
        struct pch_gbe_tx_desc *desc;
        dma_addr_t dma;
        unsigned int size;
index 3b98b26..3cd87a4 100644
@@ -1640,7 +1640,7 @@ pch_gbe_clean_tx(struct pch_gbe_adapter *adapter,
                   cleaned_count);
        if (cleaned_count > 0)  { /*skip this if nothing cleaned*/
                /* Recover from running out of Tx resources in xmit_frame */
-               spin_lock(&tx_ring->tx_lock);
+               netif_tx_lock(adapter->netdev);
                if (unlikely(cleaned && (netif_queue_stopped(adapter->netdev))))
                {
                        netif_wake_queue(adapter->netdev);
@@ -1652,7 +1652,7 @@ pch_gbe_clean_tx(struct pch_gbe_adapter *adapter,
 
                netdev_dbg(adapter->netdev, "next_to_clean : %d\n",
                           tx_ring->next_to_clean);
-               spin_unlock(&tx_ring->tx_lock);
+               netif_tx_unlock(adapter->netdev);
        }
        return cleaned;
 }
@@ -1805,7 +1805,6 @@ int pch_gbe_setup_tx_resources(struct pch_gbe_adapter *adapter,
 
        tx_ring->next_to_use = 0;
        tx_ring->next_to_clean = 0;
-       spin_lock_init(&tx_ring->tx_lock);
 
        for (desNo = 0; desNo < tx_ring->count; desNo++) {
                tx_desc = PCH_GBE_TX_DESC(*tx_ring, desNo);
@@ -2135,15 +2134,9 @@ static int pch_gbe_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
 {
        struct pch_gbe_adapter *adapter = netdev_priv(netdev);
        struct pch_gbe_tx_ring *tx_ring = adapter->tx_ring;
-       unsigned long flags;
 
-       if (!spin_trylock_irqsave(&tx_ring->tx_lock, flags)) {
-               /* Collision - tell upper layer to requeue */
-               return NETDEV_TX_LOCKED;
-       }
        if (unlikely(!PCH_GBE_DESC_UNUSED(tx_ring))) {
                netif_stop_queue(netdev);
-               spin_unlock_irqrestore(&tx_ring->tx_lock, flags);
                netdev_dbg(netdev,
                           "Return : BUSY  next_to use : 0x%08x  next_to clean : 0x%08x\n",
                           tx_ring->next_to_use, tx_ring->next_to_clean);
@@ -2152,7 +2145,6 @@ static int pch_gbe_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
 
        /* CRC,ITAG no support */
        pch_gbe_tx_queue(adapter, tx_ring, skb);
-       spin_unlock_irqrestore(&tx_ring->tx_lock, flags);
        return NETDEV_TX_OK;
 }
 
index 13d88a6..91be2f0 100644
@@ -1144,7 +1144,7 @@ static void hamachi_tx_timeout(struct net_device *dev)
        hmp->rx_ring[RX_RING_SIZE-1].status_n_length |= cpu_to_le32(DescEndRing);
 
        /* Trigger an immediate transmit demand. */
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        dev->stats.tx_errors++;
 
        /* Restart the chip's Tx/Rx processes . */
index fa2db41..fb1d103 100644
@@ -714,7 +714,7 @@ static void yellowfin_tx_timeout(struct net_device *dev)
        if (yp->cur_tx - yp->dirty_tx < TX_QUEUE_SIZE)
                netif_wake_queue (dev);         /* Typical path */
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        dev->stats.tx_errors++;
 }
 
index fd362b6..cad37af 100644
@@ -2285,7 +2285,7 @@ static void netxen_tx_timeout_task(struct work_struct *work)
                        goto request_reset;
                }
        }
-       adapter->netdev->trans_start = jiffies;
+       netif_trans_update(adapter->netdev);
        rtnl_unlock();
        return;
 
index 5c2fd57..aafa669 100644
@@ -1,4 +1,5 @@
 obj-$(CONFIG_QED) := qed.o
 
 qed-y := qed_cxt.o qed_dev.o qed_hw.o qed_init_fw_funcs.o qed_init_ops.o \
-        qed_int.o qed_main.o qed_mcp.o qed_sp_commands.o qed_spq.o qed_l2.o
+        qed_int.o qed_main.o qed_mcp.o qed_sp_commands.o qed_spq.o qed_l2.o \
+        qed_selftest.o
index 33e2ed6..cceac32 100644
@@ -32,6 +32,8 @@ extern const struct qed_common_ops qed_common_ops_pass;
 #define NAME_SIZE 16
 #define VER_SIZE 16
 
+#define QED_WFQ_UNIT   100
+
 /* cau states */
 enum qed_coalescing_mode {
        QED_COAL_MODE_DISABLE,
@@ -237,6 +239,12 @@ struct qed_dmae_info {
        struct dmae_cmd *p_dmae_cmd;
 };
 
+struct qed_wfq_data {
+       /* when feature is configured for at least 1 vport */
+       u32     min_speed;
+       bool    configured;
+};
+
 struct qed_qm_info {
        struct init_qm_pq_params        *qm_pq_params;
        struct init_qm_vport_params     *qm_vport_params;
@@ -257,6 +265,7 @@ struct qed_qm_info {
        bool                            vport_wfq_en;
        u8                              pf_wfq;
        u32                             pf_rl;
+       struct qed_wfq_data             *wfq_data;
 };
 
 struct storm_stats {
@@ -526,6 +535,8 @@ static inline u8 qed_concrete_to_sw_fid(struct qed_dev *cdev,
 
 #define PURE_LB_TC 8
 
+void qed_configure_vp_wfq_on_link_change(struct qed_dev *cdev, u32 min_pf_rate);
+
 #define QED_LEADING_HWFN(dev)   (&dev->hwfns[0])
 
 /* Other Linux specific common definitions */
index bdae5a5..b500c86 100644
@@ -105,6 +105,8 @@ static void qed_qm_info_free(struct qed_hwfn *p_hwfn)
        qm_info->qm_vport_params = NULL;
        kfree(qm_info->qm_port_params);
        qm_info->qm_port_params = NULL;
+       kfree(qm_info->wfq_data);
+       qm_info->wfq_data = NULL;
 }
 
 void qed_resc_free(struct qed_dev *cdev)
@@ -175,6 +177,11 @@ static int qed_init_qm_info(struct qed_hwfn *p_hwfn)
        if (!qm_info->qm_port_params)
                goto alloc_err;
 
+       qm_info->wfq_data = kcalloc(num_vports, sizeof(*qm_info->wfq_data),
+                                   GFP_KERNEL);
+       if (!qm_info->wfq_data)
+               goto alloc_err;
+
        vport_id = (u8)RESC_START(p_hwfn, QED_VPORT);
 
        /* First init per-TC PQs */
@@ -213,18 +220,19 @@ static int qed_init_qm_info(struct qed_hwfn *p_hwfn)
 
        qm_info->start_vport = (u8)RESC_START(p_hwfn, QED_VPORT);
 
+       for (i = 0; i < qm_info->num_vports; i++)
+               qm_info->qm_vport_params[i].vport_wfq = 1;
+
        qm_info->pf_wfq = 0;
        qm_info->pf_rl = 0;
        qm_info->vport_rl_en = 1;
+       qm_info->vport_wfq_en = 1;
 
        return 0;
 
 alloc_err:
        DP_NOTICE(p_hwfn, "Failed to allocate memory for QM params\n");
-       kfree(qm_info->qm_pq_params);
-       kfree(qm_info->qm_vport_params);
-       kfree(qm_info->qm_port_params);
-
+       qed_qm_info_free(p_hwfn);
        return -ENOMEM;
 }
 
@@ -575,7 +583,7 @@ static int qed_hw_init_pf(struct qed_hwfn *p_hwfn,
                        p_hwfn->qm_info.pf_wfq = p_info->bandwidth_min;
 
                /* Update rate limit once we'll actually have a link */
-               p_hwfn->qm_info.pf_rl = 100;
+               p_hwfn->qm_info.pf_rl = 100000;
        }
 
        qed_cxt_hw_init_pf(p_hwfn);
@@ -1595,3 +1603,312 @@ int qed_fw_rss_eng(struct qed_hwfn *p_hwfn,
 
        return 0;
 }
+
+/* Calculate final WFQ values for all vports and configure them.
+ * After this configuration each vport will have
+ * approx min rate =  min_pf_rate * (vport_wfq / QED_WFQ_UNIT)
+ */
+static void qed_configure_wfq_for_all_vports(struct qed_hwfn *p_hwfn,
+                                            struct qed_ptt *p_ptt,
+                                            u32 min_pf_rate)
+{
+       struct init_qm_vport_params *vport_params;
+       int i;
+
+       vport_params = p_hwfn->qm_info.qm_vport_params;
+
+       for (i = 0; i < p_hwfn->qm_info.num_vports; i++) {
+               u32 wfq_speed = p_hwfn->qm_info.wfq_data[i].min_speed;
+
+               vport_params[i].vport_wfq = (wfq_speed * QED_WFQ_UNIT) /
+                                               min_pf_rate;
+               qed_init_vport_wfq(p_hwfn, p_ptt,
+                                  vport_params[i].first_tx_pq_id,
+                                  vport_params[i].vport_wfq);
+       }
+}
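Worked example: with min_pf_rate = 10000 Mbps and a vport whose min_speed is 2500 Mbps, vport_wfq = 2500 * 100 / 10000 = 25, so the guaranteed share comes out as 10000 * (25 / 100) = 2500 Mbps, i.e. the requested floor up to integer rounding.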
+
+static void qed_init_wfq_default_param(struct qed_hwfn *p_hwfn,
+                                      u32 min_pf_rate)
+{
+       int i;
+
+       for (i = 0; i < p_hwfn->qm_info.num_vports; i++)
+               p_hwfn->qm_info.qm_vport_params[i].vport_wfq = 1;
+}
+
+static void qed_disable_wfq_for_all_vports(struct qed_hwfn *p_hwfn,
+                                          struct qed_ptt *p_ptt,
+                                          u32 min_pf_rate)
+{
+       struct init_qm_vport_params *vport_params;
+       int i;
+
+       vport_params = p_hwfn->qm_info.qm_vport_params;
+
+       for (i = 0; i < p_hwfn->qm_info.num_vports; i++) {
+               qed_init_wfq_default_param(p_hwfn, min_pf_rate);
+               qed_init_vport_wfq(p_hwfn, p_ptt,
+                                  vport_params[i].first_tx_pq_id,
+                                  vport_params[i].vport_wfq);
+       }
+}
+
+/* Perform several validations on the WFQ configuration and the
+ * requested min rate for a given vport:
+ * 1. req_rate must be at least one percent of min_pf_rate.
+ * 2. req_rate must not push vports that are not explicitly configured
+ *    for WFQ below one percent of min_pf_rate.
+ * 3. total_req_min_rate (the sum of all vport min rates) must not
+ *    exceed min_pf_rate.
+ */
+static int qed_init_wfq_param(struct qed_hwfn *p_hwfn,
+                             u16 vport_id, u32 req_rate,
+                             u32 min_pf_rate)
+{
+       u32 total_req_min_rate = 0, total_left_rate = 0, left_rate_per_vp = 0;
+       int non_requested_count = 0, req_count = 0, i, num_vports;
+
+       num_vports = p_hwfn->qm_info.num_vports;
+
+       /* Accounting for the vports which are configured for WFQ explicitly */
+       for (i = 0; i < num_vports; i++) {
+               u32 tmp_speed;
+
+               if ((i != vport_id) &&
+                   p_hwfn->qm_info.wfq_data[i].configured) {
+                       req_count++;
+                       tmp_speed = p_hwfn->qm_info.wfq_data[i].min_speed;
+                       total_req_min_rate += tmp_speed;
+               }
+       }
+
+       /* Include current vport data as well */
+       req_count++;
+       total_req_min_rate += req_rate;
+       non_requested_count = num_vports - req_count;
+
+       if (req_rate < min_pf_rate / QED_WFQ_UNIT) {
+               DP_VERBOSE(p_hwfn, NETIF_MSG_LINK,
+                          "Vport [%d] - Requested rate[%d Mbps] is less than one percent of configured PF min rate[%d Mbps]\n",
+                          vport_id, req_rate, min_pf_rate);
+               return -EINVAL;
+       }
+
+       if (num_vports > QED_WFQ_UNIT) {
+               DP_VERBOSE(p_hwfn, NETIF_MSG_LINK,
+                          "Number of vports is greater than %d\n",
+                          QED_WFQ_UNIT);
+               return -EINVAL;
+       }
+
+       if (total_req_min_rate > min_pf_rate) {
+               DP_VERBOSE(p_hwfn, NETIF_MSG_LINK,
+                          "Total requested min rate for all vports[%d Mbps] is greater than configured PF min rate[%d Mbps]\n",
+                          total_req_min_rate, min_pf_rate);
+               return -EINVAL;
+       }
+
+       total_left_rate = min_pf_rate - total_req_min_rate;
+
+       left_rate_per_vp = total_left_rate / non_requested_count;
+       if (left_rate_per_vp <  min_pf_rate / QED_WFQ_UNIT) {
+               DP_VERBOSE(p_hwfn, NETIF_MSG_LINK,
+                          "Non WFQ configured vports rate [%d Mbps] is less than one percent of configured PF min rate[%d Mbps]\n",
+                          left_rate_per_vp, min_pf_rate);
+               return -EINVAL;
+       }
+
+       p_hwfn->qm_info.wfq_data[vport_id].min_speed = req_rate;
+       p_hwfn->qm_info.wfq_data[vport_id].configured = true;
+
+       for (i = 0; i < num_vports; i++) {
+               if (p_hwfn->qm_info.wfq_data[i].configured)
+                       continue;
+
+               p_hwfn->qm_info.wfq_data[i].min_speed = left_rate_per_vp;
+       }
+
+       return 0;
+}
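Worked example: with min_pf_rate = 10000 Mbps and 8 vports, one percent is 100 Mbps. A req_rate of 50 Mbps fails check 1 outright; a req_rate of 9500 Mbps passes checks 1 and 3 but leaves (10000 - 9500) / 7 = 71 Mbps for each of the seven unconfigured vports, so it fails the final per-vport floor check.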
+
+static int __qed_configure_vp_wfq_on_link_change(struct qed_hwfn *p_hwfn,
+                                                struct qed_ptt *p_ptt,
+                                                u32 min_pf_rate)
+{
+       bool use_wfq = false;
+       int rc = 0;
+       u16 i;
+
+       /* Validate all pre configured vports for wfq */
+       for (i = 0; i < p_hwfn->qm_info.num_vports; i++) {
+               u32 rate;
+
+               if (!p_hwfn->qm_info.wfq_data[i].configured)
+                       continue;
+
+               rate = p_hwfn->qm_info.wfq_data[i].min_speed;
+               use_wfq = true;
+
+               rc = qed_init_wfq_param(p_hwfn, i, rate, min_pf_rate);
+               if (rc) {
+                       DP_NOTICE(p_hwfn,
+                                 "WFQ validation failed while configuring min rate\n");
+                       break;
+               }
+       }
+
+       if (!rc && use_wfq)
+               qed_configure_wfq_for_all_vports(p_hwfn, p_ptt, min_pf_rate);
+       else
+               qed_disable_wfq_for_all_vports(p_hwfn, p_ptt, min_pf_rate);
+
+       return rc;
+}
+
+/* API to configure WFQ from mcp link change */
+void qed_configure_vp_wfq_on_link_change(struct qed_dev *cdev, u32 min_pf_rate)
+{
+       int i;
+
+       for_each_hwfn(cdev, i) {
+               struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+
+               __qed_configure_vp_wfq_on_link_change(p_hwfn,
+                                                     p_hwfn->p_dpc_ptt,
+                                                     min_pf_rate);
+       }
+}
+
+int __qed_configure_pf_max_bandwidth(struct qed_hwfn *p_hwfn,
+                                    struct qed_ptt *p_ptt,
+                                    struct qed_mcp_link_state *p_link,
+                                    u8 max_bw)
+{
+       int rc = 0;
+
+       p_hwfn->mcp_info->func_info.bandwidth_max = max_bw;
+
+       if (!p_link->line_speed && (max_bw != 100))
+               return rc;
+
+       p_link->speed = (p_link->line_speed * max_bw) / 100;
+       p_hwfn->qm_info.pf_rl = p_link->speed;
+
+       /* The limiter also affects Tx-switched traffic, which must not be
+        * throttled when no real limit is requested, so at 100% install an
+        * artificially high bound instead.
+        */
+       if (max_bw == 100)
+               p_hwfn->qm_info.pf_rl = 100000;
+
+       rc = qed_init_pf_rl(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
+                           p_hwfn->qm_info.pf_rl);
+
+       DP_VERBOSE(p_hwfn, NETIF_MSG_LINK,
+                  "Configured MAX bandwidth to be %08x Mb/sec\n",
+                  p_link->speed);
+
+       return rc;
+}
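Worked example: with line_speed = 25000 Mbps and max_bw = 40, the rate limiter is programmed to 25000 * 40 / 100 = 10000 Mbps; at max_bw = 100 the 100000 Mbps pseudo-limit is installed instead, so Tx-switched traffic stays effectively uncapped.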
+
+/* Main API to configure PF max bandwidth where bw range is [1 - 100] */
+int qed_configure_pf_max_bandwidth(struct qed_dev *cdev, u8 max_bw)
+{
+       int i, rc = -EINVAL;
+
+       if (max_bw < 1 || max_bw > 100) {
+               DP_NOTICE(cdev, "PF max bw valid range is [1-100]\n");
+               return rc;
+       }
+
+       for_each_hwfn(cdev, i) {
+               struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+               struct qed_hwfn *p_lead = QED_LEADING_HWFN(cdev);
+               struct qed_mcp_link_state *p_link;
+               struct qed_ptt *p_ptt;
+
+               p_link = &p_lead->mcp_info->link_output;
+
+               p_ptt = qed_ptt_acquire(p_hwfn);
+               if (!p_ptt)
+                       return -EBUSY;
+
+               rc = __qed_configure_pf_max_bandwidth(p_hwfn, p_ptt,
+                                                     p_link, max_bw);
+
+               qed_ptt_release(p_hwfn, p_ptt);
+
+               if (rc)
+                       break;
+       }
+
+       return rc;
+}
+
+int __qed_configure_pf_min_bandwidth(struct qed_hwfn *p_hwfn,
+                                    struct qed_ptt *p_ptt,
+                                    struct qed_mcp_link_state *p_link,
+                                    u8 min_bw)
+{
+       int rc = 0;
+
+       p_hwfn->mcp_info->func_info.bandwidth_min = min_bw;
+       p_hwfn->qm_info.pf_wfq = min_bw;
+
+       if (!p_link->line_speed)
+               return rc;
+
+       p_link->min_pf_rate = (p_link->line_speed * min_bw) / 100;
+
+       rc = qed_init_pf_wfq(p_hwfn, p_ptt, p_hwfn->rel_pf_id, min_bw);
+
+       DP_VERBOSE(p_hwfn, NETIF_MSG_LINK,
+                  "Configured MIN bandwidth to be %d Mb/sec\n",
+                  p_link->min_pf_rate);
+
+       return rc;
+}
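
The minimum side scales the same way: on a 25000 Mb/s link with min_bw = 20,
min_pf_rate becomes 25000 * 20 / 100 = 5000 Mb/s, while the WFQ weight itself
is programmed from the raw percentage.
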
+
+/* Main API to configure PF min bandwidth where bw range is [1-100] */
+int qed_configure_pf_min_bandwidth(struct qed_dev *cdev, u8 min_bw)
+{
+       int i, rc = -EINVAL;
+
+       if (min_bw < 1 || min_bw > 100) {
+               DP_NOTICE(cdev, "PF min bw valid range is [1-100]\n");
+               return rc;
+       }
+
+       for_each_hwfn(cdev, i) {
+               struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+               struct qed_hwfn *p_lead = QED_LEADING_HWFN(cdev);
+               struct qed_mcp_link_state *p_link;
+               struct qed_ptt *p_ptt;
+
+               p_link = &p_lead->mcp_info->link_output;
+
+               p_ptt = qed_ptt_acquire(p_hwfn);
+               if (!p_ptt)
+                       return -EBUSY;
+
+               rc = __qed_configure_pf_min_bandwidth(p_hwfn, p_ptt,
+                                                     p_link, min_bw);
+               if (rc) {
+                       qed_ptt_release(p_hwfn, p_ptt);
+                       return rc;
+               }
+
+               if (p_link->min_pf_rate) {
+                       u32 min_rate = p_link->min_pf_rate;
+
+                       rc = __qed_configure_vp_wfq_on_link_change(p_hwfn,
+                                                                  p_ptt,
+                                                                  min_rate);
+               }
+
+               qed_ptt_release(p_hwfn, p_ptt);
+       }
+
+       return rc;
+}
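
A minimal sketch of how a caller could combine the two entry points
(hypothetical usage, not part of the patch; assumes an already-probed cdev
and trims error handling):

        /* Hypothetical: guarantee 20% of line rate to this PF, cap it at 60%. */
        int rc;

        rc = qed_configure_pf_min_bandwidth(cdev, 20);
        if (!rc)
                rc = qed_configure_pf_max_bandwidth(cdev, 60);
        if (rc)
                DP_NOTICE(cdev, "Bandwidth reconfiguration failed\n");
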
index 15e02ab..c4fae71 100644
@@ -3837,7 +3837,7 @@ struct public_drv_mb {
 
 #define DRV_MSG_CODE_SET_LLDP                   0x24000000
 #define DRV_MSG_CODE_SET_DCBX                   0x25000000
-
+#define DRV_MSG_CODE_BW_UPDATE_ACK             0x32000000
 #define DRV_MSG_CODE_NIG_DRAIN                  0x30000000
 
 #define DRV_MSG_CODE_INITIATE_FLR               0x02000000
@@ -3857,6 +3857,7 @@ struct public_drv_mb {
 #define DRV_MSG_CODE_PHY_CORE_WRITE             0x000e0000
 #define DRV_MSG_CODE_SET_VERSION                0x000f0000
 
+#define DRV_MSG_CODE_BIST_TEST                  0x001e0000
 #define DRV_MSG_CODE_SET_LED_MODE               0x00200000
 
 #define DRV_MSG_SEQ_NUMBER_MASK                 0x0000ffff
@@ -3914,6 +3915,18 @@ struct public_drv_mb {
 #define DRV_MB_PARAM_SET_LED_MODE_ON            0x1
 #define DRV_MB_PARAM_SET_LED_MODE_OFF           0x2
 
+#define DRV_MB_PARAM_BIST_UNKNOWN_TEST          0
+#define DRV_MB_PARAM_BIST_REGISTER_TEST         1
+#define DRV_MB_PARAM_BIST_CLOCK_TEST            2
+
+#define DRV_MB_PARAM_BIST_RC_UNKNOWN            0
+#define DRV_MB_PARAM_BIST_RC_PASSED             1
+#define DRV_MB_PARAM_BIST_RC_FAILED             2
+#define DRV_MB_PARAM_BIST_RC_INVALID_PARAMETER  3
+
+#define DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT      0
+#define DRV_MB_PARAM_BIST_TEST_INDEX_MASK       0x000000FF
+
        u32 fw_mb_header;
 #define FW_MSG_CODE_MASK                        0xffff0000
 #define FW_MSG_CODE_DRV_LOAD_ENGINE             0x10100000
@@ -5116,4 +5129,8 @@ struct hw_set_image {
        struct hw_set_info      hw_sets[1];
 };
 
+int qed_init_pf_wfq(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
+                   u8 pf_id, u16 pf_wfq);
+int qed_init_vport_wfq(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
+                      u16 first_tx_pq_id[NUM_OF_TCS], u16 vport_wfq);
 #endif
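
The BIST test selector travels in the low byte of the mailbox parameter;
the qed_mcp_bist_*_test() helpers added below build it exactly as in this
sketch:

        u32 drv_mb_param = DRV_MB_PARAM_BIST_CLOCK_TEST <<
                           DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT;

        /* On success the MFW echoes DRV_MB_PARAM_BIST_RC_PASSED in the
         * returned param word; anything else counts as a failure.
         */
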
index 1dd5324..e8a3b9d 100644
@@ -712,6 +712,21 @@ int qed_qm_pf_rt_init(struct qed_hwfn *p_hwfn,
        return 0;
 }
 
+int qed_init_pf_wfq(struct qed_hwfn *p_hwfn,
+                   struct qed_ptt *p_ptt,
+                   u8 pf_id, u16 pf_wfq)
+{
+       u32 inc_val = QM_WFQ_INC_VAL(pf_wfq);
+
+       if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
+               DP_NOTICE(p_hwfn, "Invalid PF WFQ weight configuration");
+               return -1;
+       }
+
+       qed_wr(p_hwfn, p_ptt, QM_REG_WFQPFWEIGHT + pf_id * 4, inc_val);
+       return 0;
+}
+
 int qed_init_pf_rl(struct qed_hwfn *p_hwfn,
                   struct qed_ptt *p_ptt,
                   u8 pf_id,
@@ -732,6 +747,31 @@ int qed_init_pf_rl(struct qed_hwfn *p_hwfn,
        return 0;
 }
 
+int qed_init_vport_wfq(struct qed_hwfn *p_hwfn,
+                      struct qed_ptt *p_ptt,
+                      u16 first_tx_pq_id[NUM_OF_TCS],
+                      u16 vport_wfq)
+{
+       u32 inc_val = QM_WFQ_INC_VAL(vport_wfq);
+       u8 tc;
+
+       if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
+               DP_NOTICE(p_hwfn, "Invalid VPORT WFQ weight configuration");
+               return -1;
+       }
+
+       for (tc = 0; tc < NUM_OF_TCS; tc++) {
+               u16 vport_pq_id = first_tx_pq_id[tc];
+
+               if (vport_pq_id != QM_INVALID_PQ_ID)
+                       qed_wr(p_hwfn, p_ptt,
+                              QM_REG_WFQVPWEIGHT + vport_pq_id * 4,
+                              inc_val);
+       }
+
+       return 0;
+}
+
 int qed_init_vport_rl(struct qed_hwfn *p_hwfn,
                      struct qed_ptt *p_ptt,
                      u8 vport_id,
index fb5f3b8..31e1d51 100644
@@ -1415,16 +1415,16 @@ static void __qed_get_vport_port_stats(struct qed_hwfn *p_hwfn,
                        sizeof(port_stats));
 
        p_stats->rx_64_byte_packets             += port_stats.pmm.r64;
-       p_stats->rx_127_byte_packets            += port_stats.pmm.r127;
-       p_stats->rx_255_byte_packets            += port_stats.pmm.r255;
-       p_stats->rx_511_byte_packets            += port_stats.pmm.r511;
-       p_stats->rx_1023_byte_packets           += port_stats.pmm.r1023;
-       p_stats->rx_1518_byte_packets           += port_stats.pmm.r1518;
-       p_stats->rx_1522_byte_packets           += port_stats.pmm.r1522;
-       p_stats->rx_2047_byte_packets           += port_stats.pmm.r2047;
-       p_stats->rx_4095_byte_packets           += port_stats.pmm.r4095;
-       p_stats->rx_9216_byte_packets           += port_stats.pmm.r9216;
-       p_stats->rx_16383_byte_packets          += port_stats.pmm.r16383;
+       p_stats->rx_65_to_127_byte_packets      += port_stats.pmm.r127;
+       p_stats->rx_128_to_255_byte_packets     += port_stats.pmm.r255;
+       p_stats->rx_256_to_511_byte_packets     += port_stats.pmm.r511;
+       p_stats->rx_512_to_1023_byte_packets    += port_stats.pmm.r1023;
+       p_stats->rx_1024_to_1518_byte_packets   += port_stats.pmm.r1518;
+       p_stats->rx_1519_to_1522_byte_packets   += port_stats.pmm.r1522;
+       p_stats->rx_1519_to_2047_byte_packets   += port_stats.pmm.r2047;
+       p_stats->rx_2048_to_4095_byte_packets   += port_stats.pmm.r4095;
+       p_stats->rx_4096_to_9216_byte_packets   += port_stats.pmm.r9216;
+       p_stats->rx_9217_to_16383_byte_packets  += port_stats.pmm.r16383;
        p_stats->rx_crc_errors                  += port_stats.pmm.rfcs;
        p_stats->rx_mac_crtl_frames             += port_stats.pmm.rxcf;
        p_stats->rx_pause_frames                += port_stats.pmm.rxpf;
index 1e9f321..1b758bd 100644
@@ -28,6 +28,7 @@
 #include "qed_dev_api.h"
 #include "qed_mcp.h"
 #include "qed_hw.h"
+#include "qed_selftest.h"
 
 static char version[] =
        "QLogic FastLinQ 4xxxx Core Module qed " DRV_MODULE_VERSION "\n";
@@ -915,6 +916,11 @@ static u32 qed_sb_release(struct qed_dev *cdev,
        return rc;
 }
 
+static bool qed_can_link_change(struct qed_dev *cdev)
+{
+       return true;
+}
+
 static int qed_set_link(struct qed_dev *cdev,
                        struct qed_link_params *params)
 {
@@ -957,6 +963,39 @@ static int qed_set_link(struct qed_dev *cdev,
        }
        if (params->override_flags & QED_LINK_OVERRIDE_SPEED_FORCED_SPEED)
                link_params->speed.forced_speed = params->forced_speed;
+       if (params->override_flags & QED_LINK_OVERRIDE_PAUSE_CONFIG) {
+               if (params->pause_config & QED_LINK_PAUSE_AUTONEG_ENABLE)
+                       link_params->pause.autoneg = true;
+               else
+                       link_params->pause.autoneg = false;
+               if (params->pause_config & QED_LINK_PAUSE_RX_ENABLE)
+                       link_params->pause.forced_rx = true;
+               else
+                       link_params->pause.forced_rx = false;
+               if (params->pause_config & QED_LINK_PAUSE_TX_ENABLE)
+                       link_params->pause.forced_tx = true;
+               else
+                       link_params->pause.forced_tx = false;
+       }
+       if (params->override_flags & QED_LINK_OVERRIDE_LOOPBACK_MODE) {
+               switch (params->loopback_mode) {
+               case QED_LINK_LOOPBACK_INT_PHY:
+                       link_params->loopback_mode = PMM_LOOPBACK_INT_PHY;
+                       break;
+               case QED_LINK_LOOPBACK_EXT_PHY:
+                       link_params->loopback_mode = PMM_LOOPBACK_EXT_PHY;
+                       break;
+               case QED_LINK_LOOPBACK_EXT:
+                       link_params->loopback_mode = PMM_LOOPBACK_EXT;
+                       break;
+               case QED_LINK_LOOPBACK_MAC:
+                       link_params->loopback_mode = PMM_LOOPBACK_MAC;
+                       break;
+               default:
+                       link_params->loopback_mode = PMM_LOOPBACK_NONE;
+                       break;
+               }
+       }
 
        rc = qed_mcp_set_link(hwfn, ptt, params->link_up);
 
@@ -1163,7 +1202,15 @@ static int qed_set_led(struct qed_dev *cdev, enum qed_led_mode mode)
        return status;
 }
 
+struct qed_selftest_ops qed_selftest_ops_pass = {
+       .selftest_memory = &qed_selftest_memory,
+       .selftest_interrupt = &qed_selftest_interrupt,
+       .selftest_register = &qed_selftest_register,
+       .selftest_clock = &qed_selftest_clock,
+};
+
 const struct qed_common_ops qed_common_ops_pass = {
+       .selftest = &qed_selftest_ops_pass,
        .probe = &qed_probe,
        .remove = &qed_remove,
        .set_power_state = &qed_set_power_state,
@@ -1177,6 +1224,7 @@ const struct qed_common_ops qed_common_ops_pass = {
        .sb_release = &qed_sb_release,
        .simd_handler_config = &qed_simd_handler_config,
        .simd_handler_clean = &qed_simd_handler_clean,
+       .can_link_change = &qed_can_link_change,
        .set_link = &qed_set_link,
        .get_link = &qed_get_current_link,
        .drain = &qed_drain,
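
For reference, requesting loopback from above this API boundary looks like
the following sketch, which mirrors what qede_selftest_run_loopback() does
later in this patchset:

        struct qed_link_params link_params;

        memset(&link_params, 0, sizeof(link_params));
        link_params.link_up = true;
        link_params.override_flags = QED_LINK_OVERRIDE_LOOPBACK_MODE;
        link_params.loopback_mode = QED_LINK_LOOPBACK_INT_PHY;
        edev->ops->common->set_link(edev->cdev, &link_params);
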
index b89c9a8..2f8309d 100644
@@ -472,6 +472,7 @@ static void qed_mcp_handle_link_change(struct qed_hwfn *p_hwfn,
                                       bool b_reset)
 {
        struct qed_mcp_link_state *p_link;
+       u8 max_bw, min_bw;
        u32 status = 0;
 
        p_link = &p_hwfn->mcp_info->link_output;
@@ -527,17 +528,20 @@ static void qed_mcp_handle_link_change(struct qed_hwfn *p_hwfn,
                p_link->speed = 0;
        }
 
-       /* Correct speed according to bandwidth allocation */
-       if (p_hwfn->mcp_info->func_info.bandwidth_max && p_link->speed) {
-               p_link->speed = p_link->speed *
-                               p_hwfn->mcp_info->func_info.bandwidth_max /
-                               100;
-               qed_init_pf_rl(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
-                              p_link->speed);
-               DP_VERBOSE(p_hwfn, NETIF_MSG_LINK,
-                          "Configured MAX bandwidth to be %08x Mb/sec\n",
-                          p_link->speed);
-       }
+       if (p_link->link_up && p_link->speed)
+               p_link->line_speed = p_link->speed;
+       else
+               p_link->line_speed = 0;
+
+       max_bw = p_hwfn->mcp_info->func_info.bandwidth_max;
+       min_bw = p_hwfn->mcp_info->func_info.bandwidth_min;
+
+       /* Max bandwidth configuration */
+       __qed_configure_pf_max_bandwidth(p_hwfn, p_ptt, p_link, max_bw);
+
+       /* Min bandwidth configuration */
+       __qed_configure_pf_min_bandwidth(p_hwfn, p_ptt, p_link, min_bw);
+       qed_configure_vp_wfq_on_link_change(p_hwfn->cdev, p_link->min_pf_rate);
 
        p_link->an = !!(status & LINK_STATUS_AUTO_NEGOTIATE_ENABLED);
        p_link->an_complete = !!(status &
@@ -648,6 +652,77 @@ int qed_mcp_set_link(struct qed_hwfn *p_hwfn,
        return 0;
 }
 
+static void qed_read_pf_bandwidth(struct qed_hwfn *p_hwfn,
+                                 struct public_func *p_shmem_info)
+{
+       struct qed_mcp_function_info *p_info;
+
+       p_info = &p_hwfn->mcp_info->func_info;
+
+       p_info->bandwidth_min = (p_shmem_info->config &
+                                FUNC_MF_CFG_MIN_BW_MASK) >>
+                                       FUNC_MF_CFG_MIN_BW_SHIFT;
+       if (p_info->bandwidth_min < 1 || p_info->bandwidth_min > 100) {
+               DP_INFO(p_hwfn,
+                       "bandwidth minimum out of bounds [%02x]. Set to 1\n",
+                       p_info->bandwidth_min);
+               p_info->bandwidth_min = 1;
+       }
+
+       p_info->bandwidth_max = (p_shmem_info->config &
+                                FUNC_MF_CFG_MAX_BW_MASK) >>
+                                       FUNC_MF_CFG_MAX_BW_SHIFT;
+       if (p_info->bandwidth_max < 1 || p_info->bandwidth_max > 100) {
+               DP_INFO(p_hwfn,
+                       "bandwidth maximum out of bounds [%02x]. Set to 100\n",
+                       p_info->bandwidth_max);
+               p_info->bandwidth_max = 100;
+       }
+}
+
+static u32 qed_mcp_get_shmem_func(struct qed_hwfn *p_hwfn,
+                                 struct qed_ptt *p_ptt,
+                                 struct public_func *p_data,
+                                 int pfid)
+{
+       u32 addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
+                                       PUBLIC_FUNC);
+       u32 mfw_path_offsize = qed_rd(p_hwfn, p_ptt, addr);
+       u32 func_addr = SECTION_ADDR(mfw_path_offsize, pfid);
+       u32 i, size;
+
+       memset(p_data, 0, sizeof(*p_data));
+
+       size = min_t(u32, sizeof(*p_data),
+                    QED_SECTION_SIZE(mfw_path_offsize));
+       for (i = 0; i < size / sizeof(u32); i++)
+               ((u32 *)p_data)[i] = qed_rd(p_hwfn, p_ptt,
+                                           func_addr + (i << 2));
+       return size;
+}
+
+static void qed_mcp_update_bw(struct qed_hwfn *p_hwfn,
+                             struct qed_ptt *p_ptt)
+{
+       struct qed_mcp_function_info *p_info;
+       struct public_func shmem_info;
+       u32 resp = 0, param = 0;
+
+       qed_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info,
+                              MCP_PF_ID(p_hwfn));
+
+       qed_read_pf_bandwidth(p_hwfn, &shmem_info);
+
+       p_info = &p_hwfn->mcp_info->func_info;
+
+       qed_configure_pf_min_bandwidth(p_hwfn->cdev, p_info->bandwidth_min);
+       qed_configure_pf_max_bandwidth(p_hwfn->cdev, p_info->bandwidth_max);
+
+       /* Acknowledge the MFW */
+       qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BW_UPDATE_ACK, 0, &resp,
+                   &param);
+}
+
 int qed_mcp_handle_events(struct qed_hwfn *p_hwfn,
                          struct qed_ptt *p_ptt)
 {
@@ -679,6 +754,9 @@ int qed_mcp_handle_events(struct qed_hwfn *p_hwfn,
                case MFW_DRV_MSG_TRANSCEIVER_STATE_CHANGE:
                        qed_mcp_handle_transceiver_change(p_hwfn, p_ptt);
                        break;
+               case MFW_DRV_MSG_BW_UPDATE:
+                       qed_mcp_update_bw(p_hwfn, p_ptt);
+                       break;
                default:
                        DP_NOTICE(p_hwfn, "Unimplemented MFW message %d\n", i);
                        rc = -EINVAL;
@@ -758,28 +836,6 @@ int qed_mcp_get_media_type(struct qed_dev *cdev,
        return 0;
 }
 
-static u32 qed_mcp_get_shmem_func(struct qed_hwfn *p_hwfn,
-                                 struct qed_ptt *p_ptt,
-                                 struct public_func *p_data,
-                                 int pfid)
-{
-       u32 addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
-                                       PUBLIC_FUNC);
-       u32 mfw_path_offsize = qed_rd(p_hwfn, p_ptt, addr);
-       u32 func_addr = SECTION_ADDR(mfw_path_offsize, pfid);
-       u32 i, size;
-
-       memset(p_data, 0, sizeof(*p_data));
-
-       size = min_t(u32, sizeof(*p_data),
-                    QED_SECTION_SIZE(mfw_path_offsize));
-       for (i = 0; i < size / sizeof(u32); i++)
-               ((u32 *)p_data)[i] = qed_rd(p_hwfn, p_ptt,
-                                           func_addr + (i << 2));
-
-       return size;
-}
-
 static int
 qed_mcp_get_shmem_proto(struct qed_hwfn *p_hwfn,
                        struct public_func *p_info,
@@ -818,26 +874,7 @@ int qed_mcp_fill_shmem_func_info(struct qed_hwfn *p_hwfn,
                return -EINVAL;
        }
 
-
-       info->bandwidth_min = (shmem_info.config &
-                              FUNC_MF_CFG_MIN_BW_MASK) >>
-                             FUNC_MF_CFG_MIN_BW_SHIFT;
-       if (info->bandwidth_min < 1 || info->bandwidth_min > 100) {
-               DP_INFO(p_hwfn,
-                       "bandwidth minimum out of bounds [%02x]. Set to 1\n",
-                       info->bandwidth_min);
-               info->bandwidth_min = 1;
-       }
-
-       info->bandwidth_max = (shmem_info.config &
-                              FUNC_MF_CFG_MAX_BW_MASK) >>
-                             FUNC_MF_CFG_MAX_BW_SHIFT;
-       if (info->bandwidth_max < 1 || info->bandwidth_max > 100) {
-               DP_INFO(p_hwfn,
-                       "bandwidth maximum out of bounds [%02x]. Set to 100\n",
-                       info->bandwidth_max);
-               info->bandwidth_max = 100;
-       }
+       qed_read_pf_bandwidth(p_hwfn, &shmem_info);
 
        if (shmem_info.mac_upper || shmem_info.mac_lower) {
                info->mac[0] = (u8)(shmem_info.mac_upper >> 8);
@@ -938,9 +975,10 @@ qed_mcp_send_drv_version(struct qed_hwfn *p_hwfn,
 
        p_drv_version = &union_data.drv_version;
        p_drv_version->version = p_ver->version;
+
        for (i = 0; i < MCP_DRV_VER_STR_SIZE - 1; i += 4) {
                val = cpu_to_be32(p_ver->name[i]);
-               *(u32 *)&p_drv_version->name[i * sizeof(u32)] = val;
+               *(__be32 *)&p_drv_version->name[i * sizeof(u32)] = val;
        }
 
        memset(&mb_params, 0, sizeof(mb_params));
@@ -979,3 +1017,45 @@ int qed_mcp_set_led(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
 
        return rc;
 }
+
+int qed_mcp_bist_register_test(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+       u32 drv_mb_param = 0, rsp, param;
+       int rc = 0;
+
+       drv_mb_param = (DRV_MB_PARAM_BIST_REGISTER_TEST <<
+                       DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT);
+
+       rc = qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
+                        drv_mb_param, &rsp, &param);
+
+       if (rc)
+               return rc;
+
+       if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
+           (param != DRV_MB_PARAM_BIST_RC_PASSED))
+               rc = -EAGAIN;
+
+       return rc;
+}
+
+int qed_mcp_bist_clock_test(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+       u32 drv_mb_param, rsp, param;
+       int rc = 0;
+
+       drv_mb_param = (DRV_MB_PARAM_BIST_CLOCK_TEST <<
+                       DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT);
+
+       rc = qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
+                        drv_mb_param, &rsp, &param);
+
+       if (rc)
+               return rc;
+
+       if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
+           (param != DRV_MB_PARAM_BIST_RC_PASSED))
+               rc = -EAGAIN;
+
+       return rc;
+}
index 50917a2..5f218ee 100644
@@ -40,7 +40,15 @@ struct qed_mcp_link_capabilities {
 struct qed_mcp_link_state {
        bool    link_up;
 
-       u32     speed; /* In Mb/s */
+       u32     min_pf_rate;
+
+       /* Actual link speed in Mb/s */
+       u32     line_speed;
+
+       /* PF max speed in Mb/s, deduced from line_speed
+        * according to PF max bandwidth configuration.
+        */
+       u32     speed;
        bool    full_duplex;
 
        bool    an;
@@ -237,6 +245,28 @@ int qed_mcp_set_led(struct qed_hwfn *p_hwfn,
                    struct qed_ptt *p_ptt,
                    enum qed_led_mode mode);
 
+/**
+ * @brief Bist register test
+ *
+ *  @param p_hwfn    - hw function
+ *  @param p_ptt     - PTT required for register access
+ *
+ * @return int - 0 - operation was successful.
+ */
+int qed_mcp_bist_register_test(struct qed_hwfn *p_hwfn,
+                              struct qed_ptt *p_ptt);
+
+/**
+ * @brief Bist clock test
+ *
+ *  @param p_hwfn    - hw function
+ *  @param p_ptt     - PTT required for register access
+ *
+ * @return int - 0 - operation was successful.
+ */
+int qed_mcp_bist_clock_test(struct qed_hwfn *p_hwfn,
+                           struct qed_ptt *p_ptt);
+
 /* Using hwfn number (and not pf_num) is required since in CMT mode,
  * same pf_num may be used by two different hwfn
  * TODO - this shouldn't really be in .h file, but until all fields
@@ -388,5 +418,14 @@ int qed_mcp_reset(struct qed_hwfn *p_hwfn,
  * @return true iff MFW is running and mcp_info is initialized
  */
 bool qed_mcp_is_init(struct qed_hwfn *p_hwfn);
-
+int qed_configure_pf_min_bandwidth(struct qed_dev *cdev, u8 min_bw);
+int qed_configure_pf_max_bandwidth(struct qed_dev *cdev, u8 max_bw);
+int __qed_configure_pf_max_bandwidth(struct qed_hwfn *p_hwfn,
+                                    struct qed_ptt *p_ptt,
+                                    struct qed_mcp_link_state *p_link,
+                                    u8 max_bw);
+int __qed_configure_pf_min_bandwidth(struct qed_hwfn *p_hwfn,
+                                    struct qed_ptt *p_ptt,
+                                    struct qed_mcp_link_state *p_link,
+                                    u8 min_bw);
 #endif
index 55451a4..bf4d7cc 100644
 #define PBF_REG_NGE_COMP_VER                   0xd80524UL
 #define PRS_REG_NGE_COMP_VER                   0x1f0878UL
 
+#define QM_REG_WFQPFWEIGHT     0x2f4e80UL
+#define QM_REG_WFQVPWEIGHT     0x2fa000UL
 #endif
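
Both weight arrays use a 4-byte stride per entry, which is where the "* 4"
in qed_init_pf_wfq() and qed_init_vport_wfq() comes from; PF 3's weight
register, for instance, sits at 0x2f4e80 + 3 * 4 = 0x2f4e8c.
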
diff --git a/drivers/net/ethernet/qlogic/qed/qed_selftest.c b/drivers/net/ethernet/qlogic/qed/qed_selftest.c
new file mode 100644
index 0000000..a342bfe
--- /dev/null
@@ -0,0 +1,76 @@
+#include "qed.h"
+#include "qed_dev_api.h"
+#include "qed_mcp.h"
+#include "qed_sp.h"
+
+int qed_selftest_memory(struct qed_dev *cdev)
+{
+       int rc = 0, i;
+
+       for_each_hwfn(cdev, i) {
+               rc = qed_sp_heartbeat_ramrod(&cdev->hwfns[i]);
+               if (rc)
+                       return rc;
+       }
+
+       return rc;
+}
+
+int qed_selftest_interrupt(struct qed_dev *cdev)
+{
+       int rc = 0, i;
+
+       for_each_hwfn(cdev, i) {
+               rc = qed_sp_heartbeat_ramrod(&cdev->hwfns[i]);
+               if (rc)
+                       return rc;
+       }
+
+       return rc;
+}
+
+int qed_selftest_register(struct qed_dev *cdev)
+{
+       struct qed_hwfn *p_hwfn;
+       struct qed_ptt *p_ptt;
+       int rc = 0, i;
+
+       /* although performed by MCP, this test is per engine */
+       for_each_hwfn(cdev, i) {
+               p_hwfn = &cdev->hwfns[i];
+               p_ptt = qed_ptt_acquire(p_hwfn);
+               if (!p_ptt) {
+                       DP_ERR(p_hwfn, "failed to acquire ptt\n");
+                       return -EBUSY;
+               }
+               rc = qed_mcp_bist_register_test(p_hwfn, p_ptt);
+               qed_ptt_release(p_hwfn, p_ptt);
+               if (rc)
+                       break;
+       }
+
+       return rc;
+}
+
+int qed_selftest_clock(struct qed_dev *cdev)
+{
+       struct qed_hwfn *p_hwfn;
+       struct qed_ptt *p_ptt;
+       int rc = 0, i;
+
+       /* although performed by MCP, this test is per engine */
+       for_each_hwfn(cdev, i) {
+               p_hwfn = &cdev->hwfns[i];
+               p_ptt = qed_ptt_acquire(p_hwfn);
+               if (!p_ptt) {
+                       DP_ERR(p_hwfn, "failed to acquire ptt\n");
+                       return -EBUSY;
+               }
+               rc = qed_mcp_bist_clock_test(p_hwfn, p_ptt);
+               qed_ptt_release(p_hwfn, p_ptt);
+               if (rc)
+                       break;
+       }
+
+       return rc;
+}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_selftest.h b/drivers/net/ethernet/qlogic/qed/qed_selftest.h
new file mode 100644
index 0000000..50eb0b4
--- /dev/null
@@ -0,0 +1,40 @@
+#ifndef _QED_SELFTEST_API_H
+#define _QED_SELFTEST_API_H
+#include <linux/types.h>
+
+/**
+ * @brief qed_selftest_memory - Perform memory test
+ *
+ * @param cdev
+ *
+ * @return int
+ */
+int qed_selftest_memory(struct qed_dev *cdev);
+
+/**
+ * @brief qed_selftest_interrupt - Perform interrupt test
+ *
+ * @param cdev
+ *
+ * @return int
+ */
+int qed_selftest_interrupt(struct qed_dev *cdev);
+
+/**
+ * @brief qed_selftest_register - Perform register test
+ *
+ * @param cdev
+ *
+ * @return int
+ */
+int qed_selftest_register(struct qed_dev *cdev);
+
+/**
+ * @brief qed_selftest_clock - Perform clock test
+ *
+ * @param cdev
+ *
+ * @return int
+ */
+int qed_selftest_clock(struct qed_dev *cdev);
+#endif
index 4b91cb3..eec137f 100644
@@ -369,4 +369,14 @@ int qed_sp_pf_update_tunn_cfg(struct qed_hwfn *p_hwfn,
                              struct qed_tunn_update_params *p_tunn,
                              enum spq_mode comp_mode,
                              struct qed_spq_comp_cb *p_comp_data);
+/**
+ * @brief qed_sp_heartbeat_ramrod - Send empty Ramrod
+ *
+ * @param p_hwfn
+ *
+ * @return int
+ */
+
+int qed_sp_heartbeat_ramrod(struct qed_hwfn *p_hwfn);
+
 #endif
index 7ccd96e..e1e2344 100644
@@ -362,7 +362,15 @@ int qed_sp_pf_start(struct qed_hwfn *p_hwfn,
                   sb, sb_index,
                   p_ramrod->outer_tag);
 
-       return qed_spq_post(p_hwfn, p_ent, NULL);
+       rc = qed_spq_post(p_hwfn, p_ent, NULL);
+
+       if (p_tunn) {
+               qed_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt,
+                                    p_tunn->tunn_mode);
+               p_hwfn->cdev->tunn_mode = p_tunn->tunn_mode;
+       }
+
+       return rc;
 }
 
 /* Set pf update ramrod command params */
@@ -428,3 +436,24 @@ int qed_sp_pf_stop(struct qed_hwfn *p_hwfn)
 
        return qed_spq_post(p_hwfn, p_ent, NULL);
 }
+
+int qed_sp_heartbeat_ramrod(struct qed_hwfn *p_hwfn)
+{
+       struct qed_spq_entry *p_ent = NULL;
+       struct qed_sp_init_data init_data;
+       int rc;
+
+       /* Get SPQ entry */
+       memset(&init_data, 0, sizeof(init_data));
+       init_data.cid = qed_spq_get_cid(p_hwfn);
+       init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+       init_data.comp_mode = QED_SPQ_MODE_EBLOCK;
+
+       rc = qed_sp_init_request(p_hwfn, &p_ent,
+                                COMMON_RAMROD_EMPTY, PROTOCOLID_COMMON,
+                                &init_data);
+       if (rc)
+               return rc;
+
+       return qed_spq_post(p_hwfn, p_ent, NULL);
+}
index 16df159..ff3ac0c 100644
@@ -59,16 +59,16 @@ struct qede_stats {
 
        /* port */
        u64 rx_64_byte_packets;
-       u64 rx_127_byte_packets;
-       u64 rx_255_byte_packets;
-       u64 rx_511_byte_packets;
-       u64 rx_1023_byte_packets;
-       u64 rx_1518_byte_packets;
-       u64 rx_1522_byte_packets;
-       u64 rx_2047_byte_packets;
-       u64 rx_4095_byte_packets;
-       u64 rx_9216_byte_packets;
-       u64 rx_16383_byte_packets;
+       u64 rx_65_to_127_byte_packets;
+       u64 rx_128_to_255_byte_packets;
+       u64 rx_256_to_511_byte_packets;
+       u64 rx_512_to_1023_byte_packets;
+       u64 rx_1024_to_1518_byte_packets;
+       u64 rx_1519_to_1522_byte_packets;
+       u64 rx_1519_to_2047_byte_packets;
+       u64 rx_2048_to_4095_byte_packets;
+       u64 rx_4096_to_9216_byte_packets;
+       u64 rx_9217_to_16383_byte_packets;
        u64 rx_crc_errors;
        u64 rx_mac_crtl_frames;
        u64 rx_pause_frames;
@@ -308,6 +308,10 @@ void qede_reload(struct qede_dev *edev,
                 union qede_reload_args *args);
 int qede_change_mtu(struct net_device *dev, int new_mtu);
 void qede_fill_by_demand_stats(struct qede_dev *edev);
+bool qede_has_rx_work(struct qede_rx_queue *rxq);
+int qede_txq_has_work(struct qede_tx_queue *txq);
+void qede_recycle_rx_bd_ring(struct qede_rx_queue *rxq, struct qede_dev *edev,
+                            u8 count);
 
 #define RX_RING_SIZE_POW       13
 #define RX_RING_SIZE           ((u16)BIT(RX_RING_SIZE_POW))
index f0982f1..0d04f16 100644
@@ -9,6 +9,7 @@
 #include <linux/version.h>
 #include <linux/types.h>
 #include <linux/netdevice.h>
+#include <linux/etherdevice.h>
 #include <linux/ethtool.h>
 #include <linux/string.h>
 #include <linux/pci.h>
@@ -27,6 +28,9 @@
 #define QEDE_RQSTAT_STRING(stat_name) (#stat_name)
 #define QEDE_RQSTAT(stat_name) \
         {QEDE_RQSTAT_OFFSET(stat_name), QEDE_RQSTAT_STRING(stat_name)}
+
+#define QEDE_SELFTEST_POLL_COUNT 100
+
 static const struct {
        u64 offset;
        char string[ETH_GSTRING_LEN];
@@ -59,16 +63,16 @@ static const struct {
        QEDE_STAT(tx_bcast_pkts),
 
        QEDE_PF_STAT(rx_64_byte_packets),
-       QEDE_PF_STAT(rx_127_byte_packets),
-       QEDE_PF_STAT(rx_255_byte_packets),
-       QEDE_PF_STAT(rx_511_byte_packets),
-       QEDE_PF_STAT(rx_1023_byte_packets),
-       QEDE_PF_STAT(rx_1518_byte_packets),
-       QEDE_PF_STAT(rx_1522_byte_packets),
-       QEDE_PF_STAT(rx_2047_byte_packets),
-       QEDE_PF_STAT(rx_4095_byte_packets),
-       QEDE_PF_STAT(rx_9216_byte_packets),
-       QEDE_PF_STAT(rx_16383_byte_packets),
+       QEDE_PF_STAT(rx_65_to_127_byte_packets),
+       QEDE_PF_STAT(rx_128_to_255_byte_packets),
+       QEDE_PF_STAT(rx_256_to_511_byte_packets),
+       QEDE_PF_STAT(rx_512_to_1023_byte_packets),
+       QEDE_PF_STAT(rx_1024_to_1518_byte_packets),
+       QEDE_PF_STAT(rx_1519_to_1522_byte_packets),
+       QEDE_PF_STAT(rx_1519_to_2047_byte_packets),
+       QEDE_PF_STAT(rx_2048_to_4095_byte_packets),
+       QEDE_PF_STAT(rx_4096_to_9216_byte_packets),
+       QEDE_PF_STAT(rx_9217_to_16383_byte_packets),
        QEDE_PF_STAT(tx_64_byte_packets),
        QEDE_PF_STAT(tx_65_to_127_byte_packets),
        QEDE_PF_STAT(tx_128_to_255_byte_packets),
@@ -116,6 +120,32 @@ static const struct {
 
 #define QEDE_NUM_STATS ARRAY_SIZE(qede_stats_arr)
 
+enum {
+       QEDE_PRI_FLAG_CMT,
+       QEDE_PRI_FLAG_LEN,
+};
+
+static const char qede_private_arr[QEDE_PRI_FLAG_LEN][ETH_GSTRING_LEN] = {
+       "Coupled-Function",
+};
+
+enum qede_ethtool_tests {
+       QEDE_ETHTOOL_INT_LOOPBACK,
+       QEDE_ETHTOOL_INTERRUPT_TEST,
+       QEDE_ETHTOOL_MEMORY_TEST,
+       QEDE_ETHTOOL_REGISTER_TEST,
+       QEDE_ETHTOOL_CLOCK_TEST,
+       QEDE_ETHTOOL_TEST_MAX
+};
+
+static const char qede_tests_str_arr[QEDE_ETHTOOL_TEST_MAX][ETH_GSTRING_LEN] = {
+       "Internal loopback (offline)",
+       "Interrupt (online)\t",
+       "Memory (online)\t\t",
+       "Register (online)\t",
+       "Clock (online)\t\t",
+};
+
 static void qede_get_strings_stats(struct qede_dev *edev, u8 *buf)
 {
        int i, j, k;
@@ -139,6 +169,14 @@ static void qede_get_strings(struct net_device *dev, u32 stringset, u8 *buf)
        case ETH_SS_STATS:
                qede_get_strings_stats(edev, buf);
                break;
+       case ETH_SS_PRIV_FLAGS:
+               memcpy(buf, qede_private_arr,
+                      ETH_GSTRING_LEN * QEDE_PRI_FLAG_LEN);
+               break;
+       case ETH_SS_TEST:
+               memcpy(buf, qede_tests_str_arr,
+                      ETH_GSTRING_LEN * QEDE_ETHTOOL_TEST_MAX);
+               break;
        default:
                DP_VERBOSE(edev, QED_MSG_DEBUG,
                           "Unsupported stringset 0x%08x\n", stringset);
@@ -177,7 +215,10 @@ static int qede_get_sset_count(struct net_device *dev, int stringset)
        switch (stringset) {
        case ETH_SS_STATS:
                return num_stats + QEDE_NUM_RQSTATS;
-
+       case ETH_SS_PRIV_FLAGS:
+               return QEDE_PRI_FLAG_LEN;
+       case ETH_SS_TEST:
+               return QEDE_ETHTOOL_TEST_MAX;
        default:
                DP_VERBOSE(edev, QED_MSG_DEBUG,
                           "Unsupported stringset 0x%08x\n", stringset);
@@ -185,6 +226,13 @@ static int qede_get_sset_count(struct net_device *dev, int stringset)
        }
 }
 
+static u32 qede_get_priv_flags(struct net_device *dev)
+{
+       struct qede_dev *edev = netdev_priv(dev);
+
+       return (!!(edev->dev_info.common.num_hwfns > 1)) << QEDE_PRI_FLAG_CMT;
+}
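
With the expression above, a two-hwfn (CMT) adapter returns
1 << QEDE_PRI_FLAG_CMT = 0x1, so "ethtool --show-priv-flags" reports
Coupled-Function as on, while a single-function device returns 0.
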
+
 static int qede_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
 {
        struct qede_dev *edev = netdev_priv(dev);
@@ -217,9 +265,9 @@ static int qede_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
        struct qed_link_params params;
        u32 speed;
 
-       if (!edev->dev_info.common.is_mf_default) {
+       if (!edev->ops || !edev->ops->common->can_link_change(edev->cdev)) {
                DP_INFO(edev,
-                       "Link parameters can not be changed in non-default mode\n");
+                       "Link settings are not allowed to be changed\n");
                return -EOPNOTSUPP;
        }
 
@@ -328,6 +376,12 @@ static int qede_nway_reset(struct net_device *dev)
        struct qed_link_output current_link;
        struct qed_link_params link_params;
 
+       if (!edev->ops || !edev->ops->common->can_link_change(edev->cdev)) {
+               DP_INFO(edev,
+                       "Link settings are not allowed to be changed\n");
+               return -EOPNOTSUPP;
+       }
+
        if (!netif_running(dev))
                return 0;
 
@@ -428,9 +482,9 @@ static int qede_set_pauseparam(struct net_device *dev,
        struct qed_link_params params;
        struct qed_link_output current_link;
 
-       if (!edev->dev_info.common.is_mf_default) {
+       if (!edev->ops || !edev->ops->common->can_link_change(edev->cdev)) {
                DP_INFO(edev,
-                       "Pause parameters can not be updated in non-default mode\n");
+                       "Pause settings are not allowed to be changed\n");
                return -EOPNOTSUPP;
        }
 
@@ -799,6 +853,267 @@ static int qede_set_rxfh(struct net_device *dev, const u32 *indir,
        return 0;
 }
 
+/* This function enables the interrupt generation and the NAPI on the device */
+static void qede_netif_start(struct qede_dev *edev)
+{
+       int i;
+
+       if (!netif_running(edev->ndev))
+               return;
+
+       for_each_rss(i) {
+               /* Update and reenable interrupts */
+               qed_sb_ack(edev->fp_array[i].sb_info, IGU_INT_ENABLE, 1);
+               napi_enable(&edev->fp_array[i].napi);
+       }
+}
+
+/* This function disables the NAPI and the interrupt generation on the device */
+static void qede_netif_stop(struct qede_dev *edev)
+{
+       int i;
+
+       for_each_rss(i) {
+               napi_disable(&edev->fp_array[i].napi);
+               /* Disable interrupts */
+               qed_sb_ack(edev->fp_array[i].sb_info, IGU_INT_DISABLE, 0);
+       }
+}
+
+static int qede_selftest_transmit_traffic(struct qede_dev *edev,
+                                         struct sk_buff *skb)
+{
+       struct qede_tx_queue *txq = &edev->fp_array[0].txqs[0];
+       struct eth_tx_1st_bd *first_bd;
+       dma_addr_t mapping;
+       int i, idx, val;
+
+       /* Fill the entry in the SW ring and the BDs in the FW ring */
+       idx = txq->sw_tx_prod & NUM_TX_BDS_MAX;
+       txq->sw_tx_ring[idx].skb = skb;
+       first_bd = qed_chain_produce(&txq->tx_pbl);
+       memset(first_bd, 0, sizeof(*first_bd));
+       val = 1 << ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT;
+       first_bd->data.bd_flags.bitfields = val;
+
+       /* Map skb linear data for DMA and set in the first BD */
+       mapping = dma_map_single(&edev->pdev->dev, skb->data,
+                                skb_headlen(skb), DMA_TO_DEVICE);
+       if (unlikely(dma_mapping_error(&edev->pdev->dev, mapping))) {
+               DP_NOTICE(edev, "SKB mapping failed\n");
+               return -ENOMEM;
+       }
+       BD_SET_UNMAP_ADDR_LEN(first_bd, mapping, skb_headlen(skb));
+
+       /* update the first BD with the actual num BDs */
+       first_bd->data.nbds = 1;
+       txq->sw_tx_prod++;
+       /* 'next page' entries are counted in the producer value */
+       val = cpu_to_le16(qed_chain_get_prod_idx(&txq->tx_pbl));
+       txq->tx_db.data.bd_prod = val;
+
+       /* wmb makes sure that the BD data is updated before updating the
+        * producer, otherwise FW may read old data from the BDs.
+        */
+       wmb();
+       barrier();
+       writel(txq->tx_db.raw, txq->doorbell_addr);
+
+       /* mmiowb is needed to synchronize doorbell writes from more than one
+        * processor. It guarantees that the write arrives to the device before
+        * the queue lock is released and another start_xmit is called (possibly
+        * on another CPU). Without this barrier, the next doorbell can bypass
+        * this doorbell. This is applicable to IA64/Altix systems.
+        */
+       mmiowb();
+
+       for (i = 0; i < QEDE_SELFTEST_POLL_COUNT; i++) {
+               if (qede_txq_has_work(txq))
+                       break;
+               usleep_range(100, 200);
+       }
+
+       if (!qede_txq_has_work(txq)) {
+               DP_NOTICE(edev, "Tx completion didn't happen\n");
+               return -1;
+       }
+
+       first_bd = (struct eth_tx_1st_bd *)qed_chain_consume(&txq->tx_pbl);
+       dma_unmap_page(&edev->pdev->dev, BD_UNMAP_ADDR(first_bd),
+                      BD_UNMAP_LEN(first_bd), DMA_TO_DEVICE);
+       txq->sw_tx_cons++;
+       txq->sw_tx_ring[idx].skb = NULL;
+
+       return 0;
+}
+
+static int qede_selftest_receive_traffic(struct qede_dev *edev)
+{
+       struct qede_rx_queue *rxq = edev->fp_array[0].rxq;
+       u16 hw_comp_cons, sw_comp_cons, sw_rx_index, len;
+       struct eth_fast_path_rx_reg_cqe *fp_cqe;
+       struct sw_rx_data *sw_rx_data;
+       union eth_rx_cqe *cqe;
+       u8 *data_ptr;
+       int i;
+
+       /* The packet is expected to arrive on rx-queue 0 even though RSS is
+        * enabled. This is because queue 0 is configured as the default
+        * queue and the loopback traffic is not IP.
+        */
+       for (i = 0; i < QEDE_SELFTEST_POLL_COUNT; i++) {
+               if (qede_has_rx_work(rxq))
+                       break;
+               usleep_range(100, 200);
+       }
+
+       if (!qede_has_rx_work(rxq)) {
+               DP_NOTICE(edev, "Failed to receive the traffic\n");
+               return -1;
+       }
+
+       hw_comp_cons = le16_to_cpu(*rxq->hw_cons_ptr);
+       sw_comp_cons = qed_chain_get_cons_idx(&rxq->rx_comp_ring);
+
+       /* Memory barrier to prevent the CPU from speculatively reading the
+        * CQE/BD before reading hw_comp_cons. If the CQE were read before FW
+        * writes it, but hw_comp_cons were read after FW updates it, the CPU
+        * would pair a fresh consumer index with a stale CQE.
+        */
+       rmb();
+
+       /* Get the CQE from the completion ring */
+       cqe = (union eth_rx_cqe *)qed_chain_consume(&rxq->rx_comp_ring);
+
+       /* Get the data from the SW ring */
+       sw_rx_index = rxq->sw_rx_cons & NUM_RX_BDS_MAX;
+       sw_rx_data = &rxq->sw_rx_ring[sw_rx_index];
+       fp_cqe = &cqe->fast_path_regular;
+       len =  le16_to_cpu(fp_cqe->len_on_first_bd);
+       data_ptr = (u8 *)(page_address(sw_rx_data->data) +
+                    fp_cqe->placement_offset + sw_rx_data->page_offset);
+       for (i = ETH_HLEN; i < len; i++)
+               if (data_ptr[i] != (unsigned char)(i & 0xff)) {
+                       DP_NOTICE(edev, "Loopback test failed\n");
+                       qede_recycle_rx_bd_ring(rxq, edev, 1);
+                       return -1;
+               }
+
+       qede_recycle_rx_bd_ring(rxq, edev, 1);
+
+       return 0;
+}
+
+static int qede_selftest_run_loopback(struct qede_dev *edev, u32 loopback_mode)
+{
+       struct qed_link_params link_params;
+       struct sk_buff *skb = NULL;
+       int rc = 0, i;
+       u32 pkt_size;
+       u8 *packet;
+
+       if (!netif_running(edev->ndev)) {
+               DP_NOTICE(edev, "Interface is down\n");
+               return -EINVAL;
+       }
+
+       qede_netif_stop(edev);
+
+       /* Bring up the link in Loopback mode */
+       memset(&link_params, 0, sizeof(link_params));
+       link_params.link_up = true;
+       link_params.override_flags = QED_LINK_OVERRIDE_LOOPBACK_MODE;
+       link_params.loopback_mode = loopback_mode;
+       edev->ops->common->set_link(edev->cdev, &link_params);
+
+       /* Wait for loopback configuration to apply */
+       msleep_interruptible(500);
+
+       /* prepare the loopback packet */
+       pkt_size = edev->ndev->mtu + ETH_HLEN;
+
+       skb = netdev_alloc_skb(edev->ndev, pkt_size);
+       if (!skb) {
+               DP_INFO(edev, "Can't allocate skb\n");
+               rc = -ENOMEM;
+               goto test_loopback_exit;
+       }
+       packet = skb_put(skb, pkt_size);
+       ether_addr_copy(packet, edev->ndev->dev_addr);
+       ether_addr_copy(packet + ETH_ALEN, edev->ndev->dev_addr);
+       memset(packet + (2 * ETH_ALEN), 0x77, (ETH_HLEN - (2 * ETH_ALEN)));
+       for (i = ETH_HLEN; i < pkt_size; i++)
+               packet[i] = (unsigned char)(i & 0xff);
+
+       rc = qede_selftest_transmit_traffic(edev, skb);
+       if (rc)
+               goto test_loopback_exit;
+
+       rc = qede_selftest_receive_traffic(edev);
+       if (rc)
+               goto test_loopback_exit;
+
+       DP_VERBOSE(edev, NETIF_MSG_RX_STATUS, "Loopback test successful\n");
+
+test_loopback_exit:
+       dev_kfree_skb(skb);
+
+       /* Bring up the link in Normal mode */
+       memset(&link_params, 0, sizeof(link_params));
+       link_params.link_up = true;
+       link_params.override_flags = QED_LINK_OVERRIDE_LOOPBACK_MODE;
+       link_params.loopback_mode = QED_LINK_LOOPBACK_NONE;
+       edev->ops->common->set_link(edev->cdev, &link_params);
+
+       /* Wait for loopback configuration to apply */
+       msleep_interruptible(500);
+
+       qede_netif_start(edev);
+
+       return rc;
+}
+
+static void qede_self_test(struct net_device *dev,
+                          struct ethtool_test *etest, u64 *buf)
+{
+       struct qede_dev *edev = netdev_priv(dev);
+
+       DP_VERBOSE(edev, QED_MSG_DEBUG,
+                  "Self-test command parameters: offline = %d, external_lb = %d\n",
+                  (etest->flags & ETH_TEST_FL_OFFLINE),
+                  (etest->flags & ETH_TEST_FL_EXTERNAL_LB) >> 2);
+
+       memset(buf, 0, sizeof(u64) * QEDE_ETHTOOL_TEST_MAX);
+
+       if (etest->flags & ETH_TEST_FL_OFFLINE) {
+               if (qede_selftest_run_loopback(edev,
+                                              QED_LINK_LOOPBACK_INT_PHY)) {
+                       buf[QEDE_ETHTOOL_INT_LOOPBACK] = 1;
+                       etest->flags |= ETH_TEST_FL_FAILED;
+               }
+       }
+
+       if (edev->ops->common->selftest->selftest_interrupt(edev->cdev)) {
+               buf[QEDE_ETHTOOL_INTERRUPT_TEST] = 1;
+               etest->flags |= ETH_TEST_FL_FAILED;
+       }
+
+       if (edev->ops->common->selftest->selftest_memory(edev->cdev)) {
+               buf[QEDE_ETHTOOL_MEMORY_TEST] = 1;
+               etest->flags |= ETH_TEST_FL_FAILED;
+       }
+
+       if (edev->ops->common->selftest->selftest_register(edev->cdev)) {
+               buf[QEDE_ETHTOOL_REGISTER_TEST] = 1;
+               etest->flags |= ETH_TEST_FL_FAILED;
+       }
+
+       if (edev->ops->common->selftest->selftest_clock(edev->cdev)) {
+               buf[QEDE_ETHTOOL_CLOCK_TEST] = 1;
+               etest->flags |= ETH_TEST_FL_FAILED;
+       }
+}
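
Each failing test writes 1 into its slot of the results buffer and sets
ETH_TEST_FL_FAILED in the flags. Userspace drives this path through
ethtool's self-test command, e.g. "ethtool -t ethX offline" to include the
internal-loopback test, or "ethtool -t ethX online" for just the four
online tests.
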
+
 static const struct ethtool_ops qede_ethtool_ops = {
        .get_settings = qede_get_settings,
        .set_settings = qede_set_settings,
@@ -814,6 +1129,7 @@ static const struct ethtool_ops qede_ethtool_ops = {
        .get_strings = qede_get_strings,
        .set_phys_id = qede_set_phys_id,
        .get_ethtool_stats = qede_get_ethtool_stats,
+       .get_priv_flags = qede_get_priv_flags,
        .get_sset_count = qede_get_sset_count,
        .get_rxnfc = qede_get_rxnfc,
        .set_rxnfc = qede_set_rxnfc,
@@ -823,6 +1139,7 @@ static const struct ethtool_ops qede_ethtool_ops = {
        .set_rxfh = qede_set_rxfh,
        .get_channels = qede_get_channels,
        .set_channels = qede_set_channels,
+       .self_test = qede_self_test,
 };
 
 void qede_set_ethtool_ops(struct net_device *dev)
index 197ef85..82d85cc 100644
@@ -668,7 +668,7 @@ netdev_tx_t qede_start_xmit(struct sk_buff *skb,
        return NETDEV_TX_OK;
 }
 
-static int qede_txq_has_work(struct qede_tx_queue *txq)
+int qede_txq_has_work(struct qede_tx_queue *txq)
 {
        u16 hw_bd_cons;
 
@@ -751,7 +751,7 @@ static int qede_tx_int(struct qede_dev *edev,
        return 0;
 }
 
-static bool qede_has_rx_work(struct qede_rx_queue *rxq)
+bool qede_has_rx_work(struct qede_rx_queue *rxq)
 {
        u16 hw_comp_cons, sw_comp_cons;
 
@@ -806,8 +806,8 @@ static inline void qede_reuse_page(struct qede_dev *edev,
 /* In case of allocation failures reuse buffers
  * from consumer index to produce buffers for firmware
  */
-static void qede_recycle_rx_bd_ring(struct qede_rx_queue *rxq,
-                                   struct qede_dev *edev, u8 count)
+void qede_recycle_rx_bd_ring(struct qede_rx_queue *rxq,
+                            struct qede_dev *edev, u8 count)
 {
        struct sw_rx_data *curr_cons;
 
@@ -1638,16 +1638,25 @@ void qede_fill_by_demand_stats(struct qede_dev *edev)
        edev->stats.coalesced_bytes = stats.tpa_coalesced_bytes;
 
        edev->stats.rx_64_byte_packets = stats.rx_64_byte_packets;
-       edev->stats.rx_127_byte_packets = stats.rx_127_byte_packets;
-       edev->stats.rx_255_byte_packets = stats.rx_255_byte_packets;
-       edev->stats.rx_511_byte_packets = stats.rx_511_byte_packets;
-       edev->stats.rx_1023_byte_packets = stats.rx_1023_byte_packets;
-       edev->stats.rx_1518_byte_packets = stats.rx_1518_byte_packets;
-       edev->stats.rx_1522_byte_packets = stats.rx_1522_byte_packets;
-       edev->stats.rx_2047_byte_packets = stats.rx_2047_byte_packets;
-       edev->stats.rx_4095_byte_packets = stats.rx_4095_byte_packets;
-       edev->stats.rx_9216_byte_packets = stats.rx_9216_byte_packets;
-       edev->stats.rx_16383_byte_packets = stats.rx_16383_byte_packets;
+       edev->stats.rx_65_to_127_byte_packets = stats.rx_65_to_127_byte_packets;
+       edev->stats.rx_128_to_255_byte_packets =
+                               stats.rx_128_to_255_byte_packets;
+       edev->stats.rx_256_to_511_byte_packets =
+                               stats.rx_256_to_511_byte_packets;
+       edev->stats.rx_512_to_1023_byte_packets =
+                               stats.rx_512_to_1023_byte_packets;
+       edev->stats.rx_1024_to_1518_byte_packets =
+                               stats.rx_1024_to_1518_byte_packets;
+       edev->stats.rx_1519_to_1522_byte_packets =
+                               stats.rx_1519_to_1522_byte_packets;
+       edev->stats.rx_1519_to_2047_byte_packets =
+                               stats.rx_1519_to_2047_byte_packets;
+       edev->stats.rx_2048_to_4095_byte_packets =
+                               stats.rx_2048_to_4095_byte_packets;
+       edev->stats.rx_4096_to_9216_byte_packets =
+                               stats.rx_4096_to_9216_byte_packets;
+       edev->stats.rx_9217_to_16383_byte_packets =
+                               stats.rx_9217_to_16383_byte_packets;
        edev->stats.rx_crc_errors = stats.rx_crc_errors;
        edev->stats.rx_mac_crtl_frames = stats.rx_mac_crtl_frames;
        edev->stats.rx_pause_frames = stats.rx_pause_frames;
index 55007f1..caf6ddb 100644
@@ -37,8 +37,8 @@
 
 #define _QLCNIC_LINUX_MAJOR 5
 #define _QLCNIC_LINUX_MINOR 3
-#define _QLCNIC_LINUX_SUBVERSION 63
-#define QLCNIC_LINUX_VERSIONID  "5.3.63"
+#define _QLCNIC_LINUX_SUBVERSION 64
+#define QLCNIC_LINUX_VERSIONID  "5.3.64"
 #define QLCNIC_DRV_IDC_VER  0x01
 #define QLCNIC_DRIVER_VERSION  ((_QLCNIC_LINUX_MAJOR << 16) |\
                 (_QLCNIC_LINUX_MINOR << 8) | (_QLCNIC_LINUX_SUBVERSION))
index 1ef0393..6e2add9 100644
@@ -719,7 +719,7 @@ qcaspi_netdev_xmit(struct sk_buff *skb, struct net_device *dev)
                qca->stats.ring_full++;
        }
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        if (qca->spi_thread &&
            qca->spi_thread->state != TASK_RUNNING)
@@ -734,7 +734,7 @@ qcaspi_netdev_tx_timeout(struct net_device *dev)
        struct qcaspi *qca = netdev_priv(dev);
 
        netdev_info(qca->net_dev, "Transmit timeout at %ld, latency %ld\n",
-                   jiffies, jiffies - dev->trans_start);
+                   jiffies, jiffies - dev_trans_start(dev));
        qca->net_dev->stats.tx_errors++;
        /* Trigger tx queue flush and QCA7000 reset */
        qca->sync = QCASPI_SYNC_UNKNOWN;
index d77d60e..5cb9678 100644
@@ -544,7 +544,7 @@ static void tx_timeout(struct net_device *dev)
        dev->stats.tx_errors++;
        /* Try to restart the adapter. */
        hardware_init(dev);
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
        dev->stats.tx_errors++;
 }
index 238b56f..34066e0 100644
@@ -246,10 +246,9 @@ static void ravb_ring_format(struct net_device *ndev, int q)
        for (i = 0; i < priv->num_rx_ring[q]; i++) {
                /* RX descriptor */
                rx_desc = &priv->rx_ring[q][i];
-               /* The size of the buffer should be on 16-byte boundary. */
-               rx_desc->ds_cc = cpu_to_le16(ALIGN(PKT_BUF_SZ, 16));
+               rx_desc->ds_cc = cpu_to_le16(PKT_BUF_SZ);
                dma_addr = dma_map_single(ndev->dev.parent, priv->rx_skb[q][i]->data,
-                                         ALIGN(PKT_BUF_SZ, 16),
+                                         PKT_BUF_SZ,
                                          DMA_FROM_DEVICE);
                /* We just set the data size to 0 for a failed mapping which
                 * should prevent DMA from happening...
@@ -558,7 +557,7 @@ static bool ravb_rx(struct net_device *ndev, int *quota, int q)
                        skb = priv->rx_skb[q][entry];
                        priv->rx_skb[q][entry] = NULL;
                        dma_unmap_single(ndev->dev.parent, le32_to_cpu(desc->dptr),
-                                        ALIGN(PKT_BUF_SZ, 16),
+                                        PKT_BUF_SZ,
                                         DMA_FROM_DEVICE);
                        get_ts &= (q == RAVB_NC) ?
                                        RAVB_RXTSTAMP_TYPE_V2_L2_EVENT :
@@ -588,8 +587,7 @@ static bool ravb_rx(struct net_device *ndev, int *quota, int q)
        for (; priv->cur_rx[q] - priv->dirty_rx[q] > 0; priv->dirty_rx[q]++) {
                entry = priv->dirty_rx[q] % priv->num_rx_ring[q];
                desc = &priv->rx_ring[q][entry];
-               /* The size of the buffer should be on 16-byte boundary. */
-               desc->ds_cc = cpu_to_le16(ALIGN(PKT_BUF_SZ, 16));
+               desc->ds_cc = cpu_to_le16(PKT_BUF_SZ);
 
                if (!priv->rx_skb[q][entry]) {
                        skb = netdev_alloc_skb(ndev,
index ceea74c..04cd39f 100644
@@ -482,7 +482,7 @@ static void sh_eth_chip_reset(struct net_device *ndev)
        struct sh_eth_private *mdp = netdev_priv(ndev);
 
        /* reset device */
-       sh_eth_tsu_write(mdp, ARSTR_ARSTR, ARSTR);
+       sh_eth_tsu_write(mdp, ARSTR_ARST, ARSTR);
        mdelay(1);
 }
 
@@ -537,11 +537,7 @@ static struct sh_eth_cpu_data r7s72100_data = {
 
 static void sh_eth_chip_reset_r8a7740(struct net_device *ndev)
 {
-       struct sh_eth_private *mdp = netdev_priv(ndev);
-
-       /* reset device */
-       sh_eth_tsu_write(mdp, ARSTR_ARSTR, ARSTR);
-       mdelay(1);
+       sh_eth_chip_reset(ndev);
 
        sh_eth_select_mii(ndev);
 }
@@ -725,8 +721,8 @@ static struct sh_eth_cpu_data sh7757_data = {
 #define GIGA_MAHR(port)                (SH_GIGA_ETH_BASE + 0x800 * (port) + 0x05c0)
 static void sh_eth_chip_reset_giga(struct net_device *ndev)
 {
-       int i;
        u32 mahr[2], malr[2];
+       int i;
 
        /* save MAHR and MALR */
        for (i = 0; i < 2; i++) {
@@ -734,9 +730,7 @@ static void sh_eth_chip_reset_giga(struct net_device *ndev)
                mahr[i] = ioread32((void *)GIGA_MAHR(i));
        }
 
-       /* reset device */
-       iowrite32(ARSTR_ARSTR, (void *)(SH_GIGA_ETH_BASE + 0x1800));
-       mdelay(1);
+       sh_eth_chip_reset(ndev);
 
        /* restore MAHR and MALR */
        for (i = 0; i < 2; i++) {
@@ -899,7 +893,7 @@ static int sh_eth_check_reset(struct net_device *ndev)
        int cnt = 100;
 
        while (cnt > 0) {
-               if (!(sh_eth_read(ndev, EDMR) & 0x3))
+               if (!(sh_eth_read(ndev, EDMR) & EDMR_SRST_GETHER))
                        break;
                mdelay(1);
                cnt--;
@@ -1229,7 +1223,7 @@ ring_free:
        return -ENOMEM;
 }
 
-static int sh_eth_dev_init(struct net_device *ndev, bool start)
+static int sh_eth_dev_init(struct net_device *ndev)
 {
        struct sh_eth_private *mdp = netdev_priv(ndev);
        int ret;
@@ -1279,10 +1273,8 @@ static int sh_eth_dev_init(struct net_device *ndev, bool start)
                     RFLR);
 
        sh_eth_modify(ndev, EESR, 0, 0);
-       if (start) {
-               mdp->irq_enabled = true;
-               sh_eth_write(ndev, mdp->cd->eesipr_value, EESIPR);
-       }
+       mdp->irq_enabled = true;
+       sh_eth_write(ndev, mdp->cd->eesipr_value, EESIPR);
 
        /* PAUSE Prohibition */
        sh_eth_write(ndev, ECMR_ZPF | (mdp->duplex ? ECMR_DM : 0) |
@@ -1295,8 +1287,7 @@ static int sh_eth_dev_init(struct net_device *ndev, bool start)
        sh_eth_write(ndev, mdp->cd->ecsr_value, ECSR);
 
        /* E-MAC Interrupt Enable register */
-       if (start)
-               sh_eth_write(ndev, mdp->cd->ecsipr_value, ECSIPR);
+       sh_eth_write(ndev, mdp->cd->ecsipr_value, ECSIPR);
 
        /* Set MAC address */
        update_mac_address(ndev);
@@ -1309,10 +1300,8 @@ static int sh_eth_dev_init(struct net_device *ndev, bool start)
        if (mdp->cd->tpauser)
                sh_eth_write(ndev, TPAUSER_UNLIMITED, TPAUSER);
 
-       if (start) {
-               /* Setting the Rx mode will start the Rx process. */
-               sh_eth_write(ndev, EDRRR_R, EDRRR);
-       }
+       /* Setting the Rx mode will start the Rx process. */
+       sh_eth_write(ndev, EDRRR_R, EDRRR);
 
        return ret;
 }
@@ -2194,7 +2183,7 @@ static int sh_eth_set_ringparam(struct net_device *ndev,
                                   __func__);
                        return ret;
                }
-               ret = sh_eth_dev_init(ndev, true);
+               ret = sh_eth_dev_init(ndev);
                if (ret < 0) {
                        netdev_err(ndev, "%s: sh_eth_dev_init failed.\n",
                                   __func__);
@@ -2246,7 +2235,7 @@ static int sh_eth_open(struct net_device *ndev)
                goto out_free_irq;
 
        /* device init */
-       ret = sh_eth_dev_init(ndev, true);
+       ret = sh_eth_dev_init(ndev);
        if (ret)
                goto out_free_irq;
 
@@ -2299,7 +2288,7 @@ static void sh_eth_tx_timeout(struct net_device *ndev)
        }
 
        /* device init */
-       sh_eth_dev_init(ndev, true);
+       sh_eth_dev_init(ndev);
 
        netif_start_queue(ndev);
 }
index 8fa4ef3..c62380e 100644 (file)
@@ -394,7 +394,7 @@ enum RPADIR_BIT {
 #define DEFAULT_FDR_INIT       0x00000707
 
 /* ARSTR */
-enum ARSTR_BIT { ARSTR_ARSTR = 0x00000001, };
+enum ARSTR_BIT { ARSTR_ARST = 0x00000001, };
 
 /* TSU_FWEN0 */
 enum TSU_FWEN0_BIT {
index ca73366..c2bd537 100644 (file)
@@ -572,7 +572,7 @@ static inline int sgiseeq_reset(struct net_device *dev)
        if (err)
                return err;
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
 
        return 0;
@@ -648,7 +648,7 @@ static void timeout(struct net_device *dev)
        printk(KERN_NOTICE "%s: transmit timed out, resetting\n", dev->name);
        sgiseeq_reset(dev);
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
 }
 
index 98d33d4..1681084 100644 (file)
@@ -1920,6 +1920,10 @@ static int efx_ef10_alloc_rss_context(struct efx_nic *efx, u32 *context,
                return 0;
        }
 
+       if (nic_data->datapath_caps &
+           1 << MC_CMD_GET_CAPABILITIES_OUT_RX_RSS_LIMITED_LBN)
+               return -EOPNOTSUPP;
+
        MCDI_SET_DWORD(inbuf, RSS_CONTEXT_ALLOC_IN_UPSTREAM_PORT_ID,
                       nic_data->vport_id);
        MCDI_SET_DWORD(inbuf, RSS_CONTEXT_ALLOC_IN_TYPE, alloc_type);
@@ -2923,9 +2927,16 @@ static void efx_ef10_filter_push_prep(struct efx_nic *efx,
                                      bool replacing)
 {
        struct efx_ef10_nic_data *nic_data = efx->nic_data;
+       u32 flags = spec->flags;
 
        memset(inbuf, 0, MC_CMD_FILTER_OP_IN_LEN);
 
+       /* Remove RSS flag if we don't have an RSS context. */
+       if (flags & EFX_FILTER_FLAG_RX_RSS &&
+           spec->rss_context == EFX_FILTER_RSS_CONTEXT_DEFAULT &&
+           nic_data->rx_rss_context == EFX_EF10_RSS_CONTEXT_INVALID)
+               flags &= ~EFX_FILTER_FLAG_RX_RSS;
+
        if (replacing) {
                MCDI_SET_DWORD(inbuf, FILTER_OP_IN_OP,
                               MC_CMD_FILTER_OP_IN_OP_REPLACE);
@@ -2985,10 +2996,10 @@ static void efx_ef10_filter_push_prep(struct efx_nic *efx,
                       spec->dmaq_id == EFX_FILTER_RX_DMAQ_ID_DROP ?
                       0 : spec->dmaq_id);
        MCDI_SET_DWORD(inbuf, FILTER_OP_IN_RX_MODE,
-                      (spec->flags & EFX_FILTER_FLAG_RX_RSS) ?
+                      (flags & EFX_FILTER_FLAG_RX_RSS) ?
                       MC_CMD_FILTER_OP_IN_RX_MODE_RSS :
                       MC_CMD_FILTER_OP_IN_RX_MODE_SIMPLE);
-       if (spec->flags & EFX_FILTER_FLAG_RX_RSS)
+       if (flags & EFX_FILTER_FLAG_RX_RSS)
                MCDI_SET_DWORD(inbuf, FILTER_OP_IN_RX_CONTEXT,
                               spec->rss_context !=
                               EFX_FILTER_RSS_CONTEXT_DEFAULT ?
index 5eac523..aaa80f1 100644 (file)
@@ -708,7 +708,7 @@ static int meth_tx(struct sk_buff *skb, struct net_device *dev)
        mace->eth.dma_ctrl = priv->dma_ctrl;
 
        meth_add_to_tx_ring(priv, skb);
-       dev->trans_start = jiffies; /* save the timestamp */
+       netif_trans_update(dev); /* save the timestamp */
 
        /* If TX ring is full, tell the upper layer to stop sending packets */
        if (meth_tx_full(dev)) {
@@ -756,7 +756,7 @@ static void meth_tx_timeout(struct net_device *dev)
        /* Enable interrupt */
        spin_unlock_irqrestore(&priv->meth_lock, flags);
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
 }
 
index fd812d2..95001ee 100644 (file)
@@ -1575,7 +1575,7 @@ static void sis900_tx_timeout(struct net_device *net_dev)
 
        spin_unlock_irqrestore(&sis_priv->lock, flags);
 
-       net_dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(net_dev); /* prevent tx timeout */
 
        /* load Transmit Descriptor Register */
        sw32(txdp, sis_priv->tx_ring_dma);
index 443f1da..7186b89 100644 (file)
@@ -889,7 +889,7 @@ static void epic_tx_timeout(struct net_device *dev)
                ew32(COMMAND, TxQueued);
        }
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        dev->stats.tx_errors++;
        if (!ep->tx_full)
                netif_wake_queue(dev);
index a733868..cb49c96 100644 (file)
@@ -499,7 +499,7 @@ static void smc911x_hardware_send_pkt(struct net_device *dev)
        /* DMA complete IRQ will free buffer and set jiffies */
 #else
        SMC_PUSH_DATA(lp, buf, len);
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        dev_kfree_skb_irq(skb);
 #endif
        if (!lp->tx_throttle) {
@@ -1189,7 +1189,7 @@ smc911x_tx_dma_irq(void *data)
        DBG(SMC_DEBUG_TX | SMC_DEBUG_DMA, dev, "TX DMA irq handler\n");
        BUG_ON(skb == NULL);
        dma_unmap_single(NULL, tx_dmabuf, tx_dmalen, DMA_TO_DEVICE);
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        dev_kfree_skb_irq(skb);
        lp->current_tx_skb = NULL;
        if (lp->pending_tx_skb != NULL)
@@ -1283,7 +1283,7 @@ static void smc911x_timeout(struct net_device *dev)
                schedule_work(&lp->phy_configure);
 
        /* We can accept TX packets again */
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
 }
 
index 664f596..d496888 100644 (file)
@@ -663,7 +663,7 @@ static void smc_hardware_send_packet( struct net_device * dev )
        lp->saved_skb = NULL;
        dev_kfree_skb_any (skb);
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        /* we can send another packet */
        netif_wake_queue(dev);
@@ -1104,7 +1104,7 @@ static void smc_timeout(struct net_device *dev)
        /* "kick" the adaptor */
        smc_reset( dev->base_addr );
        smc_enable( dev->base_addr );
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        /* clear anything saved */
        ((struct smc_local *)netdev_priv(dev))->saved_skb = NULL;
        netif_wake_queue(dev);
index 3449893..db3c696 100644 (file)
@@ -1172,7 +1172,7 @@ static void smc_hardware_send_packet(struct net_device * dev)
 
     smc->saved_skb = NULL;
     dev_kfree_skb_irq(skb);
-    dev->trans_start = jiffies;
+    netif_trans_update(dev);
     netif_start_queue(dev);
 }
 
@@ -1187,7 +1187,7 @@ static void smc_tx_timeout(struct net_device *dev)
                  inw(ioaddr)&0xff, inw(ioaddr + 2));
     dev->stats.tx_errors++;
     smc_reset(dev);
-    dev->trans_start = jiffies; /* prevent tx timeout */
+    netif_trans_update(dev); /* prevent tx timeout */
     smc->saved_skb = NULL;
     netif_wake_queue(dev);
 }
index c5ed27c..18ac52d 100644 (file)
@@ -619,7 +619,7 @@ static void smc_hardware_send_pkt(unsigned long data)
        SMC_SET_MMU_CMD(lp, MC_ENQUEUE);
        smc_special_unlock(&lp->lock, flags);
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        dev->stats.tx_packets++;
        dev->stats.tx_bytes += len;
 
@@ -1364,7 +1364,7 @@ static void smc_timeout(struct net_device *dev)
                schedule_work(&lp->phy_configure);
 
        /* We can accept TX packets again */
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
 }
 
index 76d671e..cd9764a 100644 (file)
@@ -92,15 +92,6 @@ static int socfpga_dwmac_parse_data(struct socfpga_dwmac *dwmac, struct device *
        struct device_node *np_splitter;
        struct resource res_splitter;
 
-       dwmac->stmmac_rst = devm_reset_control_get(dev,
-                                                 STMMAC_RESOURCE_NAME);
-       if (IS_ERR(dwmac->stmmac_rst)) {
-               dev_info(dev, "Could not get reset control!\n");
-               if (PTR_ERR(dwmac->stmmac_rst) == -EPROBE_DEFER)
-                       return -EPROBE_DEFER;
-               dwmac->stmmac_rst = NULL;
-       }
-
        dwmac->interface = of_get_phy_mode(np);
 
        sys_mgr_base_addr = syscon_regmap_lookup_by_phandle(np, "altr,sysmgr-syscon");
@@ -145,7 +136,7 @@ static int socfpga_dwmac_parse_data(struct socfpga_dwmac *dwmac, struct device *
        return 0;
 }
 
-static int socfpga_dwmac_setup(struct socfpga_dwmac *dwmac)
+static int socfpga_dwmac_set_phy_mode(struct socfpga_dwmac *dwmac)
 {
        struct regmap *sys_mgr_base_addr = dwmac->sys_mgr_base_addr;
        int phymode = dwmac->interface;
@@ -174,6 +165,10 @@ static int socfpga_dwmac_setup(struct socfpga_dwmac *dwmac)
        if (dwmac->splitter_base)
                val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_GMII_MII;
 
+       /* Assert reset to the enet controller before changing the phy mode */
+       if (dwmac->stmmac_rst)
+               reset_control_assert(dwmac->stmmac_rst);
+
        regmap_read(sys_mgr_base_addr, reg_offset, &ctrl);
        ctrl &= ~(SYSMGR_EMACGRP_CTRL_PHYSEL_MASK << reg_shift);
        ctrl |= val << reg_shift;
@@ -191,64 +186,13 @@ static int socfpga_dwmac_setup(struct socfpga_dwmac *dwmac)
 
        regmap_write(sys_mgr_base_addr, reg_offset, ctrl);
 
-       return 0;
-}
-
-static void socfpga_dwmac_exit(struct platform_device *pdev, void *priv)
-{
-       struct socfpga_dwmac    *dwmac = priv;
-
-       /* On socfpga platform exit, assert and hold reset to the
-        * enet controller - the default state after a hard reset.
-        */
-       if (dwmac->stmmac_rst)
-               reset_control_assert(dwmac->stmmac_rst);
-}
-
-static int socfpga_dwmac_init(struct platform_device *pdev, void *priv)
-{
-       struct socfpga_dwmac    *dwmac = priv;
-       struct net_device *ndev = platform_get_drvdata(pdev);
-       struct stmmac_priv *stpriv = NULL;
-       int ret = 0;
-
-       if (ndev)
-               stpriv = netdev_priv(ndev);
-
-       /* Assert reset to the enet controller before changing the phy mode */
-       if (dwmac->stmmac_rst)
-               reset_control_assert(dwmac->stmmac_rst);
-
-       /* Setup the phy mode in the system manager registers according to
-        * devicetree configuration
-        */
-       ret = socfpga_dwmac_setup(dwmac);
-
        /* Deassert reset for the phy configuration to be sampled by
         * the enet controller, and operation to start in requested mode
         */
        if (dwmac->stmmac_rst)
                reset_control_deassert(dwmac->stmmac_rst);
 
-       /* Before the enet controller is suspended, the phy is suspended.
-        * This causes the phy clock to be gated. The enet controller is
-        * resumed before the phy, so the clock is still gated "off" when
-        * the enet controller is resumed. This code makes sure the phy
-        * is "resumed" before reinitializing the enet controller since
-        * the enet controller depends on an active phy clock to complete
-        * a DMA reset. A DMA reset will "time out" if executed
-        * with no phy clock input on the Synopsys enet controller.
-        * Verified through Synopsys Case #8000711656.
-        *
-        * Note that the phy clock is also gated when the phy is isolated.
-        * Phy "suspend" and "isolate" controls are located in phy basic
-        * control register 0, and can be modified by the phy driver
-        * framework.
-        */
-       if (stpriv && stpriv->phydev)
-               phy_resume(stpriv->phydev);
-
-       return ret;
+       return 0;
 }
 
 static int socfpga_dwmac_probe(struct platform_device *pdev)
@@ -278,17 +222,57 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
        }
 
        plat_dat->bsp_priv = dwmac;
-       plat_dat->init = socfpga_dwmac_init;
-       plat_dat->exit = socfpga_dwmac_exit;
        plat_dat->fix_mac_speed = socfpga_dwmac_fix_mac_speed;
 
-       ret = socfpga_dwmac_init(pdev, plat_dat->bsp_priv);
-       if (ret)
-               return ret;
+       ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+       if (!ret) {
+               struct net_device *ndev = platform_get_drvdata(pdev);
+               struct stmmac_priv *stpriv = netdev_priv(ndev);
 
-       return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+               /* The socfpga driver needs to control the stmmac reset to
+                * set the phy mode. Create a copy of the core reset handel
+                * set the phy mode. Create a copy of the core reset handle
+                */
+               dwmac->stmmac_rst = stpriv->stmmac_rst;
+
+               ret = socfpga_dwmac_set_phy_mode(dwmac);
+       }
+
+       return ret;
 }
 
+#ifdef CONFIG_PM_SLEEP
+static int socfpga_dwmac_resume(struct device *dev)
+{
+       struct net_device *ndev = dev_get_drvdata(dev);
+       struct stmmac_priv *priv = netdev_priv(ndev);
+
+       socfpga_dwmac_set_phy_mode(priv->plat->bsp_priv);
+
+       /* Before the enet controller is suspended, the phy is suspended.
+        * This causes the phy clock to be gated. The enet controller is
+        * resumed before the phy, so the clock is still gated "off" when
+        * the enet controller is resumed. This code makes sure the phy
+        * is "resumed" before reinitializing the enet controller since
+        * the enet controller depends on an active phy clock to complete
+        * a DMA reset. A DMA reset will "time out" if executed
+        * with no phy clock input on the Synopsys enet controller.
+        * Verified through Synopsys Case #8000711656.
+        *
+        * Note that the phy clock is also gated when the phy is isolated.
+        * Phy "suspend" and "isolate" controls are located in phy basic
+        * control register 0, and can be modified by the phy driver
+        * framework.
+        */
+       if (priv->phydev)
+               phy_resume(priv->phydev);
+
+       return stmmac_resume(dev);
+}
+#endif /* CONFIG_PM_SLEEP */
+
+SIMPLE_DEV_PM_OPS(socfpga_dwmac_pm_ops, stmmac_suspend, socfpga_dwmac_resume);
+
 static const struct of_device_id socfpga_dwmac_match[] = {
        { .compatible = "altr,socfpga-stmmac" },
        { }
@@ -300,7 +284,7 @@ static struct platform_driver socfpga_dwmac_driver = {
        .remove = stmmac_pltfr_remove,
        .driver = {
                .name           = "socfpga-dwmac",
-               .pm             = &stmmac_pltfr_pm_ops,
+               .pm             = &socfpga_dwmac_pm_ops,
                .of_match_table = socfpga_dwmac_match,
        },
 };
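
For context, the SIMPLE_DEV_PM_OPS() line above roughly expands to the following (a sketch, assuming CONFIG_PM_SLEEP is enabled; with it disabled the struct is empty). This is why the driver can swap in its own resume callback while still reusing the core suspend path:

	const struct dev_pm_ops socfpga_dwmac_pm_ops = {
		.suspend  = stmmac_suspend,
		.resume   = socfpga_dwmac_resume,
		.freeze   = stmmac_suspend,
		.thaw     = socfpga_dwmac_resume,
		.poweroff = stmmac_suspend,
		.restore  = socfpga_dwmac_resume,
	};
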
index ff67506..59ae608 100644 (file)
@@ -148,9 +148,9 @@ void stmmac_set_ethtool_ops(struct net_device *netdev);
 
 int stmmac_ptp_register(struct stmmac_priv *priv);
 void stmmac_ptp_unregister(struct stmmac_priv *priv);
-int stmmac_resume(struct net_device *ndev);
-int stmmac_suspend(struct net_device *ndev);
-int stmmac_dvr_remove(struct net_device *ndev);
+int stmmac_resume(struct device *dev);
+int stmmac_suspend(struct device *dev);
+int stmmac_dvr_remove(struct device *dev);
 int stmmac_dvr_probe(struct device *device,
                     struct plat_stmmacenet_data *plat_dat,
                     struct stmmac_resources *res);
index b87edb7..fd5ab7b 100644 (file)
@@ -3350,12 +3350,13 @@ EXPORT_SYMBOL_GPL(stmmac_dvr_probe);
 
 /**
  * stmmac_dvr_remove
- * @ndev: net device pointer
+ * @dev: device pointer
  * Description: this function resets the TX/RX processes, disables the MAC RX/TX,
  * changes the link status and releases the DMA descriptor rings.
  */
-int stmmac_dvr_remove(struct net_device *ndev)
+int stmmac_dvr_remove(struct device *dev)
 {
+       struct net_device *ndev = dev_get_drvdata(dev);
        struct stmmac_priv *priv = netdev_priv(ndev);
 
        pr_info("%s:\n\tremoving driver", __func__);
@@ -3381,13 +3382,14 @@ EXPORT_SYMBOL_GPL(stmmac_dvr_remove);
 
 /**
  * stmmac_suspend - suspend callback
- * @ndev: net device pointer
+ * @dev: device pointer
  * Description: this is the function to suspend the device and it is called
  * by the platform driver to stop the network queue, release the resources,
  * program the PMT register (for WoL), clean and release driver resources.
  */
-int stmmac_suspend(struct net_device *ndev)
+int stmmac_suspend(struct device *dev)
 {
+       struct net_device *ndev = dev_get_drvdata(dev);
        struct stmmac_priv *priv = netdev_priv(ndev);
        unsigned long flags;
 
@@ -3430,12 +3432,13 @@ EXPORT_SYMBOL_GPL(stmmac_suspend);
 
 /**
  * stmmac_resume - resume callback
- * @ndev: net device pointer
+ * @dev: device pointer
  * Description: when resume this function is invoked to setup the DMA and CORE
  * in a usable state.
  */
-int stmmac_resume(struct net_device *ndev)
+int stmmac_resume(struct device *dev)
 {
+       struct net_device *ndev = dev_get_drvdata(dev);
        struct stmmac_priv *priv = netdev_priv(ndev);
        unsigned long flags;
 
index 06704ca..3f83c36 100644 (file)
 #define MII_BUSY 0x00000001
 #define MII_WRITE 0x00000002
 
+/* GMAC4 defines */
+#define MII_GMAC4_GOC_SHIFT            2
+#define MII_GMAC4_WRITE                        (1 << MII_GMAC4_GOC_SHIFT)
+#define MII_GMAC4_READ                 (3 << MII_GMAC4_GOC_SHIFT)
+
+#define MII_PHY_ADDR_GMAC4_SHIFT       21
+#define MII_PHY_ADDR_GMAC4_MASK                GENMASK(25, 21)
+#define MII_PHY_REG_GMAC4_SHIFT                16
+#define MII_PHY_REG_GMAC4_MASK         GENMASK(20, 16)
+#define MII_CSR_CLK_GMAC4_SHIFT                8
+#define MII_CSR_CLK_GMAC4_MASK         GENMASK(11, 8)
+
 static int stmmac_mdio_busy_wait(void __iomem *ioaddr, unsigned int mii_addr)
 {
        unsigned long curr;
@@ -123,6 +135,80 @@ static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
        return stmmac_mdio_busy_wait(priv->ioaddr, mii_address);
 }
 
+/**
+ * stmmac_mdio_read_gmac4
+ * @bus: points to the mii_bus structure
+ * @phyaddr: MII addr reg bits 25-21
+ * @phyreg: MII addr reg bits 20-16
+ * Description: it reads data from the GMAC4 MII register of the given
+ * phy device.
+ */
+static int stmmac_mdio_read_gmac4(struct mii_bus *bus, int phyaddr, int phyreg)
+{
+       struct net_device *ndev = bus->priv;
+       struct stmmac_priv *priv = netdev_priv(ndev);
+       unsigned int mii_address = priv->hw->mii.addr;
+       unsigned int mii_data = priv->hw->mii.data;
+       int data;
+       u32 value = (((phyaddr << MII_PHY_ADDR_GMAC4_SHIFT) &
+                    (MII_PHY_ADDR_GMAC4_MASK)) |
+                    ((phyreg << MII_PHY_REG_GMAC4_SHIFT) &
+                    (MII_PHY_REG_GMAC4_MASK))) | MII_GMAC4_READ;
+
+       value |= MII_BUSY | ((priv->clk_csr & MII_CSR_CLK_GMAC4_MASK)
+                << MII_CSR_CLK_GMAC4_SHIFT);
+
+       if (stmmac_mdio_busy_wait(priv->ioaddr, mii_address))
+               return -EBUSY;
+
+       writel(value, priv->ioaddr + mii_address);
+
+       if (stmmac_mdio_busy_wait(priv->ioaddr, mii_address))
+               return -EBUSY;
+
+       /* Read the data from the MII data register */
+       data = (int)readl(priv->ioaddr + mii_data);
+
+       return data;
+}
+
+/**
+ * stmmac_mdio_write_gmac4
+ * @bus: points to the mii_bus structure
+ * @phyaddr: MII addr reg bits 25-21
+ * @phyreg: MII addr reg bits 20-16
+ * @phydata: phy data
+ * Description: it writes the data into the GMAC4 MII register of the
+ * given phy device.
+ */
+static int stmmac_mdio_write_gmac4(struct mii_bus *bus, int phyaddr, int phyreg,
+                                  u16 phydata)
+{
+       struct net_device *ndev = bus->priv;
+       struct stmmac_priv *priv = netdev_priv(ndev);
+       unsigned int mii_address = priv->hw->mii.addr;
+       unsigned int mii_data = priv->hw->mii.data;
+
+       u32 value = (((phyaddr << MII_PHY_ADDR_GMAC4_SHIFT) &
+                    (MII_PHY_ADDR_GMAC4_MASK)) |
+                    ((phyreg << MII_PHY_REG_GMAC4_SHIFT) &
+                    (MII_PHY_REG_GMAC4_MASK))) | MII_GMAC4_WRITE;
+
+       value |= MII_BUSY | ((priv->clk_csr & MII_CSR_CLK_GMAC4_MASK)
+                << MII_CSR_CLK_GMAC4_SHIFT);
+
+       /* Wait until any existing MII operation is complete */
+       if (stmmac_mdio_busy_wait(priv->ioaddr, mii_address))
+               return -EBUSY;
+
+       /* Set the MII address register to write */
+       writel(phydata, priv->ioaddr + mii_data);
+       writel(value, priv->ioaddr + mii_address);
+
+       /* Wait until any existing MII operation is complete */
+       return stmmac_mdio_busy_wait(priv->ioaddr, mii_address);
+}
+
 /**
  * stmmac_mdio_reset
  * @bus: points to the mii_bus structure
@@ -180,9 +266,11 @@ int stmmac_mdio_reset(struct mii_bus *bus)
 
        /* This is a workaround for problems with the STE101P PHY.
         * It doesn't complete its reset until at least one clock cycle
-        * on MDC, so perform a dummy mdio read.
+        * on MDC, so perform a dummy mdio read. To be updated for GMAC4
+        * if needed.
         */
-       writel(0, priv->ioaddr + mii_address);
+       if (!priv->plat->has_gmac4)
+               writel(0, priv->ioaddr + mii_address);
 #endif
        return 0;
 }
@@ -217,8 +305,14 @@ int stmmac_mdio_register(struct net_device *ndev)
 #endif
 
        new_bus->name = "stmmac";
-       new_bus->read = &stmmac_mdio_read;
-       new_bus->write = &stmmac_mdio_write;
+       if (priv->plat->has_gmac4) {
+               new_bus->read = &stmmac_mdio_read_gmac4;
+               new_bus->write = &stmmac_mdio_write_gmac4;
+       } else {
+               new_bus->read = &stmmac_mdio_read;
+               new_bus->write = &stmmac_mdio_write;
+       }
+
        new_bus->reset = &stmmac_mdio_reset;
        snprintf(new_bus->id, MII_BUS_ID_SIZE, "%s-%x",
                 new_bus->name, priv->plat->bus_id);
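
A worked example of the GMAC4 MDIO address word assembled by the new helpers, for hypothetical values phyaddr = 1 and phyreg = 2 on a read (the CSR clock field is left out for brevity):

	/*
	 *   (1 << MII_PHY_ADDR_GMAC4_SHIFT) & MASK = 0x00200000  PA,  bits 25:21
	 *   (2 << MII_PHY_REG_GMAC4_SHIFT)  & MASK = 0x00020000  RDA, bits 20:16
	 *   MII_GMAC4_READ (3 << 2)                = 0x0000000c  GOC, read op
	 *   MII_BUSY                               = 0x00000001  busy/start bit
	 *   OR'd together                          = 0x0022040d
	 */
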
index ae43887..56c8a23 100644 (file)
@@ -231,30 +231,10 @@ static int stmmac_pci_probe(struct pci_dev *pdev,
  */
 static void stmmac_pci_remove(struct pci_dev *pdev)
 {
-       struct net_device *ndev = pci_get_drvdata(pdev);
-
-       stmmac_dvr_remove(ndev);
-}
-
-#ifdef CONFIG_PM_SLEEP
-static int stmmac_pci_suspend(struct device *dev)
-{
-       struct pci_dev *pdev = to_pci_dev(dev);
-       struct net_device *ndev = pci_get_drvdata(pdev);
-
-       return stmmac_suspend(ndev);
-}
-
-static int stmmac_pci_resume(struct device *dev)
-{
-       struct pci_dev *pdev = to_pci_dev(dev);
-       struct net_device *ndev = pci_get_drvdata(pdev);
-
-       return stmmac_resume(ndev);
+       stmmac_dvr_remove(&pdev->dev);
 }
-#endif
 
-static SIMPLE_DEV_PM_OPS(stmmac_pm_ops, stmmac_pci_suspend, stmmac_pci_resume);
+static SIMPLE_DEV_PM_OPS(stmmac_pm_ops, stmmac_suspend, stmmac_resume);
 
 #define STMMAC_VENDOR_ID 0x700
 #define STMMAC_QUARK_ID  0x0937
index effaa4f..409db91 100644 (file)
@@ -386,7 +386,7 @@ int stmmac_pltfr_remove(struct platform_device *pdev)
 {
        struct net_device *ndev = platform_get_drvdata(pdev);
        struct stmmac_priv *priv = netdev_priv(ndev);
-       int ret = stmmac_dvr_remove(ndev);
+       int ret = stmmac_dvr_remove(&pdev->dev);
 
        if (priv->plat->exit)
                priv->plat->exit(pdev, priv->plat->bsp_priv);
@@ -410,7 +410,7 @@ static int stmmac_pltfr_suspend(struct device *dev)
        struct stmmac_priv *priv = netdev_priv(ndev);
        struct platform_device *pdev = to_platform_device(dev);
 
-       ret = stmmac_suspend(ndev);
+       ret = stmmac_suspend(dev);
        if (priv->plat->exit)
                priv->plat->exit(pdev, priv->plat->bsp_priv);
 
@@ -433,7 +433,7 @@ static int stmmac_pltfr_resume(struct device *dev)
        if (priv->plat->init)
                priv->plat->init(pdev, priv->plat->bsp_priv);
 
-       return stmmac_resume(ndev);
+       return stmmac_resume(dev);
 }
 #endif /* CONFIG_PM_SLEEP */
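
The signature change pays off at the bus glue level: both the PCI and platform wrappers can now hand the core callbacks straight to the PM framework. A minimal sketch (illustrative name) of the shape every such callback now shares:

	static int stmmac_pm_callback_sketch(struct device *dev)
	{
		struct net_device *ndev = dev_get_drvdata(dev);
		struct stmmac_priv *priv = netdev_priv(ndev);

		/* operate on ndev/priv regardless of the underlying bus */
		return 0;
	}
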
 
index 9cc4564..a2371aa 100644 (file)
@@ -6431,7 +6431,7 @@ static int niu_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
 
 static void niu_netif_stop(struct niu *np)
 {
-       np->dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(np->dev);    /* prevent tx timeout */
 
        niu_disable_napi(np);
 
index 2437227..d6ad0fb 100644 (file)
@@ -226,7 +226,7 @@ static void gem_put_cell(struct gem *gp)
 
 static inline void gem_netif_stop(struct gem *gp)
 {
-       gp->dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(gp->dev);    /* prevent tx timeout */
        napi_disable(&gp->napi);
        netif_tx_disable(gp->dev);
 }
index af11ed1..158213c 100644 (file)
@@ -949,7 +949,7 @@ static void dwceqos_adjust_link(struct net_device *ndev)
 
        if (status_change) {
                if (phydev->link) {
-                       lp->ndev->trans_start = jiffies;
+                       netif_trans_update(lp->ndev);
                        dwceqos_link_up(lp);
                } else {
                        dwceqos_link_down(lp);
@@ -2203,7 +2203,7 @@ static int dwceqos_start_xmit(struct sk_buff *skb, struct net_device *ndev)
        netdev_sent_queue(ndev, skb->len);
        spin_unlock_bh(&lp->tx_lock);
 
-       ndev->trans_start = jiffies;
+       netif_trans_update(ndev);
        return 0;
 
 tx_error:
index 14c9d1b..7452b5f 100644 (file)
@@ -1610,7 +1610,6 @@ static inline int bdx_tx_space(struct bdx_priv *priv)
  * o NETDEV_TX_BUSY Cannot transmit packet, try later
  *   Usually a bug, means queue start/stop flow control is broken in
  *   the driver. Note: the driver must NOT put the skb in its DMA ring.
- * o NETDEV_TX_LOCKED Locking failed, please retry quickly.
  */
 static netdev_tx_t bdx_tx_transmit(struct sk_buff *skb,
                                   struct net_device *ndev)
@@ -1630,12 +1629,7 @@ static netdev_tx_t bdx_tx_transmit(struct sk_buff *skb,
 
        ENTER;
        local_irq_save(flags);
-       if (!spin_trylock(&priv->tx_lock)) {
-               local_irq_restore(flags);
-               DBG("%s[%s]: TX locked, returning NETDEV_TX_LOCKED\n",
-                   BDX_DRV_NAME, ndev->name);
-               return NETDEV_TX_LOCKED;
-       }
+       spin_lock(&priv->tx_lock);
 
        /* build tx descriptor */
        BDX_ASSERT(f->m.wptr >= f->m.memsz);    /* started with valid wptr */
@@ -1707,7 +1701,7 @@ static netdev_tx_t bdx_tx_transmit(struct sk_buff *skb,
 
 #endif
 #ifdef BDX_LLTX
-       ndev->trans_start = jiffies; /* NETIF_F_LLTX driver :( */
+       netif_trans_update(ndev); /* NETIF_F_LLTX driver :( */
 #endif
        ndev->stats.tx_packets++;
        ndev->stats.tx_bytes += skb->len;
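
With NETDEV_TX_LOCKED removed from the core, the trylock-and-bail pattern above has no return code left to report, so the LLTX xmit path simply takes the lock. A condensed sketch of the resulting locking:

	local_irq_save(flags);
	spin_lock(&priv->tx_lock);
	/* ... build the tx descriptor and kick the hardware ... */
	spin_unlock(&priv->tx_lock);
	local_irq_restore(flags);
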
index 0fa75a8..4b08a2f 100644 (file)
@@ -367,7 +367,6 @@ struct cpsw_priv {
        spinlock_t                      lock;
        struct platform_device          *pdev;
        struct net_device               *ndev;
-       struct device_node              *phy_node;
        struct napi_struct              napi_rx;
        struct napi_struct              napi_tx;
        struct device                   *dev;
@@ -1142,25 +1141,34 @@ static void cpsw_slave_open(struct cpsw_slave *slave, struct cpsw_priv *priv)
                cpsw_ale_add_mcast(priv->ale, priv->ndev->broadcast,
                                   1 << slave_port, 0, 0, ALE_MCAST_FWD_2);
 
-       if (priv->phy_node)
-               slave->phy = of_phy_connect(priv->ndev, priv->phy_node,
+       if (slave->data->phy_node) {
+               slave->phy = of_phy_connect(priv->ndev, slave->data->phy_node,
                                 &cpsw_adjust_link, 0, slave->data->phy_if);
-       else
+               if (!slave->phy) {
+                       dev_err(priv->dev, "phy \"%s\" not found on slave %d\n",
+                               slave->data->phy_node->full_name,
+                               slave->slave_num);
+                       return;
+               }
+       } else {
                slave->phy = phy_connect(priv->ndev, slave->data->phy_id,
                                 &cpsw_adjust_link, slave->data->phy_if);
-       if (IS_ERR(slave->phy)) {
-               dev_err(priv->dev, "phy %s not found on slave %d\n",
-                       slave->data->phy_id, slave->slave_num);
-               slave->phy = NULL;
-       } else {
-               phy_attached_info(slave->phy);
+               if (IS_ERR(slave->phy)) {
+                       dev_err(priv->dev,
+                               "phy \"%s\" not found on slave %d, err %ld\n",
+                               slave->data->phy_id, slave->slave_num,
+                               PTR_ERR(slave->phy));
+                       slave->phy = NULL;
+                       return;
+               }
+       }
 
-               phy_start(slave->phy);
+       phy_attached_info(slave->phy);
 
-               /* Configure GMII_SEL register */
-               cpsw_phy_sel(&priv->pdev->dev, slave->phy->interface,
-                            slave->slave_num);
-       }
+       phy_start(slave->phy);
+
+       /* Configure GMII_SEL register */
+       cpsw_phy_sel(&priv->pdev->dev, slave->phy->interface, slave->slave_num);
 }
 
 static inline void cpsw_add_default_vlan(struct cpsw_priv *priv)
@@ -1381,7 +1389,7 @@ static netdev_tx_t cpsw_ndo_start_xmit(struct sk_buff *skb,
        struct cpsw_priv *priv = netdev_priv(ndev);
        int ret;
 
-       ndev->trans_start = jiffies;
+       netif_trans_update(ndev);
 
        if (skb_padto(skb, CPSW_MIN_PACKET_SIZE)) {
                cpsw_err(priv, tx_err, "packet pad failed\n");
@@ -1932,12 +1940,11 @@ static void cpsw_slave_init(struct cpsw_slave *slave, struct cpsw_priv *priv,
        slave->port_vlan = data->dual_emac_res_vlan;
 }
 
-static int cpsw_probe_dt(struct cpsw_priv *priv,
+static int cpsw_probe_dt(struct cpsw_platform_data *data,
                         struct platform_device *pdev)
 {
        struct device_node *node = pdev->dev.of_node;
        struct device_node *slave_node;
-       struct cpsw_platform_data *data = &priv->data;
        int i = 0, ret;
        u32 prop;
 
@@ -2025,25 +2032,21 @@ static int cpsw_probe_dt(struct cpsw_priv *priv,
                if (strcmp(slave_node->name, "slave"))
                        continue;
 
-               priv->phy_node = of_parse_phandle(slave_node, "phy-handle", 0);
+               slave_data->phy_node = of_parse_phandle(slave_node,
+                                                       "phy-handle", 0);
                parp = of_get_property(slave_node, "phy_id", &lenp);
-               if (of_phy_is_fixed_link(slave_node)) {
-                       struct device_node *phy_node;
-                       struct phy_device *phy_dev;
-
+               if (slave_data->phy_node) {
+                       dev_dbg(&pdev->dev,
+                               "slave[%d] using phy-handle=\"%s\"\n",
+                               i, slave_data->phy_node->full_name);
+               } else if (of_phy_is_fixed_link(slave_node)) {
                        /* In the case of a fixed PHY, the DT node associated
                         * to the PHY is the Ethernet MAC DT node.
                         */
                        ret = of_phy_register_fixed_link(slave_node);
                        if (ret)
                                return ret;
-                       phy_node = of_node_get(slave_node);
-                       phy_dev = of_phy_find_device(phy_node);
-                       if (!phy_dev)
-                               return -ENODEV;
-                       snprintf(slave_data->phy_id, sizeof(slave_data->phy_id),
-                                PHY_ID_FMT, phy_dev->mdio.bus->id,
-                                phy_dev->mdio.addr);
+                       slave_data->phy_node = of_node_get(slave_node);
                } else if (parp) {
                        u32 phyid;
                        struct device_node *mdio_node;
@@ -2064,7 +2067,9 @@ static int cpsw_probe_dt(struct cpsw_priv *priv,
                        snprintf(slave_data->phy_id, sizeof(slave_data->phy_id),
                                 PHY_ID_FMT, mdio->name, phyid);
                } else {
-                       dev_err(&pdev->dev, "No slave[%d] phy_id or fixed-link property\n", i);
+                       dev_err(&pdev->dev,
+                               "No slave[%d] phy_id, phy-handle, or fixed-link property\n",
+                               i);
                        goto no_phy_slave;
                }
                slave_data->phy_if = of_get_phy_mode(slave_node);
@@ -2266,7 +2271,7 @@ static int cpsw_probe(struct platform_device *pdev)
        /* Select default pin state */
        pinctrl_pm_select_default_state(&pdev->dev);
 
-       if (cpsw_probe_dt(priv, pdev)) {
+       if (cpsw_probe_dt(&priv->data, pdev)) {
                dev_err(&pdev->dev, "cpsw: platform data missing\n");
                ret = -ENODEV;
                goto clean_runtime_disable_ret;
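
A hypothetical devicetree fragment (names illustrative) exercising the new per-slave lookup order, where "phy-handle" takes precedence over fixed-link and the legacy "phy_id" property:

	slave@200 {
		phy-handle = <&ethphy0>;
		phy-mode = "rgmii-txid";
	};
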
index 442a703..e50afd1 100644 (file)
@@ -18,6 +18,7 @@
 #include <linux/phy.h>
 
 struct cpsw_slave_data {
+       struct device_node *phy_node;
        char            phy_id[MII_BUS_ID_SIZE];
        int             phy_if;
        u8              mac_addr[ETH_ALEN];
index 58d58f0..f56d66e 100644 (file)
@@ -1512,7 +1512,10 @@ static int emac_devioctl(struct net_device *ndev, struct ifreq *ifrq, int cmd)
 
        /* TODO: Add phy read and write and private statistics get feature */
 
-       return phy_mii_ioctl(priv->phydev, ifrq, cmd);
+       if (priv->phydev)
+               return phy_mii_ioctl(priv->phydev, ifrq, cmd);
+       else
+               return -EOPNOTSUPP;
 }
 
 static int match_first_device(struct device *dev, void *data)
index 1d0942c..3251666 100644 (file)
@@ -1272,7 +1272,7 @@ static int netcp_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev)
        if (ret)
                goto drop;
 
-       ndev->trans_start = jiffies;
+       netif_trans_update(ndev);
 
        /* Check Tx pool count & stop subqueue if needed */
        desc_count = knav_pool_count(netcp->tx_pool);
@@ -1788,7 +1788,7 @@ static void netcp_ndo_tx_timeout(struct net_device *ndev)
 
        dev_err(netcp->ndev_dev, "transmit timed out tx descs(%d)\n", descs);
        netcp_process_tx_compl_packets(netcp, netcp->tx_pool_size);
-       ndev->trans_start = jiffies;
+       netif_trans_update(ndev);
        netif_tx_wake_all_queues(ndev);
 }
 
index a274cd4..5617033 100644 (file)
@@ -1007,7 +1007,7 @@ static void tlan_tx_timeout(struct net_device *dev)
        tlan_reset_lists(dev);
        tlan_read_and_clear_stats(dev, TLAN_IGNORE);
        tlan_reset_adapter(dev);
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
 
 }
index 298e059..922a443 100644 (file)
@@ -1883,7 +1883,7 @@ static int tile_net_tx(struct sk_buff *skb, struct net_device *dev)
 
 
        /* Save the timestamp. */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
 
 #ifdef TILE_NET_PARANOIA
@@ -2026,7 +2026,7 @@ static void tile_net_tx_timeout(struct net_device *dev)
 {
        PDEBUG("tile_net_tx_timeout()\n");
        PDEBUG("Transmit timeout at %ld, latency %ld\n", jiffies,
-              jiffies - dev->trans_start);
+              jiffies - dev_trans_start(dev));
 
        /* XXX: ISSUE: This doesn't seem useful for us. */
        netif_wake_queue(dev);
index 13214a6..743b182 100644 (file)
@@ -1622,7 +1622,7 @@ static void gelic_wl_scan_complete_event(struct gelic_wl_info *wl)
                        continue;
 
                /* copy hw scan info */
-               memcpy(target->hwinfo, scan_info, scan_info->size);
+               memcpy(target->hwinfo, scan_info, be16_to_cpu(scan_info->size));
                target->essid_len = strnlen(scan_info->essid,
                                            sizeof(scan_info->essid));
                target->rate_len = 0;
index 6761027..36a6e8b 100644 (file)
@@ -705,7 +705,7 @@ spider_net_prepare_tx_descr(struct spider_net_card *card,
        wmb();
        descr->prev->hwdescr->next_descr_addr = descr->bus_addr;
 
-       card->netdev->trans_start = jiffies; /* set netdev watchdog timer */
+       netif_trans_update(card->netdev); /* set netdev watchdog timer */
        return 0;
 }
 
index 520cf50..01a7714 100644 (file)
@@ -1314,7 +1314,8 @@ static int tsi108_open(struct net_device *dev)
        data->txring = dma_zalloc_coherent(NULL, txring_size, &data->txdma,
                                           GFP_KERNEL);
        if (!data->txring) {
-               pci_free_consistent(0, rxring_size, data->rxring, data->rxdma);
+               pci_free_consistent(NULL, rxring_size, data->rxring,
+                                   data->rxdma);
                return -ENOMEM;
        }
 
index 2b7550c..9d14731 100644 (file)
@@ -1758,7 +1758,7 @@ static void rhine_reset_task(struct work_struct *work)
 
        spin_unlock_bh(&rp->lock);
 
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        dev->stats.tx_errors++;
        netif_wake_queue(dev);
 
index f3385a1..1981e88 100644 (file)
@@ -70,7 +70,7 @@ config WIZNET_BUS_ANY
 endchoice
 
 config WIZNET_W5100_SPI
-       tristate "WIZnet W5100/W5200 Ethernet support for SPI mode"
+       tristate "WIZnet W5100/W5200/W5500 Ethernet support for SPI mode"
        depends on WIZNET_BUS_ANY && WIZNET_W5100
        depends on SPI
        ---help---
index 598a7b0..b868e45 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Ethernet driver for the WIZnet W5100/W5200 chip.
+ * Ethernet driver for the WIZnet W5100/W5200/W5500 chip.
  *
  * Copyright (C) 2016 Akinobu Mita <akinobu.mita@gmail.com>
  *
@@ -8,6 +8,7 @@
  * Datasheet:
  * http://www.wiznet.co.kr/wp-content/uploads/wiznethome/Chip/W5100/Document/W5100_Datasheet_v1.2.6.pdf
  * http://wiznethome.cafe24.com/wp-content/uploads/wiznethome/Chip/W5200/Documents/W5200_DS_V140E.pdf
+ * http://wizwiki.net/wiki/lib/exe/fetch.php?media=products:w5500:w5500_ds_v106e_141230.pdf
  */
 
 #include <linux/kernel.h>
@@ -21,7 +22,7 @@
 #define W5100_SPI_WRITE_OPCODE 0xf0
 #define W5100_SPI_READ_OPCODE 0x0f
 
-static int w5100_spi_read(struct net_device *ndev, u16 addr)
+static int w5100_spi_read(struct net_device *ndev, u32 addr)
 {
        struct spi_device *spi = to_spi_device(ndev->dev.parent);
        u8 cmd[3] = { W5100_SPI_READ_OPCODE, addr >> 8, addr & 0xff };
@@ -33,7 +34,7 @@ static int w5100_spi_read(struct net_device *ndev, u16 addr)
        return ret ? ret : data;
 }
 
-static int w5100_spi_write(struct net_device *ndev, u16 addr, u8 data)
+static int w5100_spi_write(struct net_device *ndev, u32 addr, u8 data)
 {
        struct spi_device *spi = to_spi_device(ndev->dev.parent);
        u8 cmd[4] = { W5100_SPI_WRITE_OPCODE, addr >> 8, addr & 0xff, data};
@@ -41,7 +42,7 @@ static int w5100_spi_write(struct net_device *ndev, u16 addr, u8 data)
        return spi_write_then_read(spi, cmd, sizeof(cmd), NULL, 0);
 }
 
-static int w5100_spi_read16(struct net_device *ndev, u16 addr)
+static int w5100_spi_read16(struct net_device *ndev, u32 addr)
 {
        u16 data;
        int ret;
@@ -55,7 +56,7 @@ static int w5100_spi_read16(struct net_device *ndev, u16 addr)
        return ret < 0 ? ret : data | ret;
 }
 
-static int w5100_spi_write16(struct net_device *ndev, u16 addr, u16 data)
+static int w5100_spi_write16(struct net_device *ndev, u32 addr, u16 data)
 {
        int ret;
 
@@ -66,7 +67,7 @@ static int w5100_spi_write16(struct net_device *ndev, u16 addr, u16 data)
        return w5100_spi_write(ndev, addr + 1, data & 0xff);
 }
 
-static int w5100_spi_readbulk(struct net_device *ndev, u16 addr, u8 *buf,
+static int w5100_spi_readbulk(struct net_device *ndev, u32 addr, u8 *buf,
                              int len)
 {
        int i;
@@ -82,7 +83,7 @@ static int w5100_spi_readbulk(struct net_device *ndev, u16 addr, u8 *buf,
        return 0;
 }
 
-static int w5100_spi_writebulk(struct net_device *ndev, u16 addr, const u8 *buf,
+static int w5100_spi_writebulk(struct net_device *ndev, u32 addr, const u8 *buf,
                               int len)
 {
        int i;
@@ -134,7 +135,7 @@ static int w5200_spi_init(struct net_device *ndev)
        return 0;
 }
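
For reference, the netif_trans_update() helper these conversions switch to is roughly the following (a sketch of the core inline of that era), writing through the queue-0 netdev_queue instead of the net_device field:

	static inline void netif_trans_update(struct net_device *dev)
	{
		struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

		if (txq->trans_start != jiffies)
			txq->trans_start = jiffies;
	}
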
 
-static int w5200_spi_read(struct net_device *ndev, u16 addr)
+static int w5200_spi_read(struct net_device *ndev, u32 addr)
 {
        struct spi_device *spi = to_spi_device(ndev->dev.parent);
        u8 cmd[4] = { addr >> 8, addr & 0xff, 0, 1 };
@@ -146,7 +147,7 @@ static int w5200_spi_read(struct net_device *ndev, u16 addr)
        return ret ? ret : data;
 }
 
-static int w5200_spi_write(struct net_device *ndev, u16 addr, u8 data)
+static int w5200_spi_write(struct net_device *ndev, u32 addr, u8 data)
 {
        struct spi_device *spi = to_spi_device(ndev->dev.parent);
        u8 cmd[5] = { addr >> 8, addr & 0xff, W5200_SPI_WRITE_OPCODE, 1, data };
@@ -154,7 +155,7 @@ static int w5200_spi_write(struct net_device *ndev, u16 addr, u8 data)
        return spi_write_then_read(spi, cmd, sizeof(cmd), NULL, 0);
 }
 
-static int w5200_spi_read16(struct net_device *ndev, u16 addr)
+static int w5200_spi_read16(struct net_device *ndev, u32 addr)
 {
        struct spi_device *spi = to_spi_device(ndev->dev.parent);
        u8 cmd[4] = { addr >> 8, addr & 0xff, 0, 2 };
@@ -166,7 +167,7 @@ static int w5200_spi_read16(struct net_device *ndev, u16 addr)
        return ret ? ret : be16_to_cpu(data);
 }
 
-static int w5200_spi_write16(struct net_device *ndev, u16 addr, u16 data)
+static int w5200_spi_write16(struct net_device *ndev, u32 addr, u16 data)
 {
        struct spi_device *spi = to_spi_device(ndev->dev.parent);
        u8 cmd[6] = {
@@ -178,7 +179,7 @@ static int w5200_spi_write16(struct net_device *ndev, u16 addr, u16 data)
        return spi_write_then_read(spi, cmd, sizeof(cmd), NULL, 0);
 }
 
-static int w5200_spi_readbulk(struct net_device *ndev, u16 addr, u8 *buf,
+static int w5200_spi_readbulk(struct net_device *ndev, u32 addr, u8 *buf,
                              int len)
 {
        struct spi_device *spi = to_spi_device(ndev->dev.parent);
@@ -208,7 +209,7 @@ static int w5200_spi_readbulk(struct net_device *ndev, u16 addr, u8 *buf,
        return ret;
 }
 
-static int w5200_spi_writebulk(struct net_device *ndev, u16 addr, const u8 *buf,
+static int w5200_spi_writebulk(struct net_device *ndev, u32 addr, const u8 *buf,
                               int len)
 {
        struct spi_device *spi = to_spi_device(ndev->dev.parent);
@@ -250,6 +251,164 @@ static const struct w5100_ops w5200_ops = {
        .init = w5200_spi_init,
 };
 
+#define W5500_SPI_BLOCK_SELECT(addr) (((addr) >> 16) & 0x1f)
+#define W5500_SPI_READ_CONTROL(addr) (W5500_SPI_BLOCK_SELECT(addr) << 3)
+#define W5500_SPI_WRITE_CONTROL(addr)  \
+       ((W5500_SPI_BLOCK_SELECT(addr) << 3) | BIT(2))
+
+struct w5500_spi_priv {
+       /* Serialize access to cmd_buf */
+       struct mutex cmd_lock;
+
+       /* DMA (thus cache coherency maintenance) requires the
+        * transfer buffers to live in their own cache lines.
+        */
+       u8 cmd_buf[3] ____cacheline_aligned;
+};
+
+static struct w5500_spi_priv *w5500_spi_priv(struct net_device *ndev)
+{
+       return w5100_ops_priv(ndev);
+}
+
+static int w5500_spi_init(struct net_device *ndev)
+{
+       struct w5500_spi_priv *spi_priv = w5500_spi_priv(ndev);
+
+       mutex_init(&spi_priv->cmd_lock);
+
+       return 0;
+}
+
+static int w5500_spi_read(struct net_device *ndev, u32 addr)
+{
+       struct spi_device *spi = to_spi_device(ndev->dev.parent);
+       u8 cmd[3] = {
+               addr >> 8,
+               addr,
+               W5500_SPI_READ_CONTROL(addr)
+       };
+       u8 data;
+       int ret;
+
+       ret = spi_write_then_read(spi, cmd, sizeof(cmd), &data, 1);
+
+       return ret ? ret : data;
+}
+
+static int w5500_spi_write(struct net_device *ndev, u32 addr, u8 data)
+{
+       struct spi_device *spi = to_spi_device(ndev->dev.parent);
+       u8 cmd[4] = {
+               addr >> 8,
+               addr,
+               W5500_SPI_WRITE_CONTROL(addr),
+               data
+       };
+
+       return spi_write_then_read(spi, cmd, sizeof(cmd), NULL, 0);
+}
+
+static int w5500_spi_read16(struct net_device *ndev, u32 addr)
+{
+       struct spi_device *spi = to_spi_device(ndev->dev.parent);
+       u8 cmd[3] = {
+               addr >> 8,
+               addr,
+               W5500_SPI_READ_CONTROL(addr)
+       };
+       __be16 data;
+       int ret;
+
+       ret = spi_write_then_read(spi, cmd, sizeof(cmd), &data, sizeof(data));
+
+       return ret ? ret : be16_to_cpu(data);
+}
+
+static int w5500_spi_write16(struct net_device *ndev, u32 addr, u16 data)
+{
+       struct spi_device *spi = to_spi_device(ndev->dev.parent);
+       u8 cmd[5] = {
+               addr >> 8,
+               addr,
+               W5500_SPI_WRITE_CONTROL(addr),
+               data >> 8,
+               data
+       };
+
+       return spi_write_then_read(spi, cmd, sizeof(cmd), NULL, 0);
+}
+
+static int w5500_spi_readbulk(struct net_device *ndev, u32 addr, u8 *buf,
+                             int len)
+{
+       struct spi_device *spi = to_spi_device(ndev->dev.parent);
+       struct w5500_spi_priv *spi_priv = w5500_spi_priv(ndev);
+       struct spi_transfer xfer[] = {
+               {
+                       .tx_buf = spi_priv->cmd_buf,
+                       .len = sizeof(spi_priv->cmd_buf),
+               },
+               {
+                       .rx_buf = buf,
+                       .len = len,
+               },
+       };
+       int ret;
+
+       mutex_lock(&spi_priv->cmd_lock);
+
+       spi_priv->cmd_buf[0] = addr >> 8;
+       spi_priv->cmd_buf[1] = addr;
+       spi_priv->cmd_buf[2] = W5500_SPI_READ_CONTROL(addr);
+       ret = spi_sync_transfer(spi, xfer, ARRAY_SIZE(xfer));
+
+       mutex_unlock(&spi_priv->cmd_lock);
+
+       return ret;
+}
+
+static int w5500_spi_writebulk(struct net_device *ndev, u32 addr, const u8 *buf,
+                              int len)
+{
+       struct spi_device *spi = to_spi_device(ndev->dev.parent);
+       struct w5500_spi_priv *spi_priv = w5500_spi_priv(ndev);
+       struct spi_transfer xfer[] = {
+               {
+                       .tx_buf = spi_priv->cmd_buf,
+                       .len = sizeof(spi_priv->cmd_buf),
+               },
+               {
+                       .tx_buf = buf,
+                       .len = len,
+               },
+       };
+       int ret;
+
+       mutex_lock(&spi_priv->cmd_lock);
+
+       spi_priv->cmd_buf[0] = addr >> 8;
+       spi_priv->cmd_buf[1] = addr;
+       spi_priv->cmd_buf[2] = W5500_SPI_WRITE_CONTROL(addr);
+       ret = spi_sync_transfer(spi, xfer, ARRAY_SIZE(xfer));
+
+       mutex_unlock(&spi_priv->cmd_lock);
+
+       return ret;
+}
+
+static const struct w5100_ops w5500_ops = {
+       .may_sleep = true,
+       .chip_id = W5500,
+       .read = w5500_spi_read,
+       .write = w5500_spi_write,
+       .read16 = w5500_spi_read16,
+       .write16 = w5500_spi_write16,
+       .readbulk = w5500_spi_readbulk,
+       .writebulk = w5500_spi_writebulk,
+       .init = w5500_spi_init,
+};
+
 static int w5100_spi_probe(struct spi_device *spi)
 {
        const struct spi_device_id *id = spi_get_device_id(spi);
@@ -265,6 +424,10 @@ static int w5100_spi_probe(struct spi_device *spi)
                ops = &w5200_ops;
                priv_size = sizeof(struct w5200_spi_priv);
                break;
+       case W5500:
+               ops = &w5500_ops;
+               priv_size = sizeof(struct w5500_spi_priv);
+               break;
        default:
                return -EINVAL;
        }
@@ -280,6 +443,7 @@ static int w5100_spi_remove(struct spi_device *spi)
 static const struct spi_device_id w5100_spi_ids[] = {
        { "w5100", W5100 },
        { "w5200", W5200 },
+       { "w5500", W5500 },
        {}
 };
 MODULE_DEVICE_TABLE(spi, w5100_spi_ids);
@@ -295,6 +459,6 @@ static struct spi_driver w5100_spi_driver = {
 };
 module_spi_driver(w5100_spi_driver);
 
-MODULE_DESCRIPTION("WIZnet W5100/W5200 Ethernet driver for SPI mode");
+MODULE_DESCRIPTION("WIZnet W5100/W5200/W5500 Ethernet driver for SPI mode");
 MODULE_AUTHOR("Akinobu Mita <akinobu.mita@gmail.com>");
 MODULE_LICENSE("GPL");
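
A worked example (hypothetical address) of the 3-byte W5500 SPI header built by the helpers above, for a read at addr = 0x1001e, i.e. offset 0x001e in the socket 0 register block:

	/*
	 *   cmd[0] = addr >> 8                = 0x00  offset address, high byte
	 *   cmd[1] = addr                     = 0x1e  offset address, low byte
	 *   cmd[2] = W5500_SPI_READ_CONTROL() = 0x08  block select 1 << 3, RWB = 0
	 *
	 * A write to the same address sets BIT(2) via W5500_SPI_WRITE_CONTROL(),
	 * giving cmd[2] = 0x0c.
	 */
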
index 09149c9..ec1889c 100644 (file)
@@ -38,7 +38,7 @@ MODULE_ALIAS("platform:"DRV_NAME);
 MODULE_LICENSE("GPL");
 
 /*
- * W5100 and W5100 common registers
+ * W5100/W5200/W5500 common registers
  */
 #define W5100_COMMON_REGS      0x0000
 #define W5100_MR               0x0000 /* Mode Register */
@@ -48,10 +48,6 @@ MODULE_LICENSE("GPL");
 #define   MR_IND                 0x01 /* Indirect mode */
 #define W5100_SHAR             0x0009 /* Source MAC address */
 #define W5100_IR               0x0015 /* Interrupt Register */
-#define W5100_IMR              0x0016 /* Interrupt Mask Register */
-#define   IR_S0                          0x01 /* S0 interrupt */
-#define W5100_RTR              0x0017 /* Retry Time-value Register */
-#define   RTR_DEFAULT            2000 /* =0x07d0 (2000) */
 #define W5100_COMMON_REGS_LEN  0x0040
 
 #define W5100_Sn_MR            0x0000 /* Sn Mode Register */
@@ -64,7 +60,7 @@ MODULE_LICENSE("GPL");
 #define W5100_Sn_RX_RSR                0x0026 /* Sn Receive free memory size */
 #define W5100_Sn_RX_RD         0x0028 /* Sn Receive memory read pointer */
 
-#define S0_REGS(priv)          (is_w5200(priv) ? W5200_S0_REGS : W5100_S0_REGS)
+#define S0_REGS(priv)          ((priv)->s0_regs)
 
 #define W5100_S0_MR(priv)      (S0_REGS(priv) + W5100_Sn_MR)
 #define   S0_MR_MACRAW           0x04 /* MAC RAW mode (promiscuous) */
@@ -88,7 +84,15 @@ MODULE_LICENSE("GPL");
 #define W5100_S0_REGS_LEN      0x0040
 
 /*
- * W5100 specific registers
+ * W5100 and W5200 common registers
+ */
+#define W5100_IMR              0x0016 /* Interrupt Mask Register */
+#define   IR_S0                          0x01 /* S0 interrupt */
+#define W5100_RTR              0x0017 /* Retry Time-value Register */
+#define   RTR_DEFAULT            2000 /* =0x07d0 (2000) */
+
+/*
+ * W5100 specific registers and memory
  */
 #define W5100_RMSR             0x001a /* Receive Memory Size */
 #define W5100_TMSR             0x001b /* Transmit Memory Size */
@@ -101,25 +105,57 @@ MODULE_LICENSE("GPL");
 #define W5100_RX_MEM_SIZE      0x2000
 
 /*
- * W5200 specific registers
+ * W5200 specific registers and memory
  */
 #define W5200_S0_REGS          0x4000
 
 #define W5200_Sn_RXMEM_SIZE(n) (0x401e + (n) * 0x0100) /* Sn RX Memory Size */
 #define W5200_Sn_TXMEM_SIZE(n) (0x401f + (n) * 0x0100) /* Sn TX Memory Size */
-#define W5200_S0_IMR           0x402c /* S0 Interrupt Mask Register */
 
 #define W5200_TX_MEM_START     0x8000
 #define W5200_TX_MEM_SIZE      0x4000
 #define W5200_RX_MEM_START     0xc000
 #define W5200_RX_MEM_SIZE      0x4000
 
+/*
+ * W5500 specific registers and memory
+ *
+ * W5500 registers and memory are organized into multiple blocks, each
+ * selected by a 16-bit offset address plus 5 block-select bits.  We
+ * encode both into a single 32-bit address: the lower 16 bits carry
+ * the offset address and the upper 16 bits carry the block-select bits.
+ */
+#define W5500_SIMR             0x0018 /* Socket Interrupt Mask Register */
+#define W5500_RTR              0x0019 /* Retry Time-value Register */
+
+#define W5500_S0_REGS          0x10000
+
+#define W5500_Sn_RXMEM_SIZE(n) \
+               (0x1001e + (n) * 0x40000) /* Sn RX Memory Size */
+#define W5500_Sn_TXMEM_SIZE(n) \
+               (0x1001f + (n) * 0x40000) /* Sn TX Memory Size */
+
+#define W5500_TX_MEM_START     0x20000
+#define W5500_TX_MEM_SIZE      0x04000
+#define W5500_RX_MEM_START     0x30000
+#define W5500_RX_MEM_SIZE      0x04000
+
 /*
  * Device driver private data structure
  */
 
 struct w5100_priv {
        const struct w5100_ops *ops;
+
+       /* Socket 0 register offset address */
+       u32 s0_regs;
+       /* Socket 0 TX buffer offset address and size */
+       u32 s0_tx_buf;
+       u16 s0_tx_buf_size;
+       /* Socket 0 RX buffer offset address and size */
+       u32 s0_rx_buf;
+       u16 s0_rx_buf_size;
+
        int irq;
        int link_irq;
        int link_gpio;
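
An example of the 32-bit encoded-address scheme described in the comment above, for a hypothetical socket number n = 2:

	/*
	 *   W5500_Sn_RXMEM_SIZE(2) = 0x1001e + 2 * 0x40000 = 0x9001e
	 *     block select bits = 0x9001e >> 16    = 0x09  (socket 2 registers)
	 *     offset address    = 0x9001e & 0xffff = 0x001e
	 */
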
@@ -172,12 +208,12 @@ static inline void __iomem *w5100_mmio(struct net_device *ndev)
  *
  * 0x8000 bytes are required for memory space.
  */
-static inline int w5100_read_direct(struct net_device *ndev, u16 addr)
+static inline int w5100_read_direct(struct net_device *ndev, u32 addr)
 {
        return ioread8(w5100_mmio(ndev) + (addr << CONFIG_WIZNET_BUS_SHIFT));
 }
 
-static inline int __w5100_write_direct(struct net_device *ndev, u16 addr,
+static inline int __w5100_write_direct(struct net_device *ndev, u32 addr,
                                       u8 data)
 {
        iowrite8(data, w5100_mmio(ndev) + (addr << CONFIG_WIZNET_BUS_SHIFT));
@@ -185,7 +221,7 @@ static inline int __w5100_write_direct(struct net_device *ndev, u16 addr,
        return 0;
 }
 
-static inline int w5100_write_direct(struct net_device *ndev, u16 addr, u8 data)
+static inline int w5100_write_direct(struct net_device *ndev, u32 addr, u8 data)
 {
        __w5100_write_direct(ndev, addr, data);
        mmiowb();
@@ -193,7 +229,7 @@ static inline int w5100_write_direct(struct net_device *ndev, u16 addr, u8 data)
        return 0;
 }
 
-static int w5100_read16_direct(struct net_device *ndev, u16 addr)
+static int w5100_read16_direct(struct net_device *ndev, u32 addr)
 {
        u16 data;
        data  = w5100_read_direct(ndev, addr) << 8;
@@ -201,7 +237,7 @@ static int w5100_read16_direct(struct net_device *ndev, u16 addr)
        return data;
 }
 
-static int w5100_write16_direct(struct net_device *ndev, u16 addr, u16 data)
+static int w5100_write16_direct(struct net_device *ndev, u32 addr, u16 data)
 {
        __w5100_write_direct(ndev, addr, data >> 8);
        __w5100_write_direct(ndev, addr + 1, data);
@@ -210,7 +246,7 @@ static int w5100_write16_direct(struct net_device *ndev, u16 addr, u16 data)
        return 0;
 }
 
-static int w5100_readbulk_direct(struct net_device *ndev, u16 addr, u8 *buf,
+static int w5100_readbulk_direct(struct net_device *ndev, u32 addr, u8 *buf,
                                 int len)
 {
        int i;
@@ -221,7 +257,7 @@ static int w5100_readbulk_direct(struct net_device *ndev, u16 addr, u8 *buf,
        return 0;
 }
 
-static int w5100_writebulk_direct(struct net_device *ndev, u16 addr,
+static int w5100_writebulk_direct(struct net_device *ndev, u32 addr,
                                  const u8 *buf, int len)
 {
        int i;
@@ -275,7 +311,7 @@ static const struct w5100_ops w5100_mmio_direct_ops = {
 #define W5100_IDM_AR           0x01   /* Indirect Mode Address Register */
 #define W5100_IDM_DR           0x03   /* Indirect Mode Data Register */
 
-static int w5100_read_indirect(struct net_device *ndev, u16 addr)
+static int w5100_read_indirect(struct net_device *ndev, u32 addr)
 {
        struct w5100_mmio_priv *mmio_priv = w5100_mmio_priv(ndev);
        unsigned long flags;
@@ -289,7 +325,7 @@ static int w5100_read_indirect(struct net_device *ndev, u16 addr)
        return data;
 }
 
-static int w5100_write_indirect(struct net_device *ndev, u16 addr, u8 data)
+static int w5100_write_indirect(struct net_device *ndev, u32 addr, u8 data)
 {
        struct w5100_mmio_priv *mmio_priv = w5100_mmio_priv(ndev);
        unsigned long flags;
@@ -302,7 +338,7 @@ static int w5100_write_indirect(struct net_device *ndev, u16 addr, u8 data)
        return 0;
 }
 
-static int w5100_read16_indirect(struct net_device *ndev, u16 addr)
+static int w5100_read16_indirect(struct net_device *ndev, u32 addr)
 {
        struct w5100_mmio_priv *mmio_priv = w5100_mmio_priv(ndev);
        unsigned long flags;
@@ -317,7 +353,7 @@ static int w5100_read16_indirect(struct net_device *ndev, u16 addr)
        return data;
 }
 
-static int w5100_write16_indirect(struct net_device *ndev, u16 addr, u16 data)
+static int w5100_write16_indirect(struct net_device *ndev, u32 addr, u16 data)
 {
        struct w5100_mmio_priv *mmio_priv = w5100_mmio_priv(ndev);
        unsigned long flags;
@@ -331,7 +367,7 @@ static int w5100_write16_indirect(struct net_device *ndev, u16 addr, u16 data)
        return 0;
 }
 
-static int w5100_readbulk_indirect(struct net_device *ndev, u16 addr, u8 *buf,
+static int w5100_readbulk_indirect(struct net_device *ndev, u32 addr, u8 *buf,
                                   int len)
 {
        struct w5100_mmio_priv *mmio_priv = w5100_mmio_priv(ndev);
@@ -350,7 +386,7 @@ static int w5100_readbulk_indirect(struct net_device *ndev, u16 addr, u8 *buf,
        return 0;
 }
 
-static int w5100_writebulk_indirect(struct net_device *ndev, u16 addr,
+static int w5100_writebulk_indirect(struct net_device *ndev, u32 addr,
                                    const u8 *buf, int len)
 {
        struct w5100_mmio_priv *mmio_priv = w5100_mmio_priv(ndev);
@@ -392,32 +428,32 @@ static const struct w5100_ops w5100_mmio_indirect_ops = {
 
 #if defined(CONFIG_WIZNET_BUS_DIRECT)
 
-static int w5100_read(struct w5100_priv *priv, u16 addr)
+static int w5100_read(struct w5100_priv *priv, u32 addr)
 {
        return w5100_read_direct(priv->ndev, addr);
 }
 
-static int w5100_write(struct w5100_priv *priv, u16 addr, u8 data)
+static int w5100_write(struct w5100_priv *priv, u32 addr, u8 data)
 {
        return w5100_write_direct(priv->ndev, addr, data);
 }
 
-static int w5100_read16(struct w5100_priv *priv, u16 addr)
+static int w5100_read16(struct w5100_priv *priv, u32 addr)
 {
        return w5100_read16_direct(priv->ndev, addr);
 }
 
-static int w5100_write16(struct w5100_priv *priv, u16 addr, u16 data)
+static int w5100_write16(struct w5100_priv *priv, u32 addr, u16 data)
 {
        return w5100_write16_direct(priv->ndev, addr, data);
 }
 
-static int w5100_readbulk(struct w5100_priv *priv, u16 addr, u8 *buf, int len)
+static int w5100_readbulk(struct w5100_priv *priv, u32 addr, u8 *buf, int len)
 {
        return w5100_readbulk_direct(priv->ndev, addr, buf, len);
 }
 
-static int w5100_writebulk(struct w5100_priv *priv, u16 addr, const u8 *buf,
+static int w5100_writebulk(struct w5100_priv *priv, u32 addr, const u8 *buf,
                           int len)
 {
        return w5100_writebulk_direct(priv->ndev, addr, buf, len);
@@ -425,32 +461,32 @@ static int w5100_writebulk(struct w5100_priv *priv, u16 addr, const u8 *buf,
 
 #elif defined(CONFIG_WIZNET_BUS_INDIRECT)
 
-static int w5100_read(struct w5100_priv *priv, u16 addr)
+static int w5100_read(struct w5100_priv *priv, u32 addr)
 {
        return w5100_read_indirect(priv->ndev, addr);
 }
 
-static int w5100_write(struct w5100_priv *priv, u16 addr, u8 data)
+static int w5100_write(struct w5100_priv *priv, u32 addr, u8 data)
 {
        return w5100_write_indirect(priv->ndev, addr, data);
 }
 
-static int w5100_read16(struct w5100_priv *priv, u16 addr)
+static int w5100_read16(struct w5100_priv *priv, u32 addr)
 {
        return w5100_read16_indirect(priv->ndev, addr);
 }
 
-static int w5100_write16(struct w5100_priv *priv, u16 addr, u16 data)
+static int w5100_write16(struct w5100_priv *priv, u32 addr, u16 data)
 {
        return w5100_write16_indirect(priv->ndev, addr, data);
 }
 
-static int w5100_readbulk(struct w5100_priv *priv, u16 addr, u8 *buf, int len)
+static int w5100_readbulk(struct w5100_priv *priv, u32 addr, u8 *buf, int len)
 {
        return w5100_readbulk_indirect(priv->ndev, addr, buf, len);
 }
 
-static int w5100_writebulk(struct w5100_priv *priv, u16 addr, const u8 *buf,
+static int w5100_writebulk(struct w5100_priv *priv, u32 addr, const u8 *buf,
                           int len)
 {
        return w5100_writebulk_indirect(priv->ndev, addr, buf, len);
@@ -458,32 +494,32 @@ static int w5100_writebulk(struct w5100_priv *priv, u16 addr, const u8 *buf,
 
 #else /* CONFIG_WIZNET_BUS_ANY */
 
-static int w5100_read(struct w5100_priv *priv, u16 addr)
+static int w5100_read(struct w5100_priv *priv, u32 addr)
 {
        return priv->ops->read(priv->ndev, addr);
 }
 
-static int w5100_write(struct w5100_priv *priv, u16 addr, u8 data)
+static int w5100_write(struct w5100_priv *priv, u32 addr, u8 data)
 {
        return priv->ops->write(priv->ndev, addr, data);
 }
 
-static int w5100_read16(struct w5100_priv *priv, u16 addr)
+static int w5100_read16(struct w5100_priv *priv, u32 addr)
 {
        return priv->ops->read16(priv->ndev, addr);
 }
 
-static int w5100_write16(struct w5100_priv *priv, u16 addr, u16 data)
+static int w5100_write16(struct w5100_priv *priv, u32 addr, u16 data)
 {
        return priv->ops->write16(priv->ndev, addr, data);
 }
 
-static int w5100_readbulk(struct w5100_priv *priv, u16 addr, u8 *buf, int len)
+static int w5100_readbulk(struct w5100_priv *priv, u32 addr, u8 *buf, int len)
 {
        return priv->ops->readbulk(priv->ndev, addr, buf, len);
 }
 
-static int w5100_writebulk(struct w5100_priv *priv, u16 addr, const u8 *buf,
+static int w5100_writebulk(struct w5100_priv *priv, u32 addr, const u8 *buf,
                           int len)
 {
        return priv->ops->writebulk(priv->ndev, addr, buf, len);
@@ -493,13 +529,11 @@ static int w5100_writebulk(struct w5100_priv *priv, u16 addr, const u8 *buf,
 
 static int w5100_readbuf(struct w5100_priv *priv, u16 offset, u8 *buf, int len)
 {
-       u16 addr;
+       u32 addr;
        int remain = 0;
        int ret;
-       const u16 mem_start =
-               is_w5200(priv) ? W5200_RX_MEM_START : W5100_RX_MEM_START;
-       const u16 mem_size =
-               is_w5200(priv) ? W5200_RX_MEM_SIZE : W5100_RX_MEM_SIZE;
+       const u32 mem_start = priv->s0_rx_buf;
+       const u16 mem_size = priv->s0_rx_buf_size;
 
        offset %= mem_size;
        addr = mem_start + offset;
@@ -519,13 +553,11 @@ static int w5100_readbuf(struct w5100_priv *priv, u16 offset, u8 *buf, int len)
 static int w5100_writebuf(struct w5100_priv *priv, u16 offset, const u8 *buf,
                          int len)
 {
-       u16 addr;
+       u32 addr;
        int ret;
        int remain = 0;
-       const u16 mem_start =
-               is_w5200(priv) ? W5200_TX_MEM_START : W5100_TX_MEM_START;
-       const u16 mem_size =
-               is_w5200(priv) ? W5200_TX_MEM_SIZE : W5100_TX_MEM_SIZE;
+       const u32 mem_start = priv->s0_tx_buf;
+       const u16 mem_size = priv->s0_tx_buf_size;
 
        offset %= mem_size;
        addr = mem_start + offset;
@@ -578,6 +610,28 @@ static void w5100_write_macaddr(struct w5100_priv *priv)
        w5100_writebulk(priv, W5100_SHAR, ndev->dev_addr, ETH_ALEN);
 }
 
+static void w5100_socket_intr_mask(struct w5100_priv *priv, u8 mask)
+{
+       u32 imr;
+
+       if (priv->ops->chip_id == W5500)
+               imr = W5500_SIMR;
+       else
+               imr = W5100_IMR;
+
+       w5100_write(priv, imr, mask);
+}
+
+static void w5100_enable_intr(struct w5100_priv *priv)
+{
+       w5100_socket_intr_mask(priv, IR_S0);
+}
+
+static void w5100_disable_intr(struct w5100_priv *priv)
+{
+       w5100_socket_intr_mask(priv, 0);
+}
+
 static void w5100_memory_configure(struct w5100_priv *priv)
 {
        /* Configure 16K of internal memory
@@ -603,17 +657,52 @@ static void w5200_memory_configure(struct w5100_priv *priv)
        }
 }
 
-static void w5100_hw_reset(struct w5100_priv *priv)
+static void w5500_memory_configure(struct w5100_priv *priv)
 {
+       int i;
+
+       /* Configure internal RX memory as 16K RX buffer and
+        * internal TX memory as 16K TX buffer
+        */
+       w5100_write(priv, W5500_Sn_RXMEM_SIZE(0), 0x10);
+       w5100_write(priv, W5500_Sn_TXMEM_SIZE(0), 0x10);
+
+       for (i = 1; i < 8; i++) {
+               w5100_write(priv, W5500_Sn_RXMEM_SIZE(i), 0);
+               w5100_write(priv, W5500_Sn_TXMEM_SIZE(i), 0);
+       }
+}
+
+static int w5100_hw_reset(struct w5100_priv *priv)
+{
+       u32 rtr;
+
        w5100_reset(priv);
 
-       w5100_write(priv, W5100_IMR, 0);
+       w5100_disable_intr(priv);
        w5100_write_macaddr(priv);
 
-       if (is_w5200(priv))
-               w5200_memory_configure(priv);
-       else
+       switch (priv->ops->chip_id) {
+       case W5100:
                w5100_memory_configure(priv);
+               rtr = W5100_RTR;
+               break;
+       case W5200:
+               w5200_memory_configure(priv);
+               rtr = W5100_RTR;
+               break;
+       case W5500:
+               w5500_memory_configure(priv);
+               rtr = W5500_RTR;
+               break;
+       default:
+               return -EINVAL;
+       }
+
+       if (w5100_read16(priv, rtr) != RTR_DEFAULT)
+               return -ENODEV;
+
+       return 0;
 }
 
 static void w5100_hw_start(struct w5100_priv *priv)
@@ -621,12 +710,12 @@ static void w5100_hw_start(struct w5100_priv *priv)
        w5100_write(priv, W5100_S0_MR(priv), priv->promisc ?
                          S0_MR_MACRAW : S0_MR_MACRAW_MF);
        w5100_command(priv, S0_CR_OPEN);
-       w5100_write(priv, W5100_IMR, IR_S0);
+       w5100_enable_intr(priv);
 }
 
 static void w5100_hw_close(struct w5100_priv *priv)
 {
-       w5100_write(priv, W5100_IMR, 0);
+       w5100_disable_intr(priv);
        w5100_command(priv, S0_CR_CLOSE);
 }
 
@@ -693,7 +782,7 @@ static void w5100_restart(struct net_device *ndev)
        w5100_hw_reset(priv);
        w5100_hw_start(priv);
        ndev->stats.tx_errors++;
-       ndev->trans_start = jiffies;
+       netif_trans_update(ndev);
        netif_wake_queue(ndev);
 }
 
@@ -805,7 +894,7 @@ static void w5100_rx_work(struct work_struct *work)
        while ((skb = w5100_rx_skb(priv->ndev)))
                netif_rx_ni(skb);
 
-       w5100_write(priv, W5100_IMR, IR_S0);
+       w5100_enable_intr(priv);
 }
 
 static int w5100_napi_poll(struct napi_struct *napi, int budget)
@@ -824,7 +913,7 @@ static int w5100_napi_poll(struct napi_struct *napi, int budget)
 
        if (rx_count < budget) {
                napi_complete(napi);
-               w5100_write(priv, W5100_IMR, IR_S0);
+               w5100_enable_intr(priv);
        }
 
        return rx_count;
@@ -846,7 +935,7 @@ static irqreturn_t w5100_interrupt(int irq, void *ndev_instance)
        }
 
        if (ir & S0_IR_RECV) {
-               w5100_write(priv, W5100_IMR, 0);
+               w5100_disable_intr(priv);
 
                if (priv->ops->may_sleep)
                        queue_work(priv->xfer_wq, &priv->rx_work);
@@ -1014,6 +1103,34 @@ int w5100_probe(struct device *dev, const struct w5100_ops *ops,
        SET_NETDEV_DEV(ndev, dev);
        dev_set_drvdata(dev, ndev);
        priv = netdev_priv(ndev);
+
+       switch (ops->chip_id) {
+       case W5100:
+               priv->s0_regs = W5100_S0_REGS;
+               priv->s0_tx_buf = W5100_TX_MEM_START;
+               priv->s0_tx_buf_size = W5100_TX_MEM_SIZE;
+               priv->s0_rx_buf = W5100_RX_MEM_START;
+               priv->s0_rx_buf_size = W5100_RX_MEM_SIZE;
+               break;
+       case W5200:
+               priv->s0_regs = W5200_S0_REGS;
+               priv->s0_tx_buf = W5200_TX_MEM_START;
+               priv->s0_tx_buf_size = W5200_TX_MEM_SIZE;
+               priv->s0_rx_buf = W5200_RX_MEM_START;
+               priv->s0_rx_buf_size = W5200_RX_MEM_SIZE;
+               break;
+       case W5500:
+               priv->s0_regs = W5500_S0_REGS;
+               priv->s0_tx_buf = W5500_TX_MEM_START;
+               priv->s0_tx_buf_size = W5500_TX_MEM_SIZE;
+               priv->s0_rx_buf = W5500_RX_MEM_START;
+               priv->s0_rx_buf_size = W5500_RX_MEM_SIZE;
+               break;
+       default:
+               err = -EINVAL;
+               goto err_register;
+       }
+
        priv->ndev = ndev;
        priv->ops = ops;
        priv->irq = irq;
@@ -1055,11 +1172,9 @@ int w5100_probe(struct device *dev, const struct w5100_ops *ops,
                        goto err_hw;
        }
 
-       w5100_hw_reset(priv);
-       if (w5100_read16(priv, W5100_RTR) != RTR_DEFAULT) {
-               err = -ENODEV;
+       err = w5100_hw_reset(priv);
+       if (err)
                goto err_hw;
-       }
 
        if (ops->may_sleep) {
                err = request_threaded_irq(priv->irq, NULL, w5100_interrupt,
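
The u16-to-u32 widening that runs through this whole file appears driven by the new W5500 support: the driver composes a flat address from the chip's 16-bit register offset plus block-select bits, and the socket buffer windows can then sit above 0xFFFF, where 16-bit arithmetic would silently wrap. A minimal standalone illustration of the wrap (values hypothetical, not taken from the driver):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t a16 = 0xC000;          /* hypothetical buffer base */
        uint32_t a32 = 0xC000;
        uint16_t off = 0x5000;          /* hypothetical ring offset */

        /* with u16 the sum truncates back into low memory */
        printf("u16 sum: 0x%04x\n", (unsigned)(uint16_t)(a16 + off));
        /* with u32 it lands where the W5500 window actually is */
        printf("u32 sum: 0x%05x\n", (unsigned)(a32 + off));
        return 0;
    }
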
index 9b1fa23..f8a16fa 100644 (file)
 enum {
        W5100,
        W5200,
+       W5500,
 };
 
 struct w5100_ops {
        bool may_sleep;
        int chip_id;
-       int (*read)(struct net_device *ndev, u16 addr);
-       int (*write)(struct net_device *ndev, u16 addr, u8 data);
-       int (*read16)(struct net_device *ndev, u16 addr);
-       int (*write16)(struct net_device *ndev, u16 addr, u16 data);
-       int (*readbulk)(struct net_device *ndev, u16 addr, u8 *buf, int len);
-       int (*writebulk)(struct net_device *ndev, u16 addr, const u8 *buf,
+       int (*read)(struct net_device *ndev, u32 addr);
+       int (*write)(struct net_device *ndev, u32 addr, u8 data);
+       int (*read16)(struct net_device *ndev, u32 addr);
+       int (*write16)(struct net_device *ndev, u32 addr, u16 data);
+       int (*readbulk)(struct net_device *ndev, u32 addr, u8 *buf, int len);
+       int (*writebulk)(struct net_device *ndev, u32 addr, const u8 *buf,
                         int len);
        int (*reset)(struct net_device *ndev);
        int (*init)(struct net_device *ndev);
index 8da7b93..0b37ce9 100644 (file)
@@ -362,7 +362,7 @@ static void w5300_tx_timeout(struct net_device *ndev)
        w5300_hw_reset(priv);
        w5300_hw_start(priv);
        ndev->stats.tx_errors++;
-       ndev->trans_start = jiffies;
+       netif_trans_update(ndev);
        netif_wake_queue(ndev);
 }
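
This conversion, and the many identical ones in the hunks below, replaces direct writes to the legacy net_device::trans_start with netif_trans_update(), which stamps the per-queue trans_start instead; dev_trans_start() is the matching reader used by the mkiss and fjes hunks further down. A userspace sketch of the helper pair, simplified to a single queue (the real helpers in include/linux/netdevice.h handle multiqueue devices):

    #include <stdio.h>

    typedef unsigned long jiffies_t;
    static jiffies_t jiffies;

    struct netdev_queue { jiffies_t trans_start; };
    struct net_device  { struct netdev_queue txq0; };

    static void netif_trans_update(struct net_device *dev)
    {
        dev->txq0.trans_start = jiffies;    /* stamp the queue, not the device */
    }

    static jiffies_t dev_trans_start(struct net_device *dev)
    {
        return dev->txq0.trans_start;       /* what tx watchdogs compare against */
    }

    int main(void)
    {
        struct net_device dev = { { 0 } };

        jiffies = 1000;
        netif_trans_update(&dev);
        printf("last tx stamped at %lu\n", dev_trans_start(&dev));
        return 0;
    }
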
 
index 5a1068d..7397087 100644 (file)
@@ -584,7 +584,7 @@ static void temac_device_reset(struct net_device *ndev)
                dev_err(&ndev->dev, "Error setting TEMAC options\n");
 
        /* Init Driver variable */
-       ndev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(ndev); /* prevent tx timeout */
 }
 
 static void temac_adjust_link(struct net_device *ndev)
index 4684644..8c7f5be 100644 (file)
@@ -508,7 +508,7 @@ static void axienet_device_reset(struct net_device *ndev)
        axienet_set_multicast_list(ndev);
        axienet_setoptions(ndev, lp->options);
 
-       ndev->trans_start = jiffies;
+       netif_trans_update(ndev);
 }
 
 /**
index e324b30..3cee84a 100644 (file)
@@ -531,7 +531,7 @@ static void xemaclite_tx_timeout(struct net_device *dev)
        }
 
        /* To exclude tx timeout */
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
 
        /* We're all ready to go. Start the queue */
        netif_wake_queue(dev);
@@ -563,7 +563,7 @@ static void xemaclite_tx_handler(struct net_device *dev)
                        dev->stats.tx_bytes += lp->deferred_skb->len;
                        dev_kfree_skb_irq(lp->deferred_skb);
                        lp->deferred_skb = NULL;
-                       dev->trans_start = jiffies; /* prevent tx timeout */
+                       netif_trans_update(dev); /* prevent tx timeout */
                        netif_wake_queue(dev);
                }
        }
index d56f869..7b44968 100644 (file)
@@ -1199,7 +1199,7 @@ xirc2ps_tx_timeout_task(struct work_struct *work)
        struct net_device *dev = local->dev;
     /* reset the card */
     do_reset(dev,1);
-    dev->trans_start = jiffies; /* prevent tx timeout */
+    netif_trans_update(dev); /* prevent tx timeout */
     netif_wake_queue(dev);
 }
 
index bb7e903..f4e6926 100644 (file)
@@ -471,7 +471,7 @@ static void fjes_tx_stall_task(struct work_struct *work)
        int i;
 
        if (((long)jiffies -
-               (long)(netdev->trans_start)) > FJES_TX_TX_STALL_TIMEOUT) {
+               dev_trans_start(netdev)) > FJES_TX_TX_STALL_TIMEOUT) {
                netif_wake_queue(netdev);
                return;
        }
@@ -718,7 +718,7 @@ fjes_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
 
                                        ret = NETDEV_TX_OK;
                                } else {
-                                       netdev->trans_start = jiffies;
+                                       netif_trans_update(netdev);
                                        netif_tx_stop_queue(cur_queue);
 
                                        if (!work_pending(&adapter->tx_stall_task))
index 72c9f1f..eb66638 100644 (file)
@@ -780,8 +780,10 @@ static int baycom_send_packet(struct sk_buff *skb, struct net_device *dev)
                dev_kfree_skb(skb);
                return NETDEV_TX_OK;
        }
-       if (bc->skb)
-               return NETDEV_TX_LOCKED;
+       if (bc->skb) {
+               dev_kfree_skb(skb);
+               return NETDEV_TX_OK;
+       }
        /* strip KISS byte */
        if (skb->len >= HDLCDRV_MAXFLEN+1 || skb->len < 3) {
                dev_kfree_skb(skb);
index 49fe59b..4bad0b8 100644 (file)
@@ -412,8 +412,10 @@ static netdev_tx_t hdlcdrv_send_packet(struct sk_buff *skb,
                dev_kfree_skb(skb);
                return NETDEV_TX_OK;
        }
-       if (sm->skb)
-               return NETDEV_TX_LOCKED;
+       if (sm->skb) {
+               dev_kfree_skb(skb);
+               return NETDEV_TX_OK;
+       }
        netif_stop_queue(dev);
        sm->skb = skb;
        return NETDEV_TX_OK;
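
Here and in the baycom hunk above, the retired NETDEV_TX_LOCKED return is dropped: instead of asking the core to retry, the driver now consumes the colliding skb and reports NETDEV_TX_OK. A sketch of the resulting xmit contract, with stand-in types (not the drivers' real structures):

    #include <stdio.h>
    #include <stdlib.h>

    enum netdev_tx { NETDEV_TX_OK = 0, NETDEV_TX_BUSY = 1 };

    struct sk_buff { int len; };
    static void dev_kfree_skb(struct sk_buff *skb) { free(skb); }

    struct dev_priv { struct sk_buff *pending; };

    static enum netdev_tx send_packet(struct dev_priv *p, struct sk_buff *skb)
    {
        if (p->pending) {
            dev_kfree_skb(skb);     /* consumed: counts as a drop */
            return NETDEV_TX_OK;
        }
        p->pending = skb;
        return NETDEV_TX_OK;
    }

    int main(void)
    {
        struct dev_priv p = { NULL };
        struct sk_buff *a = malloc(sizeof(*a)), *b = malloc(sizeof(*b));

        send_packet(&p, a);
        send_packet(&p, b);         /* second frame is dropped, not re-queued */
        free(p.pending);
        return 0;
    }
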
index 85828f1..1dfe230 100644 (file)
@@ -519,7 +519,7 @@ static void ax_encaps(struct net_device *dev, unsigned char *icp, int len)
        dev->stats.tx_packets++;
        dev->stats.tx_bytes += actual;
 
-       ax->dev->trans_start = jiffies;
+       netif_trans_update(ax->dev);
        ax->xleft = count - actual;
        ax->xhead = ax->xbuff + actual;
 }
@@ -542,7 +542,7 @@ static netdev_tx_t ax_xmit(struct sk_buff *skb, struct net_device *dev)
                 * May be we must check transmitter timeout here ?
                 *      14 Oct 1994 Dmitry Gorodchanin.
                 */
-               if (time_before(jiffies, dev->trans_start + 20 * HZ)) {
+               if (time_before(jiffies, dev_trans_start(dev) + 20 * HZ)) {
                        /* 20 sec timeout not reached */
                        return NETDEV_TX_BUSY;
                }
index ce88df3..b808316 100644 (file)
@@ -1669,7 +1669,7 @@ static netdev_tx_t scc_net_tx(struct sk_buff *skb, struct net_device *dev)
                dev_kfree_skb(skb_del);
        }
        skb_queue_tail(&scc->tx_queue, skb);
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        
 
        /*
index 1a4729c..aaff07c 100644 (file)
@@ -601,7 +601,7 @@ static netdev_tx_t yam_send_packet(struct sk_buff *skb,
                return ax25_ip_xmit(skb);
 
        skb_queue_tail(&yp->send_queue, skb);
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        return NETDEV_TX_OK;
 }
 
index cb9e9fe..9f10da6 100644 (file)
@@ -1340,7 +1340,7 @@ static struct at86rf2xx_chip_data at86rf233_data = {
        .t_off_to_aack = 80,
        .t_off_to_tx_on = 80,
        .t_off_to_sleep = 35,
-       .t_sleep_to_off = 210,
+       .t_sleep_to_off = 1000,
        .t_frame = 4096,
        .t_p_ack = 545,
        .rssi_base_val = -91,
@@ -1355,7 +1355,7 @@ static struct at86rf2xx_chip_data at86rf231_data = {
        .t_off_to_aack = 110,
        .t_off_to_tx_on = 110,
        .t_off_to_sleep = 35,
-       .t_sleep_to_off = 380,
+       .t_sleep_to_off = 1000,
        .t_frame = 4096,
        .t_p_ack = 545,
        .rssi_base_val = -91,
@@ -1370,7 +1370,7 @@ static struct at86rf2xx_chip_data at86rf212_data = {
        .t_off_to_aack = 200,
        .t_off_to_tx_on = 200,
        .t_off_to_sleep = 35,
-       .t_sleep_to_off = 380,
+       .t_sleep_to_off = 1000,
        .t_frame = 4096,
        .t_p_ack = 545,
        .rssi_base_val = -100,
index b1cd865..52c9051 100644 (file)
@@ -3,6 +3,8 @@
  *
  * Written 2013 by Werner Almesberger <werner@almesberger.net>
  *
+ * Copyright (c) 2015 - 2016 Stefan Schmidt <stefan@datenfreihafen.org>
+ *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of the GNU General Public License as
  * published by the Free Software Foundation, version 2
@@ -472,6 +474,76 @@ atusb_set_txpower(struct ieee802154_hw *hw, s32 mbm)
        return -EINVAL;
 }
 
+#define ATUSB_MAX_ED_LEVELS 0xF
+static const s32 atusb_ed_levels[ATUSB_MAX_ED_LEVELS + 1] = {
+       -9100, -8900, -8700, -8500, -8300, -8100, -7900, -7700, -7500, -7300,
+       -7100, -6900, -6700, -6500, -6300, -6100,
+};
+
+static int
+atusb_set_cca_mode(struct ieee802154_hw *hw, const struct wpan_phy_cca *cca)
+{
+       struct atusb *atusb = hw->priv;
+       u8 val;
+
+       /* mapping 802.15.4 to driver spec */
+       switch (cca->mode) {
+       case NL802154_CCA_ENERGY:
+               val = 1;
+               break;
+       case NL802154_CCA_CARRIER:
+               val = 2;
+               break;
+       case NL802154_CCA_ENERGY_CARRIER:
+               switch (cca->opt) {
+               case NL802154_CCA_OPT_ENERGY_CARRIER_AND:
+                       val = 3;
+                       break;
+               case NL802154_CCA_OPT_ENERGY_CARRIER_OR:
+                       val = 0;
+                       break;
+               default:
+                       return -EINVAL;
+               }
+               break;
+       default:
+               return -EINVAL;
+       }
+
+       return atusb_write_subreg(atusb, SR_CCA_MODE, val);
+}
+
+static int
+atusb_set_cca_ed_level(struct ieee802154_hw *hw, s32 mbm)
+{
+       struct atusb *atusb = hw->priv;
+       u32 i;
+
+       for (i = 0; i < hw->phy->supported.cca_ed_levels_size; i++) {
+               if (hw->phy->supported.cca_ed_levels[i] == mbm)
+                       return atusb_write_subreg(atusb, SR_CCA_ED_THRES, i);
+       }
+
+       return -EINVAL;
+}
+
+static int
+atusb_set_csma_params(struct ieee802154_hw *hw, u8 min_be, u8 max_be, u8 retries)
+{
+       struct atusb *atusb = hw->priv;
+       int ret;
+
+       ret = atusb_write_subreg(atusb, SR_MIN_BE, min_be);
+       if (ret)
+               return ret;
+
+       ret = atusb_write_subreg(atusb, SR_MAX_BE, max_be);
+       if (ret)
+               return ret;
+
+       return atusb_write_subreg(atusb, SR_MAX_CSMA_RETRIES, retries);
+}
+
 static int
 atusb_set_promiscuous_mode(struct ieee802154_hw *hw, const bool on)
 {
@@ -508,6 +580,9 @@ static struct ieee802154_ops atusb_ops = {
        .stop                   = atusb_stop,
        .set_hw_addr_filt       = atusb_set_hw_addr_filt,
        .set_txpower            = atusb_set_txpower,
+       .set_cca_mode           = atusb_set_cca_mode,
+       .set_cca_ed_level       = atusb_set_cca_ed_level,
+       .set_csma_params        = atusb_set_csma_params,
        .set_promiscuous_mode   = atusb_set_promiscuous_mode,
 };
 
@@ -636,9 +711,20 @@ static int atusb_probe(struct usb_interface *interface,
 
        hw->parent = &usb_dev->dev;
        hw->flags = IEEE802154_HW_TX_OMIT_CKSUM | IEEE802154_HW_AFILT |
-                   IEEE802154_HW_PROMISCUOUS;
+                   IEEE802154_HW_PROMISCUOUS | IEEE802154_HW_CSMA_PARAMS;
+
+       hw->phy->flags = WPAN_PHY_FLAG_TXPOWER | WPAN_PHY_FLAG_CCA_ED_LEVEL |
+                        WPAN_PHY_FLAG_CCA_MODE;
+
+       hw->phy->supported.cca_modes = BIT(NL802154_CCA_ENERGY) |
+               BIT(NL802154_CCA_CARRIER) | BIT(NL802154_CCA_ENERGY_CARRIER);
+       hw->phy->supported.cca_opts = BIT(NL802154_CCA_OPT_ENERGY_CARRIER_AND) |
+               BIT(NL802154_CCA_OPT_ENERGY_CARRIER_OR);
+
+       hw->phy->supported.cca_ed_levels = atusb_ed_levels;
+       hw->phy->supported.cca_ed_levels_size = ARRAY_SIZE(atusb_ed_levels);
 
-       hw->phy->flags = WPAN_PHY_FLAG_TXPOWER;
+       hw->phy->cca.mode = NL802154_CCA_ENERGY;
 
        hw->phy->current_page = 0;
        hw->phy->current_channel = 11;  /* reset default */
@@ -647,6 +733,7 @@ static int atusb_probe(struct usb_interface *interface,
        hw->phy->supported.tx_powers_size = ARRAY_SIZE(atusb_powers);
        hw->phy->transmit_power = hw->phy->supported.tx_powers[0];
        ieee802154_random_extended_addr(&hw->phy->perm_extended_addr);
+       hw->phy->cca_ed_level = hw->phy->supported.cca_ed_levels[7];
 
        atusb_command(atusb, ATUSB_RF_RESET, 0);
        atusb_get_and_show_chip(atusb);
index 764a2bd..f446db8 100644 (file)
@@ -61,6 +61,7 @@
 #define REG_TXBCON0    0x1A
 #define REG_TXNCON     0x1B  /* Transmit Normal FIFO Control */
 #define BIT_TXNTRIG    BIT(0)
+#define BIT_TXNSECEN   BIT(1)
 #define BIT_TXNACKREQ  BIT(2)
 
 #define REG_TXG1CON    0x1C
 #define REG_INTSTAT    0x31  /* Interrupt Status */
 #define BIT_TXNIF      BIT(0)
 #define BIT_RXIF       BIT(3)
+#define BIT_SECIF      BIT(4)
+#define BIT_SECIGNORE  BIT(7)
 
 #define REG_INTCON     0x32  /* Interrupt Control */
 #define BIT_TXNIE      BIT(0)
 #define BIT_RXIE       BIT(3)
+#define BIT_SECIE      BIT(4)
 
 #define REG_GPIO       0x33  /* GPIO */
 #define REG_TRISGPIO   0x34  /* GPIO direction */
@@ -548,6 +552,9 @@ static void write_tx_buf_complete(void *context)
        u8 val = BIT_TXNTRIG;
        int ret;
 
+       if (ieee802154_is_secen(fc))
+               val |= BIT_TXNSECEN;
+
        if (ieee802154_is_ackreq(fc))
                val |= BIT_TXNACKREQ;
 
@@ -616,7 +623,7 @@ static int mrf24j40_start(struct ieee802154_hw *hw)
 
        /* Clear TXNIE and RXIE. Enable interrupts */
        return regmap_update_bits(devrec->regmap_short, REG_INTCON,
-                                 BIT_TXNIE | BIT_RXIE, 0);
+                                 BIT_TXNIE | BIT_RXIE | BIT_SECIE, 0);
 }
 
 static void mrf24j40_stop(struct ieee802154_hw *hw)
@@ -1025,6 +1032,11 @@ static void mrf24j40_intstat_complete(void *context)
 
        enable_irq(devrec->spi->irq);
 
+       /* Ignore Rx security decryption */
+       if (intstat & BIT_SECIF)
+               regmap_write_async(devrec->regmap_short, REG_SECCON0,
+                                  BIT_SECIGNORE);
+
        /* Check for TX complete */
        if (intstat & BIT_TXNIF)
                ieee802154_xmit_complete(devrec->hw, devrec->tx_skb, false);
index cc56fac..66c0eea 100644 (file)
@@ -196,6 +196,7 @@ static const struct net_device_ops ifb_netdev_ops = {
 
 #define IFB_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG  | NETIF_F_FRAGLIST | \
                      NETIF_F_TSO_ECN | NETIF_F_TSO | NETIF_F_TSO6      | \
+                     NETIF_F_GSO_ENCAP_ALL                             | \
                      NETIF_F_HIGHDMA | NETIF_F_HW_VLAN_CTAG_TX         | \
                      NETIF_F_HW_VLAN_STAG_TX)
 
@@ -224,6 +225,8 @@ static void ifb_setup(struct net_device *dev)
        dev->tx_queue_len = TX_Q_LIMIT;
 
        dev->features |= IFB_FEATURES;
+       dev->hw_features |= dev->features;
+       dev->hw_enc_features |= dev->features;
        dev->vlan_features |= IFB_FEATURES & ~(NETIF_F_HW_VLAN_CTAG_TX |
                                               NETIF_F_HW_VLAN_STAG_TX);
 
index 57941d3..1c4d395 100644 (file)
@@ -113,6 +113,7 @@ static int ipvlan_init(struct net_device *dev)
 {
        struct ipvl_dev *ipvlan = netdev_priv(dev);
        const struct net_device *phy_dev = ipvlan->phy_dev;
+       struct ipvl_port *port = ipvlan->port;
 
        dev->state = (dev->state & ~IPVLAN_STATE_MASK) |
                     (phy_dev->state & IPVLAN_STATE_MASK);
@@ -128,6 +129,8 @@ static int ipvlan_init(struct net_device *dev)
        if (!ipvlan->pcpu_stats)
                return -ENOMEM;
 
+       port->count += 1;
+
        return 0;
 }
 
@@ -481,27 +484,21 @@ static int ipvlan_link_new(struct net *src_net, struct net_device *dev,
 
        dev->priv_flags |= IFF_IPVLAN_SLAVE;
 
-       port->count += 1;
        err = register_netdevice(dev);
        if (err < 0)
-               goto ipvlan_destroy_port;
+               return err;
 
        err = netdev_upper_dev_link(phy_dev, dev);
-       if (err)
-               goto ipvlan_destroy_port;
+       if (err) {
+               unregister_netdevice(dev);
+               return err;
+       }
 
        list_add_tail_rcu(&ipvlan->pnode, &port->ipvlans);
        ipvlan_set_port_mode(port, mode);
 
        netif_stacked_transfer_operstate(phy_dev, dev);
        return 0;
-
-ipvlan_destroy_port:
-       port->count -= 1;
-       if (!port->count)
-               ipvlan_port_destroy(phy_dev);
-
-       return err;
 }
 
 static void ipvlan_link_delete(struct net_device *dev, struct list_head *head)
index 64bb44d..c285eaf 100644 (file)
@@ -1427,7 +1427,7 @@ static netdev_tx_t ali_ircc_fir_hard_xmit(struct sk_buff *skb,
                /* Check for empty frame */
                if (!skb->len) {
                        ali_ircc_change_speed(self, speed); 
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                        spin_unlock_irqrestore(&self->lock, flags);
                        dev_kfree_skb(skb);
                        return NETDEV_TX_OK;
@@ -1533,7 +1533,7 @@ static netdev_tx_t ali_ircc_fir_hard_xmit(struct sk_buff *skb,
        /* Restore bank register */
        switch_bank(iobase, BANK0);
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        spin_unlock_irqrestore(&self->lock, flags);
        dev_kfree_skb(skb);
 
@@ -1946,7 +1946,7 @@ static netdev_tx_t ali_ircc_sir_hard_xmit(struct sk_buff *skb,
                /* Check for empty frame */
                if (!skb->len) {
                        ali_ircc_change_speed(self, speed); 
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                        spin_unlock_irqrestore(&self->lock, flags);
                        dev_kfree_skb(skb);
                        return NETDEV_TX_OK;
@@ -1966,7 +1966,7 @@ static netdev_tx_t ali_ircc_sir_hard_xmit(struct sk_buff *skb,
        /* Turn on transmit finished interrupt. Will fire immediately!  */
        outb(UART_IER_THRI, iobase+UART_IER); 
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        spin_unlock_irqrestore(&self->lock, flags);
 
        dev_kfree_skb(skb);
index 303c4bd..be5bb0b 100644 (file)
@@ -531,7 +531,7 @@ static void bfin_sir_send_work(struct work_struct *work)
        bfin_sir_dma_tx_chars(dev);
 #endif
        bfin_sir_enable_tx(port);
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 }
 
 static int bfin_sir_hard_xmit(struct sk_buff *skb, struct net_device *dev)
index 25f2196..a198946 100644 (file)
@@ -429,7 +429,7 @@ static netdev_tx_t irda_usb_hard_xmit(struct sk_buff *skb,
                         * do an extra memcpy and increment packet counters...
                         * Jean II */
                        irda_usb_change_speed_xbofs(self);
-                       netdev->trans_start = jiffies;
+                       netif_trans_update(netdev);
                        /* Will netif_wake_queue() in callback */
                        goto drop;
                }
@@ -526,7 +526,7 @@ static netdev_tx_t irda_usb_hard_xmit(struct sk_buff *skb,
                netdev->stats.tx_packets++;
                 netdev->stats.tx_bytes += skb->len;
                
-               netdev->trans_start = jiffies;
+               netif_trans_update(netdev);
        }
        spin_unlock_irqrestore(&self->lock, flags);
        
index dc0dbd8..9ef13d8 100644 (file)
@@ -1399,7 +1399,7 @@ static netdev_tx_t nsc_ircc_hard_xmit_sir(struct sk_buff *skb,
                                 * to make sure packets gets through the
                                 * proper xmit handler - Jean II */
                        }
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                        spin_unlock_irqrestore(&self->lock, flags);
                        dev_kfree_skb(skb);
                        return NETDEV_TX_OK;
@@ -1424,7 +1424,7 @@ static netdev_tx_t nsc_ircc_hard_xmit_sir(struct sk_buff *skb,
        /* Restore bank register */
        outb(bank, iobase+BSR);
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        spin_unlock_irqrestore(&self->lock, flags);
 
        dev_kfree_skb(skb);
@@ -1470,7 +1470,7 @@ static netdev_tx_t nsc_ircc_hard_xmit_fir(struct sk_buff *skb,
                                 * the speed change has been done.
                                 * Jean II */
                        }
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                        spin_unlock_irqrestore(&self->lock, flags);
                        dev_kfree_skb(skb);
                        return NETDEV_TX_OK;
@@ -1553,7 +1553,7 @@ static netdev_tx_t nsc_ircc_hard_xmit_fir(struct sk_buff *skb,
        /* Restore bank register */
        outb(bank, iobase+BSR);
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        spin_unlock_irqrestore(&self->lock, flags);
        dev_kfree_skb(skb);
 
index b455ffe..dcf92ba 100644 (file)
@@ -862,7 +862,7 @@ static void smsc_ircc_timeout(struct net_device *dev)
        spin_lock_irqsave(&self->lock, flags);
        smsc_ircc_sir_start(self);
        smsc_ircc_change_speed(self, self->io.speed);
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
        netif_wake_queue(dev);
        spin_unlock_irqrestore(&self->lock, flags);
 }
index 83cc48a..42da094 100644 (file)
@@ -718,7 +718,7 @@ static void stir_send(struct stir_cb *stir, struct sk_buff *skb)
 
        stir->netdev->stats.tx_packets++;
        stir->netdev->stats.tx_bytes += skb->len;
-       stir->netdev->trans_start = jiffies;
+       netif_trans_update(stir->netdev);
        pr_debug("send %d (%d)\n", skb->len, wraplen);
 
        if (usb_bulk_msg(stir->usbdev, usb_sndbulkpipe(stir->usbdev, 1),
index 6960d4c..ca4442a 100644 (file)
@@ -774,7 +774,7 @@ static netdev_tx_t via_ircc_hard_xmit_sir(struct sk_buff *skb,
                /* Check for empty frame */
                if (!skb->len) {
                        via_ircc_change_speed(self, speed);
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                        dev_kfree_skb(skb);
                        return NETDEV_TX_OK;
                } else
@@ -821,7 +821,7 @@ static netdev_tx_t via_ircc_hard_xmit_sir(struct sk_buff *skb,
        RXStart(iobase, OFF);
        TXStart(iobase, ON);
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        spin_unlock_irqrestore(&self->lock, flags);
        dev_kfree_skb(skb);
        return NETDEV_TX_OK;
@@ -849,7 +849,7 @@ static netdev_tx_t via_ircc_hard_xmit_fir(struct sk_buff *skb,
        if ((speed != self->io.speed) && (speed != -1)) {
                if (!skb->len) {
                        via_ircc_change_speed(self, speed);
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                        dev_kfree_skb(skb);
                        return NETDEV_TX_OK;
                } else
@@ -869,7 +869,7 @@ static netdev_tx_t via_ircc_hard_xmit_fir(struct sk_buff *skb,
        via_ircc_dma_xmit(self, iobase);
 //F01   }
 //F01   if (self->tx_fifo.free < (MAX_TX_WINDOW -1 )) netif_wake_queue(self->netdev);
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        dev_kfree_skb(skb);
        spin_unlock_irqrestore(&self->lock, flags);
        return NETDEV_TX_OK;
index 84d3e5c..3add2c4 100644 (file)
@@ -880,12 +880,12 @@ static struct sk_buff *macsec_decrypt(struct sk_buff *skb,
        macsec_skb_cb(skb)->valid = false;
        skb = skb_share_check(skb, GFP_ATOMIC);
        if (!skb)
-               return NULL;
+               return ERR_PTR(-ENOMEM);
 
        req = aead_request_alloc(rx_sa->key.tfm, GFP_ATOMIC);
        if (!req) {
                kfree_skb(skb);
-               return NULL;
+               return ERR_PTR(-ENOMEM);
        }
 
        hdr = (struct macsec_eth_header *)skb->data;
@@ -905,7 +905,7 @@ static struct sk_buff *macsec_decrypt(struct sk_buff *skb,
                skb = skb_unshare(skb, GFP_ATOMIC);
                if (!skb) {
                        aead_request_free(req);
-                       return NULL;
+                       return ERR_PTR(-ENOMEM);
                }
        } else {
                /* integrity only: all headers + data authenticated */
@@ -921,14 +921,14 @@ static struct sk_buff *macsec_decrypt(struct sk_buff *skb,
        dev_hold(dev);
        ret = crypto_aead_decrypt(req);
        if (ret == -EINPROGRESS) {
-               return NULL;
+               return ERR_PTR(ret);
        } else if (ret != 0) {
                /* decryption/authentication failed
                 * 10.6 if validateFrames is disabled, deliver anyway
                 */
                if (ret != -EBADMSG) {
                        kfree_skb(skb);
-                       skb = NULL;
+                       skb = ERR_PTR(ret);
                }
        } else {
                macsec_skb_cb(skb)->valid = true;
@@ -1146,8 +1146,10 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
            secy->validate_frames != MACSEC_VALIDATE_DISABLED)
                skb = macsec_decrypt(skb, dev, rx_sa, sci, secy);
 
-       if (!skb) {
-               macsec_rxsa_put(rx_sa);
+       if (IS_ERR(skb)) {
+               /* the decrypt callback needs the reference */
+               if (PTR_ERR(skb) != -EINPROGRESS)
+                       macsec_rxsa_put(rx_sa);
                rcu_read_unlock();
                *pskb = NULL;
                return RX_HANDLER_CONSUMED;
@@ -1161,7 +1163,8 @@ deliver:
                            macsec_extra_len(macsec_skb_cb(skb)->has_sci));
        macsec_reset_skb(skb, secy->netdev);
 
-       macsec_rxsa_put(rx_sa);
+       if (rx_sa)
+               macsec_rxsa_put(rx_sa);
        count_rx(dev, skb->len);
 
        rcu_read_unlock();
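
The macsec_decrypt() change in the hunks above exists to distinguish "failed" from "asynchronously in progress": returning NULL collapsed both cases, so the caller dropped the rx_sa reference that the async decrypt completion still needed. Encoding the errno in the returned pointer lets macsec_handle_frame() keep the reference only for -EINPROGRESS. A userspace model of the ERR_PTR convention, simplified from include/linux/err.h:

    #include <stdio.h>
    #include <errno.h>
    #include <stdint.h>

    #define MAX_ERRNO 4095

    static void *ERR_PTR(long err) { return (void *)(intptr_t)err; }
    static long PTR_ERR(const void *p) { return (long)(intptr_t)p; }
    static int IS_ERR(const void *p)
    {
        /* errnos occupy the top page of the address space */
        return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
    }

    int main(void)
    {
        void *skb = ERR_PTR(-EINPROGRESS);

        if (IS_ERR(skb) && PTR_ERR(skb) == -EINPROGRESS)
            printf("decrypt pending: keep the rx_sa reference\n");
        return 0;
    }
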
@@ -1405,9 +1408,10 @@ static sci_t nla_get_sci(const struct nlattr *nla)
        return (__force sci_t)nla_get_u64(nla);
 }
 
-static int nla_put_sci(struct sk_buff *skb, int attrtype, sci_t value)
+static int nla_put_sci(struct sk_buff *skb, int attrtype, sci_t value,
+                      int padattr)
 {
-       return nla_put_u64(skb, attrtype, (__force u64)value);
+       return nla_put_u64_64bit(skb, attrtype, (__force u64)value, padattr);
 }
 
 static struct macsec_tx_sa *get_txsa_from_nl(struct net *net,
@@ -1622,8 +1626,9 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
        }
 
        rx_sa = kmalloc(sizeof(*rx_sa), GFP_KERNEL);
-       if (init_rx_sa(rx_sa, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]), secy->key_len,
-                      secy->icv_len)) {
+       if (!rx_sa || init_rx_sa(rx_sa, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
+                                secy->key_len, secy->icv_len)) {
+               kfree(rx_sa);
                rtnl_unlock();
                return -ENOMEM;
        }
@@ -1768,6 +1773,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
        tx_sa = kmalloc(sizeof(*tx_sa), GFP_KERNEL);
        if (!tx_sa || init_tx_sa(tx_sa, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
                                 secy->key_len, secy->icv_len)) {
+               kfree(tx_sa);
                rtnl_unlock();
                return -ENOMEM;
        }
@@ -2131,16 +2137,36 @@ static int copy_rx_sc_stats(struct sk_buff *skb,
                sum.InPktsUnusedSA    += tmp.InPktsUnusedSA;
        }
 
-       if (nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_OCTETS_VALIDATED, sum.InOctetsValidated) ||
-           nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_OCTETS_DECRYPTED, sum.InOctetsDecrypted) ||
-           nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_UNCHECKED, sum.InPktsUnchecked) ||
-           nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_DELAYED, sum.InPktsDelayed) ||
-           nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_OK, sum.InPktsOK) ||
-           nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_INVALID, sum.InPktsInvalid) ||
-           nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_LATE, sum.InPktsLate) ||
-           nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_NOT_VALID, sum.InPktsNotValid) ||
-           nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_NOT_USING_SA, sum.InPktsNotUsingSA) ||
-           nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_UNUSED_SA, sum.InPktsUnusedSA))
+       if (nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_OCTETS_VALIDATED,
+                             sum.InOctetsValidated,
+                             MACSEC_RXSC_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_OCTETS_DECRYPTED,
+                             sum.InOctetsDecrypted,
+                             MACSEC_RXSC_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_UNCHECKED,
+                             sum.InPktsUnchecked,
+                             MACSEC_RXSC_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_DELAYED,
+                             sum.InPktsDelayed,
+                             MACSEC_RXSC_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_OK,
+                             sum.InPktsOK,
+                             MACSEC_RXSC_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_INVALID,
+                             sum.InPktsInvalid,
+                             MACSEC_RXSC_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_LATE,
+                             sum.InPktsLate,
+                             MACSEC_RXSC_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_NOT_VALID,
+                             sum.InPktsNotValid,
+                             MACSEC_RXSC_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_NOT_USING_SA,
+                             sum.InPktsNotUsingSA,
+                             MACSEC_RXSC_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_UNUSED_SA,
+                             sum.InPktsUnusedSA,
+                             MACSEC_RXSC_STATS_ATTR_PAD))
                return -EMSGSIZE;
 
        return 0;
@@ -2169,10 +2195,18 @@ static int copy_tx_sc_stats(struct sk_buff *skb,
                sum.OutOctetsEncrypted += tmp.OutOctetsEncrypted;
        }
 
-       if (nla_put_u64(skb, MACSEC_TXSC_STATS_ATTR_OUT_PKTS_PROTECTED, sum.OutPktsProtected) ||
-           nla_put_u64(skb, MACSEC_TXSC_STATS_ATTR_OUT_PKTS_ENCRYPTED, sum.OutPktsEncrypted) ||
-           nla_put_u64(skb, MACSEC_TXSC_STATS_ATTR_OUT_OCTETS_PROTECTED, sum.OutOctetsProtected) ||
-           nla_put_u64(skb, MACSEC_TXSC_STATS_ATTR_OUT_OCTETS_ENCRYPTED, sum.OutOctetsEncrypted))
+       if (nla_put_u64_64bit(skb, MACSEC_TXSC_STATS_ATTR_OUT_PKTS_PROTECTED,
+                             sum.OutPktsProtected,
+                             MACSEC_TXSC_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_TXSC_STATS_ATTR_OUT_PKTS_ENCRYPTED,
+                             sum.OutPktsEncrypted,
+                             MACSEC_TXSC_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_TXSC_STATS_ATTR_OUT_OCTETS_PROTECTED,
+                             sum.OutOctetsProtected,
+                             MACSEC_TXSC_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_TXSC_STATS_ATTR_OUT_OCTETS_ENCRYPTED,
+                             sum.OutOctetsEncrypted,
+                             MACSEC_TXSC_STATS_ATTR_PAD))
                return -EMSGSIZE;
 
        return 0;
@@ -2205,14 +2239,30 @@ static int copy_secy_stats(struct sk_buff *skb,
                sum.InPktsOverrun    += tmp.InPktsOverrun;
        }
 
-       if (nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_OUT_PKTS_UNTAGGED, sum.OutPktsUntagged) ||
-           nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_UNTAGGED, sum.InPktsUntagged) ||
-           nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_OUT_PKTS_TOO_LONG, sum.OutPktsTooLong) ||
-           nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_NO_TAG, sum.InPktsNoTag) ||
-           nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_BAD_TAG, sum.InPktsBadTag) ||
-           nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_UNKNOWN_SCI, sum.InPktsUnknownSCI) ||
-           nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_NO_SCI, sum.InPktsNoSCI) ||
-           nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_OVERRUN, sum.InPktsOverrun))
+       if (nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_OUT_PKTS_UNTAGGED,
+                             sum.OutPktsUntagged,
+                             MACSEC_SECY_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_UNTAGGED,
+                             sum.InPktsUntagged,
+                             MACSEC_SECY_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_OUT_PKTS_TOO_LONG,
+                             sum.OutPktsTooLong,
+                             MACSEC_SECY_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_NO_TAG,
+                             sum.InPktsNoTag,
+                             MACSEC_SECY_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_BAD_TAG,
+                             sum.InPktsBadTag,
+                             MACSEC_SECY_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_UNKNOWN_SCI,
+                             sum.InPktsUnknownSCI,
+                             MACSEC_SECY_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_NO_SCI,
+                             sum.InPktsNoSCI,
+                             MACSEC_SECY_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_OVERRUN,
+                             sum.InPktsOverrun,
+                             MACSEC_SECY_STATS_ATTR_PAD))
                return -EMSGSIZE;
 
        return 0;
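
These stats hunks swap nla_put_u64() for nla_put_u64_64bit(), which takes an extra pad attribute type: the 4-byte netlink attribute header can leave a u64 payload on a 4-byte boundary, which some architectures cannot access atomically, so a throwaway *_PAD attribute is emitted first when needed. A sketch of the alignment check (offsets hypothetical):

    #include <stdio.h>

    #define NLA_HDRLEN 4    /* struct nlattr: __u16 len + __u16 type */

    int main(void)
    {
        unsigned tail = 16;                    /* hypothetical message tail */
        unsigned payload = tail + NLA_HDRLEN;  /* where the u64 would start */

        if (payload % 8)
            printf("u64 at offset %u: emit a PAD attr first\n", payload);
        else
            printf("u64 at offset %u: already aligned\n", payload);
        return 0;
    }
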
@@ -2226,8 +2276,11 @@ static int nla_put_secy(struct macsec_secy *secy, struct sk_buff *skb)
        if (!secy_nest)
                return 1;
 
-       if (nla_put_sci(skb, MACSEC_SECY_ATTR_SCI, secy->sci) ||
-           nla_put_u64(skb, MACSEC_SECY_ATTR_CIPHER_SUITE, DEFAULT_CIPHER_ID) ||
+       if (nla_put_sci(skb, MACSEC_SECY_ATTR_SCI, secy->sci,
+                       MACSEC_SECY_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, MACSEC_SECY_ATTR_CIPHER_SUITE,
+                             MACSEC_DEFAULT_CIPHER_ID,
+                             MACSEC_SECY_ATTR_PAD) ||
            nla_put_u8(skb, MACSEC_SECY_ATTR_ICV_LEN, secy->icv_len) ||
            nla_put_u8(skb, MACSEC_SECY_ATTR_OPER, secy->operational) ||
            nla_put_u8(skb, MACSEC_SECY_ATTR_PROTECT, secy->protect_frames) ||
@@ -2268,7 +2321,7 @@ static int dump_secy(struct macsec_secy *secy, struct net_device *dev,
        if (!hdr)
                return -EMSGSIZE;
 
-       rtnl_lock();
+       genl_dump_check_consistent(cb, hdr, &macsec_fam);
 
        if (nla_put_u32(skb, MACSEC_ATTR_IFINDEX, dev->ifindex))
                goto nla_put_failure;
@@ -2312,7 +2365,9 @@ static int dump_secy(struct macsec_secy *secy, struct net_device *dev,
 
                if (nla_put_u8(skb, MACSEC_SA_ATTR_AN, i) ||
                    nla_put_u32(skb, MACSEC_SA_ATTR_PN, tx_sa->next_pn) ||
-                   nla_put_u64(skb, MACSEC_SA_ATTR_KEYID, tx_sa->key.id) ||
+                   nla_put_u64_64bit(skb, MACSEC_SA_ATTR_KEYID,
+                                     tx_sa->key.id,
+                                     MACSEC_SA_ATTR_PAD) ||
                    nla_put_u8(skb, MACSEC_SA_ATTR_ACTIVE, tx_sa->active)) {
                        nla_nest_cancel(skb, txsa_nest);
                        nla_nest_cancel(skb, txsa_list);
@@ -2353,7 +2408,8 @@ static int dump_secy(struct macsec_secy *secy, struct net_device *dev,
                }
 
                if (nla_put_u8(skb, MACSEC_RXSC_ATTR_ACTIVE, rx_sc->active) ||
-                   nla_put_sci(skb, MACSEC_RXSC_ATTR_SCI, rx_sc->sci)) {
+                   nla_put_sci(skb, MACSEC_RXSC_ATTR_SCI, rx_sc->sci,
+                               MACSEC_RXSC_ATTR_PAD)) {
                        nla_nest_cancel(skb, rxsc_nest);
                        nla_nest_cancel(skb, rxsc_list);
                        goto nla_put_failure;
@@ -2413,7 +2469,9 @@ static int dump_secy(struct macsec_secy *secy, struct net_device *dev,
 
                        if (nla_put_u8(skb, MACSEC_SA_ATTR_AN, i) ||
                            nla_put_u32(skb, MACSEC_SA_ATTR_PN, rx_sa->next_pn) ||
-                           nla_put_u64(skb, MACSEC_SA_ATTR_KEYID, rx_sa->key.id) ||
+                           nla_put_u64_64bit(skb, MACSEC_SA_ATTR_KEYID,
+                                             rx_sa->key.id,
+                                             MACSEC_SA_ATTR_PAD) ||
                            nla_put_u8(skb, MACSEC_SA_ATTR_ACTIVE, rx_sa->active)) {
                                nla_nest_cancel(skb, rxsa_nest);
                                nla_nest_cancel(skb, rxsc_nest);
@@ -2429,18 +2487,17 @@ static int dump_secy(struct macsec_secy *secy, struct net_device *dev,
 
        nla_nest_end(skb, rxsc_list);
 
-       rtnl_unlock();
-
        genlmsg_end(skb, hdr);
 
        return 0;
 
 nla_put_failure:
-       rtnl_unlock();
        genlmsg_cancel(skb, hdr);
        return -EMSGSIZE;
 }
 
+static int macsec_generation = 1; /* protected by RTNL */
+
 static int macsec_dump_txsc(struct sk_buff *skb, struct netlink_callback *cb)
 {
        struct net *net = sock_net(skb->sk);
@@ -2450,6 +2507,10 @@ static int macsec_dump_txsc(struct sk_buff *skb, struct netlink_callback *cb)
        dev_idx = cb->args[0];
 
        d = 0;
+       rtnl_lock();
+
+       cb->seq = macsec_generation;
+
        for_each_netdev(net, dev) {
                struct macsec_secy *secy;
 
@@ -2467,6 +2528,7 @@ next:
        }
 
 done:
+       rtnl_unlock();
        cb->args[0] = d;
        return skb->len;
 }
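
Rather than holding RTNL for the duration of dump_secy(), the dump path now takes it around the loop and stamps each dump with macsec_generation, which macsec_newlink()/macsec_dellink() bump; genl_dump_check_consistent() can then flag multi-part dumps that raced with a topology change. A minimal model of that generation-counter idea (names mirror the hunks above, logic simplified):

    #include <stdio.h>

    static int macsec_generation = 1;   /* bumped under RTNL on add/del */

    struct dump_ctx { int seq; };

    static void dump_start(struct dump_ctx *cb) { cb->seq = macsec_generation; }
    static int dump_consistent(const struct dump_ctx *cb)
    {
        return cb->seq == macsec_generation;
    }

    int main(void)
    {
        struct dump_ctx cb;

        dump_start(&cb);        /* cb->seq = macsec_generation */
        macsec_generation++;    /* a link was added or removed mid-dump */
        printf("dump consistent: %s\n",
               dump_consistent(&cb) ? "yes" : "no (userspace should re-dump)");
        return 0;
    }
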
@@ -2826,7 +2888,7 @@ static void macsec_free_netdev(struct net_device *dev)
 static void macsec_setup(struct net_device *dev)
 {
        ether_setup(dev);
-       dev->tx_queue_len = 0;
+       dev->priv_flags |= IFF_NO_QUEUE;
        dev->netdev_ops = &macsec_netdev_ops;
        dev->destructor = macsec_free_netdev;
 
@@ -2920,10 +2982,14 @@ static void macsec_dellink(struct net_device *dev, struct list_head *head)
        struct net_device *real_dev = macsec->real_dev;
        struct macsec_rxh_data *rxd = macsec_data_rtnl(real_dev);
 
+       macsec_generation++;
+
        unregister_netdevice_queue(dev, head);
        list_del_rcu(&macsec->secys);
-       if (list_empty(&rxd->secys))
+       if (list_empty(&rxd->secys)) {
                netdev_rx_handler_unregister(real_dev);
+               kfree(rxd);
+       }
 
        macsec_del_dev(macsec);
 }
@@ -2945,8 +3011,10 @@ static int register_macsec_dev(struct net_device *real_dev,
 
                err = netdev_rx_handler_register(real_dev, macsec_handle_frame,
                                                 rxd);
-               if (err < 0)
+               if (err < 0) {
+                       kfree(rxd);
                        return err;
+               }
        }
 
        list_add_tail_rcu(&macsec->secys, &rxd->secys);
@@ -3066,6 +3134,8 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
        if (err < 0)
                goto del_dev;
 
+       macsec_generation++;
+
        dev_hold(real_dev);
 
        return 0;
@@ -3079,7 +3149,7 @@ unregister:
 
 static int macsec_validate_attr(struct nlattr *tb[], struct nlattr *data[])
 {
-       u64 csid = DEFAULT_CIPHER_ID;
+       u64 csid = MACSEC_DEFAULT_CIPHER_ID;
        u8 icv_len = DEFAULT_ICV_LEN;
        int flag;
        bool es, scb, sci;
@@ -3094,8 +3164,8 @@ static int macsec_validate_attr(struct nlattr *tb[], struct nlattr *data[])
                icv_len = nla_get_u8(data[IFLA_MACSEC_ICV_LEN]);
 
        switch (csid) {
-       case DEFAULT_CIPHER_ID:
-       case DEFAULT_CIPHER_ALT:
+       case MACSEC_DEFAULT_CIPHER_ID:
+       case MACSEC_DEFAULT_CIPHER_ALT:
                if (icv_len < MACSEC_MIN_ICV_LEN ||
                    icv_len > MACSEC_MAX_ICV_LEN)
                        return -EINVAL;
@@ -3129,8 +3199,8 @@ static int macsec_validate_attr(struct nlattr *tb[], struct nlattr *data[])
            nla_get_u8(data[IFLA_MACSEC_VALIDATION]) > MACSEC_VALIDATE_MAX)
                return -EINVAL;
 
-       if ((data[IFLA_MACSEC_PROTECT] &&
-            nla_get_u8(data[IFLA_MACSEC_PROTECT])) &&
+       if ((data[IFLA_MACSEC_REPLAY_PROTECT] &&
+            nla_get_u8(data[IFLA_MACSEC_REPLAY_PROTECT])) &&
            !data[IFLA_MACSEC_WINDOW])
                return -EINVAL;
 
@@ -3145,9 +3215,9 @@ static struct net *macsec_get_link_net(const struct net_device *dev)
 static size_t macsec_get_size(const struct net_device *dev)
 {
        return 0 +
-               nla_total_size(8) + /* SCI */
+               nla_total_size_64bit(8) + /* SCI */
                nla_total_size(1) + /* ICV_LEN */
-               nla_total_size(8) + /* CIPHER_SUITE */
+               nla_total_size_64bit(8) + /* CIPHER_SUITE */
                nla_total_size(4) + /* WINDOW */
                nla_total_size(1) + /* ENCODING_SA */
                nla_total_size(1) + /* ENCRYPT */
@@ -3166,9 +3236,11 @@ static int macsec_fill_info(struct sk_buff *skb,
        struct macsec_secy *secy = &macsec_priv(dev)->secy;
        struct macsec_tx_sc *tx_sc = &secy->tx_sc;
 
-       if (nla_put_sci(skb, IFLA_MACSEC_SCI, secy->sci) ||
+       if (nla_put_sci(skb, IFLA_MACSEC_SCI, secy->sci,
+                       IFLA_MACSEC_PAD) ||
            nla_put_u8(skb, IFLA_MACSEC_ICV_LEN, secy->icv_len) ||
-           nla_put_u64(skb, IFLA_MACSEC_CIPHER_SUITE, DEFAULT_CIPHER_ID) ||
+           nla_put_u64_64bit(skb, IFLA_MACSEC_CIPHER_SUITE,
+                             MACSEC_DEFAULT_CIPHER_ID, IFLA_MACSEC_PAD) ||
            nla_put_u8(skb, IFLA_MACSEC_ENCODING_SA, tx_sc->encoding_sa) ||
            nla_put_u8(skb, IFLA_MACSEC_ENCRYPT, tx_sc->encrypt) ||
            nla_put_u8(skb, IFLA_MACSEC_PROTECT, secy->protect_frames) ||
index 2bcf1f3..cb01023 100644 (file)
@@ -795,6 +795,7 @@ static int macvlan_init(struct net_device *dev)
 {
        struct macvlan_dev *vlan = netdev_priv(dev);
        const struct net_device *lowerdev = vlan->lowerdev;
+       struct macvlan_port *port = vlan->port;
 
        dev->state              = (dev->state & ~MACVLAN_STATE_MASK) |
                                  (lowerdev->state & MACVLAN_STATE_MASK);
@@ -812,6 +813,8 @@ static int macvlan_init(struct net_device *dev)
        if (!vlan->pcpu_stats)
                return -ENOMEM;
 
+       port->count += 1;
+
        return 0;
 }
 
@@ -1312,10 +1315,9 @@ int macvlan_common_newlink(struct net *src_net, struct net_device *dev,
                        return err;
        }
 
-       port->count += 1;
        err = register_netdevice(dev);
        if (err < 0)
-               goto destroy_port;
+               return err;
 
        dev->priv_flags |= IFF_MACVLAN;
        err = netdev_upper_dev_link(lowerdev, dev);
@@ -1330,10 +1332,6 @@ int macvlan_common_newlink(struct net *src_net, struct net_device *dev,
 
 unregister_netdev:
        unregister_netdevice(dev);
-destroy_port:
-       port->count -= 1;
-       if (!port->count)
-               macvlan_port_destroy(lowerdev);
 
        return err;
 }
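
Both this macvlan hunk and the ipvlan hunk above move `port->count += 1` from the newlink path into ndo_init, which register_netdevice() invokes; assuming the drivers' uninit paths drop the count (as the removed destroy-port error labels suggest), every failure that unwinds through ndo_uninit then releases exactly the reference that was taken. A sketch of that acquire/release symmetry with hypothetical names:

    #include <stdio.h>

    struct port { int count; };

    static int vlan_init(struct port *p) { p->count += 1; return 0; }
    static void vlan_uninit(struct port *p)
    {
        p->count -= 1;
        if (!p->count)
            printf("last device gone: destroy port\n");
    }

    int main(void)
    {
        struct port p = { 0 };

        vlan_init(&p);      /* register_netdevice() -> ndo_init */
        vlan_uninit(&p);    /* error unwind / unregister -> ndo_uninit */
        return 0;
    }
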
index 95394ed..22b85b0 100644 (file)
@@ -129,7 +129,18 @@ static DEFINE_MUTEX(minor_lock);
 static DEFINE_IDR(minor_idr);
 
 #define GOODCOPY_LEN 128
-static struct class *macvtap_class;
+static const void *macvtap_net_namespace(struct device *d)
+{
+       struct net_device *dev = to_net_dev(d->parent);
+       return dev_net(dev);
+}
+
+static struct class macvtap_class = {
+       .name = "macvtap",
+       .owner = THIS_MODULE,
+       .ns_type = &net_ns_type_operations,
+       .namespace = macvtap_net_namespace,
+};
 static struct cdev macvtap_cdev;
 
 static const struct proto_ops macvtap_socket_ops;
@@ -1278,10 +1289,12 @@ static int macvtap_device_event(struct notifier_block *unused,
        struct device *classdev;
        dev_t devt;
        int err;
+       char tap_name[IFNAMSIZ];
 
        if (dev->rtnl_link_ops != &macvtap_link_ops)
                return NOTIFY_DONE;
 
+       snprintf(tap_name, IFNAMSIZ, "tap%d", dev->ifindex);
        vlan = netdev_priv(dev);
 
        switch (event) {
@@ -1295,16 +1308,24 @@ static int macvtap_device_event(struct notifier_block *unused,
                        return notifier_from_errno(err);
 
                devt = MKDEV(MAJOR(macvtap_major), vlan->minor);
-               classdev = device_create(macvtap_class, &dev->dev, devt,
-                                        dev, "tap%d", dev->ifindex);
+               classdev = device_create(&macvtap_class, &dev->dev, devt,
+                                        dev, tap_name);
                if (IS_ERR(classdev)) {
                        macvtap_free_minor(vlan);
                        return notifier_from_errno(PTR_ERR(classdev));
                }
+               err = sysfs_create_link(&dev->dev.kobj, &classdev->kobj,
+                                       tap_name);
+               if (err)
+                       return notifier_from_errno(err);
                break;
        case NETDEV_UNREGISTER:
+               /* vlan->minor == 0 if NETDEV_REGISTER above failed */
+               if (vlan->minor == 0)
+                       break;
+               sysfs_remove_link(&dev->dev.kobj, tap_name);
                devt = MKDEV(MAJOR(macvtap_major), vlan->minor);
-               device_destroy(macvtap_class, devt);
+               device_destroy(&macvtap_class, devt);
                macvtap_free_minor(vlan);
                break;
        }
@@ -1330,11 +1351,9 @@ static int macvtap_init(void)
        if (err)
                goto out2;
 
-       macvtap_class = class_create(THIS_MODULE, "macvtap");
-       if (IS_ERR(macvtap_class)) {
-               err = PTR_ERR(macvtap_class);
+       err = class_register(&macvtap_class);
+       if (err)
                goto out3;
-       }
 
        err = register_netdevice_notifier(&macvtap_notifier_block);
        if (err)
@@ -1349,7 +1368,7 @@ static int macvtap_init(void)
 out5:
        unregister_netdevice_notifier(&macvtap_notifier_block);
 out4:
-       class_unregister(macvtap_class);
+       class_unregister(&macvtap_class);
 out3:
        cdev_del(&macvtap_cdev);
 out2:
@@ -1363,7 +1382,7 @@ static void macvtap_exit(void)
 {
        rtnl_link_unregister(&macvtap_link_ops);
        unregister_netdevice_notifier(&macvtap_notifier_block);
-       class_unregister(macvtap_class);
+       class_unregister(&macvtap_class);
        cdev_del(&macvtap_cdev);
        unregister_chrdev_region(macvtap_major, MACVTAP_NUM_DEVS);
        idr_destroy(&minor_idr);
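
Converting macvtap from class_create() to a statically defined class is what makes room for the namespace hooks: with .ns_type and .namespace set, the driver core tags each class device with the netns of its parent netdev, so /sys/class/macvtap only shows taps belonging to the caller's namespace. A minimal sketch of the pattern (the "example" names are hypothetical):

	static const void *example_net_namespace(struct device *d)
	{
		/* tag the class device with the netns of its parent netdev */
		return dev_net(to_net_dev(d->parent));
	}

	static struct class example_class = {
		.name      = "example",
		.owner     = THIS_MODULE,
		.ns_type   = &net_ns_type_operations,
		.namespace = example_net_namespace,
	};

	static int __init example_init(void)
	{
		/* class_register()/class_unregister() replace the old
		 * class_create()/class_destroy() pair */
		return class_register(&example_class);
	}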
index b3ffaee..f279a89 100644
@@ -359,27 +359,25 @@ static void at803x_link_change_notify(struct phy_device *phydev)
         * in the FIFO. In such cases, the FIFO enters an error mode it
         * cannot recover from by software.
         */
-       if (phydev->drv->phy_id == ATH8030_PHY_ID) {
-               if (phydev->state == PHY_NOLINK) {
-                       if (priv->gpiod_reset && !priv->phy_reset) {
-                               struct at803x_context context;
-
-                               at803x_context_save(phydev, &context);
-
-                               gpiod_set_value(priv->gpiod_reset, 1);
-                               msleep(1);
-                               gpiod_set_value(priv->gpiod_reset, 0);
-                               msleep(1);
-
-                               at803x_context_restore(phydev, &context);
-
-                               phydev_dbg(phydev, "%s(): phy was reset\n",
-                                          __func__);
-                               priv->phy_reset = true;
-                       }
-               } else {
-                       priv->phy_reset = false;
+       if (phydev->state == PHY_NOLINK) {
+               if (priv->gpiod_reset && !priv->phy_reset) {
+                       struct at803x_context context;
+
+                       at803x_context_save(phydev, &context);
+
+                       gpiod_set_value(priv->gpiod_reset, 1);
+                       msleep(1);
+                       gpiod_set_value(priv->gpiod_reset, 0);
+                       msleep(1);
+
+                       at803x_context_restore(phydev, &context);
+
+                       phydev_dbg(phydev, "%s(): phy was reset\n",
+                                  __func__);
+                       priv->phy_reset = true;
                }
+       } else {
+               priv->phy_reset = false;
        }
 }
 
@@ -391,7 +389,6 @@ static struct phy_driver at803x_driver[] = {
        .phy_id_mask            = 0xffffffef,
        .probe                  = at803x_probe,
        .config_init            = at803x_config_init,
-       .link_change_notify     = at803x_link_change_notify,
        .set_wol                = at803x_set_wol,
        .get_wol                = at803x_get_wol,
        .suspend                = at803x_suspend,
@@ -427,7 +424,6 @@ static struct phy_driver at803x_driver[] = {
        .phy_id_mask            = 0xffffffef,
        .probe                  = at803x_probe,
        .config_init            = at803x_config_init,
-       .link_change_notify     = at803x_link_change_notify,
        .set_wol                = at803x_set_wol,
        .get_wol                = at803x_get_wol,
        .suspend                = at803x_suspend,
index fc07a88..9050f21 100644
@@ -328,7 +328,7 @@ struct phy_device *fixed_phy_register(unsigned int irq,
                return ERR_PTR(ret);
 
        phy = get_phy_device(fmb->mii_bus, phy_addr, false);
-       if (!phy || IS_ERR(phy)) {
+       if (IS_ERR(phy)) {
                fixed_phy_del(phy_addr);
                return ERR_PTR(-EINVAL);
        }
index 751202a..09deef4 100644
@@ -333,7 +333,7 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
                        struct phy_device *phydev;
 
                        phydev = mdiobus_scan(bus, i);
-                       if (IS_ERR(phydev)) {
+                       if (IS_ERR(phydev) && (PTR_ERR(phydev) != -ENODEV)) {
                                err = PTR_ERR(phydev);
                                goto error;
                        }
@@ -419,7 +419,7 @@ struct phy_device *mdiobus_scan(struct mii_bus *bus, int addr)
        int err;
 
        phydev = get_phy_device(bus, addr, false);
-       if (IS_ERR(phydev) || phydev == NULL)
+       if (IS_ERR(phydev))
                return phydev;
 
        /*
@@ -431,7 +431,7 @@ struct phy_device *mdiobus_scan(struct mii_bus *bus, int addr)
        err = phy_device_register(phydev);
        if (err) {
                phy_device_free(phydev);
-               return NULL;
+               return ERR_PTR(-ENODEV);
        }
 
        return phydev;
index 10e39c2..e977ba9 100644
@@ -529,7 +529,7 @@ struct phy_device *get_phy_device(struct mii_bus *bus, int addr, bool is_c45)
 
        /* If the phy_id is mostly Fs, there is no device there */
        if ((phy_id & 0x1fffffff) == 0x1fffffff)
-               return NULL;
+               return ERR_PTR(-ENODEV);
 
        return phy_device_create(bus, addr, phy_id, is_c45, &c45_ids);
 }
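
Taken together, the last three hunks change the contract of get_phy_device() and mdiobus_scan() from "NULL or ERR_PTR on failure" to ERR_PTR only, with -ENODEV reserved for the benign "no PHY at this address" case. A hedged caller sketch (helper name hypothetical):

	static int probe_one_addr(struct mii_bus *bus, int addr)
	{
		struct phy_device *phydev = get_phy_device(bus, addr, false);

		if (IS_ERR(phydev)) {
			/* -ENODEV: nothing at this address, keep scanning */
			if (PTR_ERR(phydev) == -ENODEV)
				return 0;
			return PTR_ERR(phydev);
		}

		return phy_device_register(phydev);
	}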
index f572b31..8dedafa 100644
@@ -46,6 +46,7 @@
 #include <linux/device.h>
 #include <linux/mutex.h>
 #include <linux/slab.h>
+#include <linux/file.h>
 #include <asm/unaligned.h>
 #include <net/slhc_vj.h>
 #include <linux/atomic.h>
@@ -183,6 +184,12 @@ struct channel {
 #endif /* CONFIG_PPP_MULTILINK */
 };
 
+struct ppp_config {
+       struct file *file;
+       s32 unit;
+       bool ifname_is_set;
+};
+
 /*
  * SMP locking issues:
  * Both the ppp.rlock and ppp.wlock locks protect the ppp.channels
@@ -269,8 +276,7 @@ static void ppp_ccp_peek(struct ppp *ppp, struct sk_buff *skb, int inbound);
 static void ppp_ccp_closed(struct ppp *ppp);
 static struct compressor *find_compressor(int type);
 static void ppp_get_stats(struct ppp *ppp, struct ppp_stats *st);
-static struct ppp *ppp_create_interface(struct net *net, int unit,
-                                       struct file *file, int *retp);
+static int ppp_create_interface(struct net *net, struct file *file, int *unit);
 static void init_ppp_file(struct ppp_file *pf, int kind);
 static void ppp_destroy_interface(struct ppp *ppp);
 static struct ppp *ppp_find_unit(struct ppp_net *pn, int unit);
@@ -282,6 +288,7 @@ static int unit_get(struct idr *p, void *ptr);
 static int unit_set(struct idr *p, void *ptr, int n);
 static void unit_put(struct idr *p, int n);
 static void *unit_find(struct idr *p, int n);
+static void ppp_setup(struct net_device *dev);
 
 static const struct net_device_ops ppp_netdev_ops;
 
@@ -853,12 +860,12 @@ static int ppp_unattached_ioctl(struct net *net, struct ppp_file *pf,
                /* Create a new ppp unit */
                if (get_user(unit, p))
                        break;
-               ppp = ppp_create_interface(net, unit, file, &err);
-               if (!ppp)
+               err = ppp_create_interface(net, file, &unit);
+               if (err < 0)
                        break;
-               file->private_data = &ppp->file;
+
                err = -EFAULT;
-               if (put_user(ppp->file.index, p))
+               if (put_user(unit, p))
                        break;
                err = 0;
                break;
@@ -960,6 +967,188 @@ static struct pernet_operations ppp_net_ops = {
        .size = sizeof(struct ppp_net),
 };
 
+static int ppp_unit_register(struct ppp *ppp, int unit, bool ifname_is_set)
+{
+       struct ppp_net *pn = ppp_pernet(ppp->ppp_net);
+       int ret;
+
+       mutex_lock(&pn->all_ppp_mutex);
+
+       if (unit < 0) {
+               ret = unit_get(&pn->units_idr, ppp);
+               if (ret < 0)
+                       goto err;
+       } else {
+               /* Caller asked for a specific unit number. Fail with -EEXIST
+                * if unavailable. For backward compatibility, return -EEXIST
+                * too if idr allocation fails; this makes pppd retry without
+                * requesting a specific unit number.
+                */
+               if (unit_find(&pn->units_idr, unit)) {
+                       ret = -EEXIST;
+                       goto err;
+               }
+               ret = unit_set(&pn->units_idr, ppp, unit);
+               if (ret < 0) {
+                       /* Rewrite error for backward compatibility */
+                       ret = -EEXIST;
+                       goto err;
+               }
+       }
+       ppp->file.index = ret;
+
+       if (!ifname_is_set)
+               snprintf(ppp->dev->name, IFNAMSIZ, "ppp%i", ppp->file.index);
+
+       ret = register_netdevice(ppp->dev);
+       if (ret < 0)
+               goto err_unit;
+
+       atomic_inc(&ppp_unit_count);
+
+       mutex_unlock(&pn->all_ppp_mutex);
+
+       return 0;
+
+err_unit:
+       unit_put(&pn->units_idr, ppp->file.index);
+err:
+       mutex_unlock(&pn->all_ppp_mutex);
+
+       return ret;
+}
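
unit_get() and unit_set() are thin wrappers around the generic idr allocator, so the two branches above map onto it roughly as follows (a sketch, assuming the wrappers add nothing beyond error rewriting):

	/* unit < 0: allocate any free unit number */
	ret = idr_alloc(&pn->units_idr, ppp, 0, 0, GFP_KERNEL);

	/* unit >= 0: claim exactly 'unit', or fail if it is taken */
	ret = idr_alloc(&pn->units_idr, ppp, unit, unit + 1, GFP_KERNEL);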
+
+static int ppp_dev_configure(struct net *src_net, struct net_device *dev,
+                            const struct ppp_config *conf)
+{
+       struct ppp *ppp = netdev_priv(dev);
+       int indx;
+       int err;
+
+       ppp->dev = dev;
+       ppp->ppp_net = src_net;
+       ppp->mru = PPP_MRU;
+       ppp->owner = conf->file;
+
+       init_ppp_file(&ppp->file, INTERFACE);
+       ppp->file.hdrlen = PPP_HDRLEN - 2; /* don't count proto bytes */
+
+       for (indx = 0; indx < NUM_NP; ++indx)
+               ppp->npmode[indx] = NPMODE_PASS;
+       INIT_LIST_HEAD(&ppp->channels);
+       spin_lock_init(&ppp->rlock);
+       spin_lock_init(&ppp->wlock);
+#ifdef CONFIG_PPP_MULTILINK
+       ppp->minseq = -1;
+       skb_queue_head_init(&ppp->mrq);
+#endif /* CONFIG_PPP_MULTILINK */
+#ifdef CONFIG_PPP_FILTER
+       ppp->pass_filter = NULL;
+       ppp->active_filter = NULL;
+#endif /* CONFIG_PPP_FILTER */
+
+       err = ppp_unit_register(ppp, conf->unit, conf->ifname_is_set);
+       if (err < 0)
+               return err;
+
+       conf->file->private_data = &ppp->file;
+
+       return 0;
+}
+
+static const struct nla_policy ppp_nl_policy[IFLA_PPP_MAX + 1] = {
+       [IFLA_PPP_DEV_FD]       = { .type = NLA_S32 },
+};
+
+static int ppp_nl_validate(struct nlattr *tb[], struct nlattr *data[])
+{
+       if (!data)
+               return -EINVAL;
+
+       if (!data[IFLA_PPP_DEV_FD])
+               return -EINVAL;
+       if (nla_get_s32(data[IFLA_PPP_DEV_FD]) < 0)
+               return -EBADF;
+
+       return 0;
+}
+
+static int ppp_nl_newlink(struct net *src_net, struct net_device *dev,
+                         struct nlattr *tb[], struct nlattr *data[])
+{
+       struct ppp_config conf = {
+               .unit = -1,
+               .ifname_is_set = true,
+       };
+       struct file *file;
+       int err;
+
+       file = fget(nla_get_s32(data[IFLA_PPP_DEV_FD]));
+       if (!file)
+               return -EBADF;
+
+       /* rtnl_lock is already held here, but ppp_create_interface() locks
+        * ppp_mutex before holding rtnl_lock. Using mutex_trylock() avoids
+        * possible deadlock due to lock order inversion, at the cost of
+        * pushing the problem back to userspace.
+        */
+       if (!mutex_trylock(&ppp_mutex)) {
+               err = -EBUSY;
+               goto out;
+       }
+
+       if (file->f_op != &ppp_device_fops || file->private_data) {
+               err = -EBADF;
+               goto out_unlock;
+       }
+
+       conf.file = file;
+       err = ppp_dev_configure(src_net, dev, &conf);
+
+out_unlock:
+       mutex_unlock(&ppp_mutex);
+out:
+       fput(file);
+
+       return err;
+}
+
+static void ppp_nl_dellink(struct net_device *dev, struct list_head *head)
+{
+       unregister_netdevice_queue(dev, head);
+}
+
+static size_t ppp_nl_get_size(const struct net_device *dev)
+{
+       return 0;
+}
+
+static int ppp_nl_fill_info(struct sk_buff *skb, const struct net_device *dev)
+{
+       return 0;
+}
+
+static struct net *ppp_nl_get_link_net(const struct net_device *dev)
+{
+       struct ppp *ppp = netdev_priv(dev);
+
+       return ppp->ppp_net;
+}
+
+static struct rtnl_link_ops ppp_link_ops __read_mostly = {
+       .kind           = "ppp",
+       .maxtype        = IFLA_PPP_MAX,
+       .policy         = ppp_nl_policy,
+       .priv_size      = sizeof(struct ppp),
+       .setup          = ppp_setup,
+       .validate       = ppp_nl_validate,
+       .newlink        = ppp_nl_newlink,
+       .dellink        = ppp_nl_dellink,
+       .get_size       = ppp_nl_get_size,
+       .fill_info      = ppp_nl_fill_info,
+       .get_link_net   = ppp_nl_get_link_net,
+};
+
 #define PPP_MAJOR      108
 
 /* Called at boot time if ppp is compiled into the kernel,
@@ -988,11 +1177,19 @@ static int __init ppp_init(void)
                goto out_chrdev;
        }
 
+       err = rtnl_link_register(&ppp_link_ops);
+       if (err) {
+               pr_err("failed to register rtnetlink PPP handler\n");
+               goto out_class;
+       }
+
        /* not a big deal if we fail here :-) */
        device_create(ppp_class, NULL, MKDEV(PPP_MAJOR, 0), NULL, "ppp");
 
        return 0;
 
+out_class:
+       class_destroy(ppp_class);
 out_chrdev:
        unregister_chrdev(PPP_MAJOR, "ppp");
 out_net:
@@ -2732,102 +2929,42 @@ ppp_get_stats(struct ppp *ppp, struct ppp_stats *st)
  * or if there is already a unit with the requested number.
  * unit == -1 means allocate a new number.
  */
-static struct ppp *ppp_create_interface(struct net *net, int unit,
-                                       struct file *file, int *retp)
+static int ppp_create_interface(struct net *net, struct file *file, int *unit)
 {
+       struct ppp_config conf = {
+               .file = file,
+               .unit = *unit,
+               .ifname_is_set = false,
+       };
+       struct net_device *dev;
        struct ppp *ppp;
-       struct ppp_net *pn;
-       struct net_device *dev = NULL;
-       int ret = -ENOMEM;
-       int i;
+       int err;
 
        dev = alloc_netdev(sizeof(struct ppp), "", NET_NAME_ENUM, ppp_setup);
-       if (!dev)
-               goto out1;
-
-       pn = ppp_pernet(net);
-
-       ppp = netdev_priv(dev);
-       ppp->dev = dev;
-       ppp->mru = PPP_MRU;
-       init_ppp_file(&ppp->file, INTERFACE);
-       ppp->file.hdrlen = PPP_HDRLEN - 2;      /* don't count proto bytes */
-       ppp->owner = file;
-       for (i = 0; i < NUM_NP; ++i)
-               ppp->npmode[i] = NPMODE_PASS;
-       INIT_LIST_HEAD(&ppp->channels);
-       spin_lock_init(&ppp->rlock);
-       spin_lock_init(&ppp->wlock);
-#ifdef CONFIG_PPP_MULTILINK
-       ppp->minseq = -1;
-       skb_queue_head_init(&ppp->mrq);
-#endif /* CONFIG_PPP_MULTILINK */
-#ifdef CONFIG_PPP_FILTER
-       ppp->pass_filter = NULL;
-       ppp->active_filter = NULL;
-#endif /* CONFIG_PPP_FILTER */
-
-       /*
-        * drum roll: don't forget to set
-        * the net device is belong to
-        */
+       if (!dev) {
+               err = -ENOMEM;
+               goto err;
+       }
        dev_net_set(dev, net);
+       dev->rtnl_link_ops = &ppp_link_ops;
 
        rtnl_lock();
-       mutex_lock(&pn->all_ppp_mutex);
 
-       if (unit < 0) {
-               unit = unit_get(&pn->units_idr, ppp);
-               if (unit < 0) {
-                       ret = unit;
-                       goto out2;
-               }
-       } else {
-               ret = -EEXIST;
-               if (unit_find(&pn->units_idr, unit))
-                       goto out2; /* unit already exists */
-               /*
-                * if caller need a specified unit number
-                * lets try to satisfy him, otherwise --
-                * he should better ask us for new unit number
-                *
-                * NOTE: yes I know that returning EEXIST it's not
-                * fair but at least pppd will ask us to allocate
-                * new unit in this case so user is happy :)
-                */
-               unit = unit_set(&pn->units_idr, ppp, unit);
-               if (unit < 0)
-                       goto out2;
-       }
-
-       /* Initialize the new ppp unit */
-       ppp->file.index = unit;
-       sprintf(dev->name, "ppp%d", unit);
-
-       ret = register_netdevice(dev);
-       if (ret != 0) {
-               unit_put(&pn->units_idr, unit);
-               netdev_err(ppp->dev, "PPP: couldn't register device %s (%d)\n",
-                          dev->name, ret);
-               goto out2;
-       }
-
-       ppp->ppp_net = net;
+       err = ppp_dev_configure(net, dev, &conf);
+       if (err < 0)
+               goto err_dev;
+       ppp = netdev_priv(dev);
+       *unit = ppp->file.index;
 
-       atomic_inc(&ppp_unit_count);
-       mutex_unlock(&pn->all_ppp_mutex);
        rtnl_unlock();
 
-       *retp = 0;
-       return ppp;
+       return 0;
 
-out2:
-       mutex_unlock(&pn->all_ppp_mutex);
+err_dev:
        rtnl_unlock();
        free_netdev(dev);
-out1:
-       *retp = ret;
-       return NULL;
+err:
+       return err;
 }
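
ppp_create_interface() still backs the legacy character-device path, which from userspace looks roughly like the sketch below (error handling trimmed; PPPIOCNEWUNIT comes from <linux/ppp-ioctl.h>):

	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/ppp-ioctl.h>

	int ppp_new_unit(int *unit)
	{
		int fd = open("/dev/ppp", O_RDWR);

		if (fd < 0)
			return -1;

		*unit = -1;	/* -1 lets the kernel pick the unit number */
		if (ioctl(fd, PPPIOCNEWUNIT, unit) < 0) {
			close(fd);
			return -1;
		}
		return fd;	/* *unit now holds N of the new pppN device */
	}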
 
 /*
@@ -3016,6 +3153,7 @@ static void __exit ppp_cleanup(void)
        /* should never happen */
        if (atomic_read(&ppp_unit_count) || atomic_read(&channel_count))
                pr_err("PPP: removing module but units remain!\n");
+       rtnl_link_unregister(&ppp_link_ops);
        unregister_chrdev(PPP_MAJOR, "ppp");
        device_destroy(ppp_class, MKDEV(PPP_MAJOR, 0));
        class_destroy(ppp_class);
@@ -3074,4 +3212,5 @@ EXPORT_SYMBOL(ppp_register_compressor);
 EXPORT_SYMBOL(ppp_unregister_compressor);
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_CHARDEV(PPP_MAJOR, 0);
+MODULE_ALIAS_RTNL_LINK("ppp");
 MODULE_ALIAS("devname:ppp");
index 9cfe6ae..a31f461 100644
@@ -179,11 +179,7 @@ static int rionet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
        unsigned long flags;
        int add_num = 1;
 
-       local_irq_save(flags);
-       if (!spin_trylock(&rnet->tx_lock)) {
-               local_irq_restore(flags);
-               return NETDEV_TX_LOCKED;
-       }
+       spin_lock_irqsave(&rnet->tx_lock, flags);
 
        if (is_multicast_ether_addr(eth->h_dest))
                add_num = nets[rnet->mport->id].nact;
index a17d86a..9ed6d1c 100644
@@ -407,7 +407,7 @@ static void sl_encaps(struct slip *sl, unsigned char *icp, int len)
        set_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
        actual = sl->tty->ops->write(sl->tty, sl->xbuff, count);
 #ifdef SL_CHECK_TRANSMIT
-       sl->dev->trans_start = jiffies;
+       netif_trans_update(sl->dev);
 #endif
        sl->xleft = count - actual;
        sl->xhead = sl->xbuff + actual;
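
This and the following drivers all convert open-coded dev->trans_start updates to the same helper; at this point in the tree it is essentially the following (approximate sketch of the include/linux/netdevice.h definition):

	static inline void netif_trans_update(struct net_device *dev)
	{
		struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

		/* avoid dirtying the cache line when jiffies hasn't moved */
		if (txq->trans_start != jiffies)
			txq->trans_start = jiffies;
	}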
index 42992dc..425e983 100644
@@ -833,7 +833,8 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
        if (txq >= numqueues)
                goto drop;
 
-       if (numqueues == 1) {
+#ifdef CONFIG_RPS
+       if (numqueues == 1 && static_key_false(&rps_needed)) {
                /* Select queue was not called for the skbuff, so we extract the
                 * RPS hash and save it into the flow_table here.
                 */
@@ -848,6 +849,7 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
                                tun_flow_save_rps_rxhash(e, rxhash);
                }
        }
+#endif
 
        tun_debug(KERN_INFO, tun, "tun_net_xmit %d\n", skb->len);
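
static_key_false() compiles to a branch that stays a no-op until RPS is actually enabled, so the flow-table bookkeeping above now costs nothing on the common path. The general pattern, as a hedged sketch (names hypothetical):

	static struct static_key my_key = STATIC_KEY_INIT_FALSE;

	/* fast path: the branch is patched to a nop while the key is off */
	if (static_key_false(&my_key))
		do_rare_bookkeeping();

	/* slow path, e.g. a sysctl handler flipping the feature */
	static_key_slow_inc(&my_key);	/* enable */
	static_key_slow_dec(&my_key);	/* disable */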
 
index 4e2b26a..d9ca05d 100644
@@ -376,7 +376,7 @@ static int catc_tx_run(struct catc *catc)
        catc->tx_idx = !catc->tx_idx;
        catc->tx_ptr = 0;
 
-       catc->netdev->trans_start = jiffies;
+       netif_trans_update(catc->netdev);
        return status;
 }
 
@@ -389,7 +389,7 @@ static void catc_tx_done(struct urb *urb)
        if (status == -ECONNRESET) {
                dev_dbg(&urb->dev->dev, "Tx Reset.\n");
                urb->status = 0;
-               catc->netdev->trans_start = jiffies;
+               netif_trans_update(catc->netdev);
                catc->netdev->stats.tx_errors++;
                clear_bit(TX_RUNNING, &catc->flags);
                netif_wake_queue(catc->netdev);
index f64b25c..770212b 100644
@@ -938,7 +938,7 @@ static void kaweth_tx_timeout(struct net_device *net)
 
        dev_warn(&net->dev, "%s: Tx timed out. Resetting.\n", net->name);
        kaweth->stats.tx_errors++;
-       net->trans_start = jiffies;
+       netif_trans_update(net);
 
        usb_unlink_urb(kaweth->tx_urb);
 }
index f20890e..6a9d474 100644
@@ -269,6 +269,7 @@ struct skb_data {           /* skb->cb is one of these */
        struct lan78xx_net *dev;
        enum skb_state state;
        size_t length;
+       int num_of_packet;
 };
 
 struct usb_context {
@@ -1803,7 +1804,34 @@ static void lan78xx_remove_mdio(struct lan78xx_net *dev)
 
 static void lan78xx_link_status_change(struct net_device *net)
 {
-       /* nothing to do */
+       struct phy_device *phydev = net->phydev;
+       int ret, temp;
+
+       /* In forced 100 F/H mode, the chip may fail to set the mode
+        * correctly when the cable is switched between a long (~50+ m)
+        * and a short one. As a workaround, set the speed to 10 before
+        * setting it back to 100.
+        */
+       if (!phydev->autoneg && (phydev->speed == 100)) {
+               /* disable phy interrupt */
+               temp = phy_read(phydev, LAN88XX_INT_MASK);
+               temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_;
+               ret = phy_write(phydev, LAN88XX_INT_MASK, temp);
+
+               temp = phy_read(phydev, MII_BMCR);
+               temp &= ~(BMCR_SPEED100 | BMCR_SPEED1000);
+               phy_write(phydev, MII_BMCR, temp); /* set to 10 first */
+               temp |= BMCR_SPEED100;
+               phy_write(phydev, MII_BMCR, temp); /* set to 100 later */
+
+               /* clear the pending interrupt generated during the workaround */
+               temp = phy_read(phydev, LAN88XX_INT_STS);
+
+               /* re-enable the phy interrupt */
+               temp = phy_read(phydev, LAN88XX_INT_MASK);
+               temp |= LAN88XX_INT_MASK_MDINTPIN_EN_;
+               ret = phy_write(phydev, LAN88XX_INT_MASK, temp);
+       }
 }
 
 static int lan78xx_phy_init(struct lan78xx_net *dev)
@@ -2464,7 +2492,7 @@ static void tx_complete(struct urb *urb)
        struct lan78xx_net *dev = entry->dev;
 
        if (urb->status == 0) {
-               dev->net->stats.tx_packets++;
+               dev->net->stats.tx_packets += entry->num_of_packet;
                dev->net->stats.tx_bytes += entry->length;
        } else {
                dev->net->stats.tx_errors++;
@@ -2681,10 +2709,11 @@ void lan78xx_skb_return(struct lan78xx_net *dev, struct sk_buff *skb)
                return;
        }
 
-       skb->protocol = eth_type_trans(skb, dev->net);
        dev->net->stats.rx_packets++;
        dev->net->stats.rx_bytes += skb->len;
 
+       skb->protocol = eth_type_trans(skb, dev->net);
+
        netif_dbg(dev, rx_status, dev->net, "< rx, len %zu, type 0x%x\n",
                  skb->len + sizeof(struct ethhdr), skb->protocol);
        memset(skb->cb, 0, sizeof(struct skb_data));
@@ -2934,13 +2963,16 @@ static void lan78xx_tx_bh(struct lan78xx_net *dev)
 
        skb_totallen = 0;
        pkt_cnt = 0;
+       count = 0;
+       length = 0;
        for (skb = tqp->next; pkt_cnt < tqp->qlen; skb = skb->next) {
                if (skb_is_gso(skb)) {
                        if (pkt_cnt) {
                                /* handle previous packets first */
                                break;
                        }
-                       length = skb->len;
+                       count = 1;
+                       length = skb->len - TX_OVERHEAD;
                        skb2 = skb_dequeue(tqp);
                        goto gso_skb;
                }
@@ -2961,14 +2993,13 @@ static void lan78xx_tx_bh(struct lan78xx_net *dev)
        for (count = pos = 0; count < pkt_cnt; count++) {
                skb2 = skb_dequeue(tqp);
                if (skb2) {
+                       length += (skb2->len - TX_OVERHEAD);
                        memcpy(skb->data + pos, skb2->data, skb2->len);
                        pos += roundup(skb2->len, sizeof(u32));
                        dev_kfree_skb(skb2);
                }
        }
 
-       length = skb_totallen;
-
 gso_skb:
        urb = usb_alloc_urb(0, GFP_ATOMIC);
        if (!urb) {
@@ -2980,6 +3011,7 @@ gso_skb:
        entry->urb = urb;
        entry->dev = dev;
        entry->length = length;
+       entry->num_of_packet = count;
 
        spin_lock_irqsave(&dev->txq.lock, flags);
        ret = usb_autopm_get_interface_async(dev->intf);
@@ -3013,7 +3045,7 @@ gso_skb:
        ret = usb_submit_urb(urb, GFP_ATOMIC);
        switch (ret) {
        case 0:
-               dev->net->trans_start = jiffies;
+               netif_trans_update(dev->net);
                lan78xx_queue_skb(&dev->txq, skb, tx_start);
                if (skb_queue_len(&dev->txq) >= dev->tx_qlen)
                        netif_stop_queue(dev->net);
@@ -3697,7 +3729,7 @@ int lan78xx_resume(struct usb_interface *intf)
                                usb_free_urb(res);
                                usb_autopm_put_interface_async(dev->intf);
                        } else {
-                               dev->net->trans_start = jiffies;
+                               netif_trans_update(dev->net);
                                lan78xx_queue_skb(&dev->txq, skb, tx_start);
                        }
                }
index f840802..36cd7f0 100644
@@ -411,7 +411,7 @@ static int enable_net_traffic(struct net_device *dev, struct usb_device *usb)
        int ret;
 
        read_mii_word(pegasus, pegasus->phy, MII_LPA, &linkpart);
-       data[0] = 0xc9;
+       data[0] = 0xc8; /* TX & RX enable, append status, no CRC */
        data[1] = 0;
        if (linkpart & (ADVERTISE_100FULL | ADVERTISE_10FULL))
                data[1] |= 0x20;        /* set full duplex */
@@ -497,7 +497,7 @@ static void read_bulk_callback(struct urb *urb)
                pkt_len = buf[count - 3] << 8;
                pkt_len += buf[count - 4];
                pkt_len &= 0xfff;
-               pkt_len -= 8;
+               pkt_len -= 4;
        }
 
        /*
@@ -528,7 +528,7 @@ static void read_bulk_callback(struct urb *urb)
 goon:
        usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb,
                          usb_rcvbulkpipe(pegasus->usb, 1),
-                         pegasus->rx_skb->data, PEGASUS_MTU + 8,
+                         pegasus->rx_skb->data, PEGASUS_MTU,
                          read_bulk_callback, pegasus);
        rx_status = usb_submit_urb(pegasus->rx_urb, GFP_ATOMIC);
        if (rx_status == -ENODEV)
@@ -569,7 +569,7 @@ static void rx_fixup(unsigned long data)
        }
        usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb,
                          usb_rcvbulkpipe(pegasus->usb, 1),
-                         pegasus->rx_skb->data, PEGASUS_MTU + 8,
+                         pegasus->rx_skb->data, PEGASUS_MTU,
                          read_bulk_callback, pegasus);
 try_again:
        status = usb_submit_urb(pegasus->rx_urb, GFP_ATOMIC);
@@ -615,7 +615,7 @@ static void write_bulk_callback(struct urb *urb)
                break;
        }
 
-       net->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(net); /* prevent tx timeout */
        netif_wake_queue(net);
 }
 
@@ -823,7 +823,7 @@ static int pegasus_open(struct net_device *net)
 
        usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb,
                          usb_rcvbulkpipe(pegasus->usb, 1),
-                         pegasus->rx_skb->data, PEGASUS_MTU + 8,
+                         pegasus->rx_skb->data, PEGASUS_MTU,
                          read_bulk_callback, pegasus);
        if ((res = usb_submit_urb(pegasus->rx_urb, GFP_KERNEL))) {
                if (res == -ENODEV)
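
These pegasus hunks hang together: clearing the append-CRC bit (0xc9 becomes 0xc8) means received frames carry only the 4-byte status trailer instead of status plus CRC, which is why the length fixup shrinks from 8 to 4 and the receive buffers lose their 8-byte slack. In comment form (bit meaning inferred from the comment in the first hunk):

	/* CRC appended:    trailer = 4 (status) + 4 (CRC) => pkt_len -= 8
	 * CRC off (0xc8):  trailer = 4 (status)           => pkt_len -= 4
	 * and PEGASUS_MTU alone now sizes the rx URB buffer
	 */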
index d1f78c2..3f9f6ed 100644
@@ -3366,7 +3366,7 @@ static void r8153_init(struct r8152 *tp)
        ocp_write_word(tp, MCU_TYPE_PLA, PLA_LED_FEATURE, ocp_data);
 
        ocp_data = FIFO_EMPTY_1FB | ROK_EXIT_LPM;
-       if (tp->version == RTL_VER_04 && tp->udev->speed != USB_SPEED_SUPER)
+       if (tp->version == RTL_VER_04 && tp->udev->speed < USB_SPEED_SUPER)
                ocp_data |= LPM_TIMER_500MS;
        else
                ocp_data |= LPM_TIMER_500US;
@@ -4211,6 +4211,7 @@ static int rtl8152_probe(struct usb_interface *intf,
 
        switch (udev->speed) {
        case USB_SPEED_SUPER:
+       case USB_SPEED_SUPER_PLUS:
                tp->coalesce = COALESCE_SUPER;
                break;
        case USB_SPEED_HIGH:
index d37b7dc..7c72bfa 100644
@@ -451,7 +451,7 @@ static void write_bulk_callback(struct urb *urb)
        if (status)
                dev_info(&urb->dev->dev, "%s: Tx status %d\n",
                         dev->netdev->name, status);
-       dev->netdev->trans_start = jiffies;
+       netif_trans_update(dev->netdev);
        netif_wake_queue(dev->netdev);
 }
 
@@ -694,7 +694,7 @@ static netdev_tx_t rtl8150_start_xmit(struct sk_buff *skb,
        } else {
                netdev->stats.tx_packets++;
                netdev->stats.tx_bytes += skb->len;
-               netdev->trans_start = jiffies;
+               netif_trans_update(netdev);
        }
 
        return NETDEV_TX_OK;
index 30033db..9af9799 100644
@@ -29,6 +29,7 @@
 #include <linux/crc32.h>
 #include <linux/usb/usbnet.h>
 #include <linux/slab.h>
+#include <linux/of_net.h>
 #include "smsc75xx.h"
 
 #define SMSC_CHIPNAME                  "smsc75xx"
@@ -98,9 +99,11 @@ static int __must_check __smsc75xx_read_reg(struct usbnet *dev, u32 index,
        ret = fn(dev, USB_VENDOR_REQUEST_READ_REGISTER, USB_DIR_IN
                 | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
                 0, index, &buf, 4);
-       if (unlikely(ret < 0))
+       if (unlikely(ret < 0)) {
                netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n",
                            index, ret);
+               return ret;
+       }
 
        le32_to_cpus(&buf);
        *data = buf;
@@ -761,6 +764,15 @@ static int smsc75xx_ioctl(struct net_device *netdev, struct ifreq *rq, int cmd)
 
 static void smsc75xx_init_mac_address(struct usbnet *dev)
 {
+       const u8 *mac_addr;
+
+       /* maybe the boot loader passed the MAC address in devicetree */
+       mac_addr = of_get_mac_address(dev->udev->dev.of_node);
+       if (mac_addr) {
+               memcpy(dev->net->dev_addr, mac_addr, ETH_ALEN);
+               return;
+       }
+
        /* try reading mac address from EEPROM */
        if (smsc75xx_read_eeprom(dev, EEPROM_MAC_OFFSET, ETH_ALEN,
                        dev->net->dev_addr) == 0) {
@@ -772,7 +784,7 @@ static void smsc75xx_init_mac_address(struct usbnet *dev)
                }
        }
 
-       /* no eeprom, or eeprom values are invalid. generate random MAC */
+       /* no useful static MAC address found. generate a random one */
        eth_hw_addr_random(dev->net);
        netif_dbg(dev, ifup, dev->net, "MAC address set to eth_random_addr\n");
 }
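
The resulting lookup order is devicetree, then EEPROM, then a random locally-administered address. of_get_mac_address() itself checks the usual properties in order ("mac-address", "local-mac-address", "address") and returns NULL when none holds a valid address, which is what lets the fallbacks below it run. Compressed, with a hypothetical EEPROM helper:

	static void init_mac_address(struct usbnet *dev)
	{
		const u8 *mac = of_get_mac_address(dev->udev->dev.of_node);

		if (mac) {				/* 1. devicetree */
			memcpy(dev->net->dev_addr, mac, ETH_ALEN);
			return;
		}
		if (read_eeprom_mac(dev) == 0)		/* 2. EEPROM */
			return;
		eth_hw_addr_random(dev->net);		/* 3. random */
	}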
index 66b3ab9..d9d2806 100644
@@ -29,6 +29,7 @@
 #include <linux/crc32.h>
 #include <linux/usb/usbnet.h>
 #include <linux/slab.h>
+#include <linux/of_net.h>
 #include "smsc95xx.h"
 
 #define SMSC_CHIPNAME                  "smsc95xx"
@@ -91,9 +92,11 @@ static int __must_check __smsc95xx_read_reg(struct usbnet *dev, u32 index,
        ret = fn(dev, USB_VENDOR_REQUEST_READ_REGISTER, USB_DIR_IN
                 | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
                 0, index, &buf, 4);
-       if (unlikely(ret < 0))
+       if (unlikely(ret < 0)) {
                netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n",
                            index, ret);
+               return ret;
+       }
 
        le32_to_cpus(&buf);
        *data = buf;
@@ -765,6 +768,15 @@ static int smsc95xx_ioctl(struct net_device *netdev, struct ifreq *rq, int cmd)
 
 static void smsc95xx_init_mac_address(struct usbnet *dev)
 {
+       const u8 *mac_addr;
+
+       /* maybe the boot loader passed the MAC address in devicetree */
+       mac_addr = of_get_mac_address(dev->udev->dev.of_node);
+       if (mac_addr) {
+               memcpy(dev->net->dev_addr, mac_addr, ETH_ALEN);
+               return;
+       }
+
        /* try reading mac address from EEPROM */
        if (smsc95xx_read_eeprom(dev, EEPROM_MAC_OFFSET, ETH_ALEN,
                        dev->net->dev_addr) == 0) {
@@ -775,7 +787,7 @@ static void smsc95xx_init_mac_address(struct usbnet *dev)
                }
        }
 
-       /* no eeprom, or eeprom values are invalid. generate random MAC */
+       /* no useful static MAC address found. generate a random one */
        eth_hw_addr_random(dev->net);
        netif_dbg(dev, ifup, dev->net, "MAC address set to eth_random_addr\n");
 }
index 1079812..61ba464 100644
@@ -356,6 +356,7 @@ void usbnet_update_max_qlen(struct usbnet *dev)
                dev->tx_qlen = MAX_QUEUE_MEMORY / dev->hard_mtu;
                break;
        case USB_SPEED_SUPER:
+       case USB_SPEED_SUPER_PLUS:
                /*
                 * Not take default 5ms qlen for super speed HC to
                 * save memory, and iperf tests show 2.5ms qlen can
@@ -1415,7 +1416,7 @@ netdev_tx_t usbnet_start_xmit (struct sk_buff *skb,
                          "tx: submit urb err %d\n", retval);
                break;
        case 0:
-               net->trans_start = jiffies;
+               netif_trans_update(net);
                __usbnet_queue_skb(&dev->txq, skb, tx_start);
                if (dev->txq.qlen >= TX_QLEN (dev))
                        netif_stop_queue (net);
@@ -1844,7 +1845,7 @@ int usbnet_resume (struct usb_interface *intf)
                                usb_free_urb(res);
                                usb_autopm_put_interface_async(dev->intf);
                        } else {
-                               dev->net->trans_start = jiffies;
+                               netif_trans_update(dev->net);
                                __skb_queue_tail(&dev->txq, skb);
                        }
                }
index 8a8f1e5..4b2461a 100644
@@ -364,17 +364,23 @@ static int vrf_rt6_create(struct net_device *dev)
 {
        struct net_vrf *vrf = netdev_priv(dev);
        struct net *net = dev_net(dev);
+       struct fib6_table *rt6i_table;
        struct rt6_info *rt6;
        int rc = -ENOMEM;
 
+       rt6i_table = fib6_new_table(net, vrf->tb_id);
+       if (!rt6i_table)
+               goto out;
+
        rt6 = ip6_dst_alloc(net, dev,
                            DST_HOST | DST_NOPOLICY | DST_NOXFRM | DST_NOCACHE);
        if (!rt6)
                goto out;
 
-       rt6->dst.output = vrf_output6;
-       rt6->rt6i_table = fib6_get_table(net, vrf->tb_id);
        dst_hold(&rt6->dst);
+
+       rt6->rt6i_table = rt6i_table;
+       rt6->dst.output = vrf_output6;
        vrf->rt6 = rt6;
        rc = 0;
 out:
@@ -462,6 +468,9 @@ static struct rtable *vrf_rtable_create(struct net_device *dev)
        struct net_vrf *vrf = netdev_priv(dev);
        struct rtable *rth;
 
+       if (!fib_new_table(dev_net(dev), vrf->tb_id))
+               return NULL;
+
        rth = rt_dst_alloc(dev, 0, RTN_UNICAST, 1, 1, 0);
        if (rth) {
                rth->dst.output = vrf_output;
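
Both vrf hunks make the same fix: create the FIB table up front instead of assuming it already exists. The distinction that matters in the IPv6 case (same shape on the IPv4 side with fib_new_table()):

	/* lookup only: NULL unless someone already created tb_id */
	tbl = fib6_get_table(net, tb_id);

	/* lookup-or-create: allocates the table on first use */
	tbl = fib6_new_table(net, tb_id);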
index 6fb93b5..2668e52 100644
@@ -2557,6 +2557,9 @@ static void vxlan_setup(struct net_device *dev)
        struct vxlan_dev *vxlan = netdev_priv(dev);
        unsigned int h;
 
+       eth_hw_addr_random(dev);
+       ether_setup(dev);
+
        dev->destructor = free_netdev;
        SET_NETDEV_DEVTYPE(dev, &vxlan_type);
 
@@ -2592,8 +2595,6 @@ static void vxlan_setup(struct net_device *dev)
 
 static void vxlan_ether_setup(struct net_device *dev)
 {
-       eth_hw_addr_random(dev);
-       ether_setup(dev);
        dev->priv_flags &= ~IFF_TX_SKB_SHARING;
        dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
        dev->netdev_ops = &vxlan_netdev_ether_ops;
@@ -2601,11 +2602,10 @@ static void vxlan_ether_setup(struct net_device *dev)
 
 static void vxlan_raw_setup(struct net_device *dev)
 {
+       dev->header_ops = NULL;
        dev->type = ARPHRD_NONE;
        dev->hard_header_len = 0;
        dev->addr_len = 0;
-       dev->mtu = ETH_DATA_LEN;
-       dev->tx_queue_len = 1000;
        dev->flags = IFF_POINTOPOINT | IFF_NOARP | IFF_MULTICAST;
        dev->netdev_ops = &vxlan_netdev_raw_ops;
 }
index 848ea6a..b87fe0a 100644
@@ -739,7 +739,7 @@ static char *cosa_net_setup_rx(struct channel_data *chan, int size)
                chan->netdev->stats.rx_dropped++;
                return NULL;
        }
-       chan->netdev->trans_start = jiffies;
+       netif_trans_update(chan->netdev);
        return skb_put(chan->rx_skb, size);
 }
 
index 69b994f..3c9cbf9 100644
@@ -831,7 +831,7 @@ fst_tx_dma_complete(struct fst_card_info *card, struct fst_port_info *port,
                DMA_OWN | TX_STP | TX_ENP);
        dev->stats.tx_packets++;
        dev->stats.tx_bytes += len;
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 }
 
 /*
@@ -1389,7 +1389,7 @@ do_bottom_half_tx(struct fst_card_info *card)
                                                DMA_OWN | TX_STP | TX_ENP);
                                        dev->stats.tx_packets++;
                                        dev->stats.tx_bytes += skb->len;
-                                       dev->trans_start = jiffies;
+                                       netif_trans_update(dev);
                                } else {
                                        /* Or do it through dma */
                                        memcpy(card->tx_dma_handle_host,
@@ -2258,7 +2258,7 @@ fst_tx_timeout(struct net_device *dev)
            card->card_no, port->index);
        fst_issue_cmd(port, ABORTTX);
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        netif_wake_queue(dev);
        port->start = 0;
 }
index bb33b24..299140c 100644
@@ -2105,7 +2105,7 @@ static void lmc_driver_timeout(struct net_device *dev)
     sc->lmc_device->stats.tx_errors++;
     sc->extra_stats.tx_ProcTimeout++; /* -baz */
 
-    dev->trans_start = jiffies; /* prevent tx timeout */
+    netif_trans_update(dev); /* prevent tx timeout */
 
 bug_out:
 
index 8fef8d8..d98c7e5 100644
@@ -860,9 +860,9 @@ prepare_to_send( struct sk_buff  *skb,  struct net_device  *dev )
 
        outb( inb( dev->base_addr + CSR0 ) | TR_REQ,  dev->base_addr + CSR0 );
 #ifdef CONFIG_SBNI_MULTILINE
-       nl->master->trans_start = jiffies;
+       netif_trans_update(nl->master);
 #else
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 #endif
 }
 
@@ -889,10 +889,10 @@ drop_xmit_queue( struct net_device  *dev )
        nl->state &= ~(FL_WAIT_ACK | FL_NEED_RESEND);
 #ifdef CONFIG_SBNI_MULTILINE
        netif_start_queue( nl->master );
-       nl->master->trans_start = jiffies;
+       netif_trans_update(nl->master);
 #else
        netif_start_queue( dev );
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 #endif
 }
 
index a9970f1..bb74f4b 100644
@@ -334,7 +334,7 @@ int i2400m_net_tx(struct i2400m *i2400m, struct net_device *net_dev,
        d_fnstart(3, dev, "(i2400m %p net_dev %p skb %p)\n",
                  i2400m, net_dev, skb);
        /* FIXME: check eth hdr, only IPv4 is routed by the device as of now */
-       net_dev->trans_start = jiffies;
+       netif_trans_update(net_dev);
        i2400m_tx_prep_header(skb);
        d_printf(3, dev, "NETTX: skb %p sending %d bytes to radio\n",
                 skb, skb->len);
index 7212802..9fb8d74 100644
@@ -1050,11 +1050,11 @@ int ath10k_ce_alloc_pipe(struct ath10k *ar, int ce_id,
         *
         * For the lack of a better place do the check here.
         */
-       BUILD_BUG_ON(2*TARGET_NUM_MSDU_DESC >
+       BUILD_BUG_ON(2 * TARGET_NUM_MSDU_DESC >
                     (CE_HTT_H2T_MSG_SRC_NENTRIES - 1));
-       BUILD_BUG_ON(2*TARGET_10X_NUM_MSDU_DESC >
+       BUILD_BUG_ON(2 * TARGET_10X_NUM_MSDU_DESC >
                     (CE_HTT_H2T_MSG_SRC_NENTRIES - 1));
-       BUILD_BUG_ON(2*TARGET_TLV_NUM_MSDU_DESC >
+       BUILD_BUG_ON(2 * TARGET_TLV_NUM_MSDU_DESC >
                     (CE_HTT_H2T_MSG_SRC_NENTRIES - 1));
 
        ce_state->ar = ar;
index 25cafcf..dfc0986 100644
@@ -408,7 +408,7 @@ static inline u32 ath10k_ce_base_address(struct ath10k *ar, unsigned int ce_id)
 
 /* Ring arithmetic (modulus number of entries in ring, which is a pwr of 2). */
 #define CE_RING_DELTA(nentries_mask, fromidx, toidx) \
-       (((int)(toidx)-(int)(fromidx)) & (nentries_mask))
+       (((int)(toidx) - (int)(fromidx)) & (nentries_mask))
 
 #define CE_RING_IDX_INCR(nentries_mask, idx) (((idx) + 1) & (nentries_mask))
 #define CE_RING_IDX_ADD(nentries_mask, idx, num) \
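
A worked example of the ring arithmetic this whitespace cleanup touches: with an 8-entry ring, nentries_mask is 7 and the masked subtraction wraps negative deltas correctly:

	/* CE_RING_DELTA(7, 6, 1) = (1 - 6) & 7 = (-5) & 7 = 3,
	 * i.e. from index 6 three increments (7, 0, 1) reach index 1 */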
index b2c7fe3..e94cb87 100644
@@ -63,8 +63,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
                .cal_data_len = 2116,
                .fw = {
                        .dir = QCA988X_HW_2_0_FW_DIR,
-                       .fw = QCA988X_HW_2_0_FW_FILE,
-                       .otp = QCA988X_HW_2_0_OTP_FILE,
                        .board = QCA988X_HW_2_0_BOARD_DATA_FILE,
                        .board_size = QCA988X_BOARD_DATA_SZ,
                        .board_ext_size = QCA988X_BOARD_EXT_DATA_SZ,
@@ -82,8 +80,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
                .cal_data_len = 8124,
                .fw = {
                        .dir = QCA6174_HW_2_1_FW_DIR,
-                       .fw = QCA6174_HW_2_1_FW_FILE,
-                       .otp = QCA6174_HW_2_1_OTP_FILE,
                        .board = QCA6174_HW_2_1_BOARD_DATA_FILE,
                        .board_size = QCA6174_BOARD_DATA_SZ,
                        .board_ext_size = QCA6174_BOARD_EXT_DATA_SZ,
@@ -102,8 +98,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
                .cal_data_len = 8124,
                .fw = {
                        .dir = QCA6174_HW_2_1_FW_DIR,
-                       .fw = QCA6174_HW_2_1_FW_FILE,
-                       .otp = QCA6174_HW_2_1_OTP_FILE,
                        .board = QCA6174_HW_2_1_BOARD_DATA_FILE,
                        .board_size = QCA6174_BOARD_DATA_SZ,
                        .board_ext_size = QCA6174_BOARD_EXT_DATA_SZ,
@@ -122,8 +116,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
                .cal_data_len = 8124,
                .fw = {
                        .dir = QCA6174_HW_3_0_FW_DIR,
-                       .fw = QCA6174_HW_3_0_FW_FILE,
-                       .otp = QCA6174_HW_3_0_OTP_FILE,
                        .board = QCA6174_HW_3_0_BOARD_DATA_FILE,
                        .board_size = QCA6174_BOARD_DATA_SZ,
                        .board_ext_size = QCA6174_BOARD_EXT_DATA_SZ,
@@ -143,8 +135,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
                .fw = {
                        /* uses same binaries as hw3.0 */
                        .dir = QCA6174_HW_3_0_FW_DIR,
-                       .fw = QCA6174_HW_3_0_FW_FILE,
-                       .otp = QCA6174_HW_3_0_OTP_FILE,
                        .board = QCA6174_HW_3_0_BOARD_DATA_FILE,
                        .board_size = QCA6174_BOARD_DATA_SZ,
                        .board_ext_size = QCA6174_BOARD_EXT_DATA_SZ,
@@ -167,8 +157,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
                .cal_data_len = 12064,
                .fw = {
                        .dir = QCA99X0_HW_2_0_FW_DIR,
-                       .fw = QCA99X0_HW_2_0_FW_FILE,
-                       .otp = QCA99X0_HW_2_0_OTP_FILE,
                        .board = QCA99X0_HW_2_0_BOARD_DATA_FILE,
                        .board_size = QCA99X0_BOARD_DATA_SZ,
                        .board_ext_size = QCA99X0_BOARD_EXT_DATA_SZ,
@@ -186,8 +174,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
                .cal_data_len = 8124,
                .fw = {
                        .dir = QCA9377_HW_1_0_FW_DIR,
-                       .fw = QCA9377_HW_1_0_FW_FILE,
-                       .otp = QCA9377_HW_1_0_OTP_FILE,
                        .board = QCA9377_HW_1_0_BOARD_DATA_FILE,
                        .board_size = QCA9377_BOARD_DATA_SZ,
                        .board_ext_size = QCA9377_BOARD_EXT_DATA_SZ,
@@ -205,8 +191,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
                .cal_data_len = 8124,
                .fw = {
                        .dir = QCA9377_HW_1_0_FW_DIR,
-                       .fw = QCA9377_HW_1_0_FW_FILE,
-                       .otp = QCA9377_HW_1_0_OTP_FILE,
                        .board = QCA9377_HW_1_0_BOARD_DATA_FILE,
                        .board_size = QCA9377_BOARD_DATA_SZ,
                        .board_ext_size = QCA9377_BOARD_EXT_DATA_SZ,
@@ -229,8 +213,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
                .cal_data_len = 12064,
                .fw = {
                        .dir = QCA4019_HW_1_0_FW_DIR,
-                       .fw = QCA4019_HW_1_0_FW_FILE,
-                       .otp = QCA4019_HW_1_0_OTP_FILE,
                        .board = QCA4019_HW_1_0_BOARD_DATA_FILE,
                        .board_size = QCA4019_BOARD_DATA_SZ,
                        .board_ext_size = QCA4019_BOARD_EXT_DATA_SZ,
@@ -279,7 +261,7 @@ void ath10k_core_get_fw_features_str(struct ath10k *ar,
        int i;
 
        for (i = 0; i < ATH10K_FW_FEATURE_COUNT; i++) {
-               if (test_bit(i, ar->fw_features)) {
+               if (test_bit(i, ar->normal_mode_fw.fw_file.fw_features)) {
                        if (len > 0)
                                len += scnprintf(buf + len, buf_len - len, ",");
 
@@ -556,7 +538,8 @@ static int ath10k_core_get_board_id_from_otp(struct ath10k *ar)
 
        address = ar->hw_params.patch_load_addr;
 
-       if (!ar->otp_data || !ar->otp_len) {
+       if (!ar->normal_mode_fw.fw_file.otp_data ||
+           !ar->normal_mode_fw.fw_file.otp_len) {
                ath10k_warn(ar,
                            "failed to retrieve board id because of invalid otp\n");
                return -ENODATA;
@@ -564,9 +547,11 @@ static int ath10k_core_get_board_id_from_otp(struct ath10k *ar)
 
        ath10k_dbg(ar, ATH10K_DBG_BOOT,
                   "boot upload otp to 0x%x len %zd for board id\n",
-                  address, ar->otp_len);
+                  address, ar->normal_mode_fw.fw_file.otp_len);
 
-       ret = ath10k_bmi_fast_download(ar, address, ar->otp_data, ar->otp_len);
+       ret = ath10k_bmi_fast_download(ar, address,
+                                      ar->normal_mode_fw.fw_file.otp_data,
+                                      ar->normal_mode_fw.fw_file.otp_len);
        if (ret) {
                ath10k_err(ar, "could not write otp for board id check: %d\n",
                           ret);
@@ -604,7 +589,9 @@ static int ath10k_download_and_run_otp(struct ath10k *ar)
        u32 bmi_otp_exe_param = ar->hw_params.otp_exe_param;
        int ret;
 
-       ret = ath10k_download_board_data(ar, ar->board_data, ar->board_len);
+       ret = ath10k_download_board_data(ar,
+                                        ar->running_fw->board_data,
+                                        ar->running_fw->board_len);
        if (ret) {
                ath10k_err(ar, "failed to download board data: %d\n", ret);
                return ret;
@@ -612,16 +599,20 @@ static int ath10k_download_and_run_otp(struct ath10k *ar)
 
        /* OTP is optional */
 
-       if (!ar->otp_data || !ar->otp_len) {
+       if (!ar->running_fw->fw_file.otp_data ||
+           !ar->running_fw->fw_file.otp_len) {
                ath10k_warn(ar, "Not running otp, calibration will be incorrect (otp-data %p otp_len %zd)!\n",
-                           ar->otp_data, ar->otp_len);
+                           ar->running_fw->fw_file.otp_data,
+                           ar->running_fw->fw_file.otp_len);
                return 0;
        }
 
        ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot upload otp to 0x%x len %zd\n",
-                  address, ar->otp_len);
+                  address, ar->running_fw->fw_file.otp_len);
 
-       ret = ath10k_bmi_fast_download(ar, address, ar->otp_data, ar->otp_len);
+       ret = ath10k_bmi_fast_download(ar, address,
+                                      ar->running_fw->fw_file.otp_data,
+                                      ar->running_fw->fw_file.otp_len);
        if (ret) {
                ath10k_err(ar, "could not write otp (%d)\n", ret);
                return ret;
@@ -636,7 +627,7 @@ static int ath10k_download_and_run_otp(struct ath10k *ar)
        ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot otp execute result %d\n", result);
 
        if (!(skip_otp || test_bit(ATH10K_FW_FEATURE_IGNORE_OTP_RESULT,
-                                  ar->fw_features)) &&
+                                  ar->running_fw->fw_file.fw_features)) &&
            result != 0) {
                ath10k_err(ar, "otp calibration failed: %d", result);
                return -EINVAL;
@@ -645,46 +636,32 @@ static int ath10k_download_and_run_otp(struct ath10k *ar)
        return 0;
 }
 
-static int ath10k_download_fw(struct ath10k *ar, enum ath10k_firmware_mode mode)
+static int ath10k_download_fw(struct ath10k *ar)
 {
        u32 address, data_len;
-       const char *mode_name;
        const void *data;
        int ret;
 
        address = ar->hw_params.patch_load_addr;
 
-       switch (mode) {
-       case ATH10K_FIRMWARE_MODE_NORMAL:
-               data = ar->firmware_data;
-               data_len = ar->firmware_len;
-               mode_name = "normal";
-               ret = ath10k_swap_code_seg_configure(ar,
-                                                    ATH10K_SWAP_CODE_SEG_BIN_TYPE_FW);
-               if (ret) {
-                       ath10k_err(ar, "failed to configure fw code swap: %d\n",
-                                  ret);
-                       return ret;
-               }
-               break;
-       case ATH10K_FIRMWARE_MODE_UTF:
-               data = ar->testmode.utf_firmware_data;
-               data_len = ar->testmode.utf_firmware_len;
-               mode_name = "utf";
-               break;
-       default:
-               ath10k_err(ar, "unknown firmware mode: %d\n", mode);
-               return -EINVAL;
+       data = ar->running_fw->fw_file.firmware_data;
+       data_len = ar->running_fw->fw_file.firmware_len;
+
+       ret = ath10k_swap_code_seg_configure(ar);
+       if (ret) {
+               ath10k_err(ar, "failed to configure fw code swap: %d\n",
+                          ret);
+               return ret;
        }
 
        ath10k_dbg(ar, ATH10K_DBG_BOOT,
-                  "boot uploading firmware image %p len %d mode %s\n",
-                  data, data_len, mode_name);
+                  "boot uploading firmware image %p len %d\n",
+                  data, data_len);
 
        ret = ath10k_bmi_fast_download(ar, address, data, data_len);
        if (ret) {
-               ath10k_err(ar, "failed to download %s firmware: %d\n",
-                          mode_name, ret);
+               ath10k_err(ar, "failed to download firmware: %d\n",
+                          ret);
                return ret;
        }
 
@@ -693,34 +670,30 @@ static int ath10k_download_fw(struct ath10k *ar, enum ath10k_firmware_mode mode)
 
 static void ath10k_core_free_board_files(struct ath10k *ar)
 {
-       if (!IS_ERR(ar->board))
-               release_firmware(ar->board);
+       if (!IS_ERR(ar->normal_mode_fw.board))
+               release_firmware(ar->normal_mode_fw.board);
 
-       ar->board = NULL;
-       ar->board_data = NULL;
-       ar->board_len = 0;
+       ar->normal_mode_fw.board = NULL;
+       ar->normal_mode_fw.board_data = NULL;
+       ar->normal_mode_fw.board_len = 0;
 }
 
 static void ath10k_core_free_firmware_files(struct ath10k *ar)
 {
-       if (!IS_ERR(ar->otp))
-               release_firmware(ar->otp);
-
-       if (!IS_ERR(ar->firmware))
-               release_firmware(ar->firmware);
+       if (!IS_ERR(ar->normal_mode_fw.fw_file.firmware))
+               release_firmware(ar->normal_mode_fw.fw_file.firmware);
 
        if (!IS_ERR(ar->cal_file))
                release_firmware(ar->cal_file);
 
        ath10k_swap_code_seg_release(ar);
 
-       ar->otp = NULL;
-       ar->otp_data = NULL;
-       ar->otp_len = 0;
+       ar->normal_mode_fw.fw_file.otp_data = NULL;
+       ar->normal_mode_fw.fw_file.otp_len = 0;
 
-       ar->firmware = NULL;
-       ar->firmware_data = NULL;
-       ar->firmware_len = 0;
+       ar->normal_mode_fw.fw_file.firmware = NULL;
+       ar->normal_mode_fw.fw_file.firmware_data = NULL;
+       ar->normal_mode_fw.fw_file.firmware_len = 0;
 
        ar->cal_file = NULL;
 }
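
The mechanical renames through this file all point at the new per-use firmware containers; the fields they must carry can be read off the accesses above. A hedged reconstruction (field set inferred from this diff alone, struct names assumed from the ar->normal_mode_fw / ar->running_fw usage; the real structs may hold more):

	struct ath10k_fw_file {
		const struct firmware *firmware;

		char fw_version[ETHTOOL_FWVERS_LEN];
		DECLARE_BITMAP(fw_features, ATH10K_FW_FEATURE_COUNT);

		const void *firmware_data;
		size_t firmware_len;

		const void *otp_data;
		size_t otp_len;
	};

	struct ath10k_fw_components {
		const struct firmware *board;
		const void *board_data;
		size_t board_len;

		struct ath10k_fw_file fw_file;
	};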
@@ -759,14 +732,14 @@ static int ath10k_core_fetch_board_data_api_1(struct ath10k *ar)
                return -EINVAL;
        }
 
-       ar->board = ath10k_fetch_fw_file(ar,
-                                        ar->hw_params.fw.dir,
-                                        ar->hw_params.fw.board);
-       if (IS_ERR(ar->board))
-               return PTR_ERR(ar->board);
+       ar->normal_mode_fw.board = ath10k_fetch_fw_file(ar,
+                                                       ar->hw_params.fw.dir,
+                                                       ar->hw_params.fw.board);
+       if (IS_ERR(ar->normal_mode_fw.board))
+               return PTR_ERR(ar->normal_mode_fw.board);
 
-       ar->board_data = ar->board->data;
-       ar->board_len = ar->board->size;
+       ar->normal_mode_fw.board_data = ar->normal_mode_fw.board->data;
+       ar->normal_mode_fw.board_len = ar->normal_mode_fw.board->size;
 
        return 0;
 }
@@ -826,8 +799,8 @@ static int ath10k_core_parse_bd_ie_board(struct ath10k *ar,
                                   "boot found board data for '%s'",
                                   boardname);
 
-                       ar->board_data = board_ie_data;
-                       ar->board_len = board_ie_len;
+                       ar->normal_mode_fw.board_data = board_ie_data;
+                       ar->normal_mode_fw.board_len = board_ie_len;
 
                        ret = 0;
                        goto out;
@@ -860,12 +833,14 @@ static int ath10k_core_fetch_board_data_api_n(struct ath10k *ar,
        const u8 *data;
        int ret, ie_id;
 
-       ar->board = ath10k_fetch_fw_file(ar, ar->hw_params.fw.dir, filename);
-       if (IS_ERR(ar->board))
-               return PTR_ERR(ar->board);
+       ar->normal_mode_fw.board = ath10k_fetch_fw_file(ar,
+                                                       ar->hw_params.fw.dir,
+                                                       filename);
+       if (IS_ERR(ar->normal_mode_fw.board))
+               return PTR_ERR(ar->normal_mode_fw.board);
 
-       data = ar->board->data;
-       len = ar->board->size;
+       data = ar->normal_mode_fw.board->data;
+       len = ar->normal_mode_fw.board->size;
 
        /* magic has extra null byte padded */
        magic_len = strlen(ATH10K_BOARD_MAGIC) + 1;
@@ -932,7 +907,7 @@ static int ath10k_core_fetch_board_data_api_n(struct ath10k *ar,
        }
 
 out:
-       if (!ar->board_data || !ar->board_len) {
+       if (!ar->normal_mode_fw.board_data || !ar->normal_mode_fw.board_len) {
                ath10k_err(ar,
                           "failed to fetch board data for %s from %s/%s\n",
                           boardname, ar->hw_params.fw.dir, filename);
@@ -1000,51 +975,8 @@ success:
        return 0;
 }
 
-static int ath10k_core_fetch_firmware_api_1(struct ath10k *ar)
-{
-       int ret = 0;
-
-       if (ar->hw_params.fw.fw == NULL) {
-               ath10k_err(ar, "firmware file not defined\n");
-               return -EINVAL;
-       }
-
-       ar->firmware = ath10k_fetch_fw_file(ar,
-                                           ar->hw_params.fw.dir,
-                                           ar->hw_params.fw.fw);
-       if (IS_ERR(ar->firmware)) {
-               ret = PTR_ERR(ar->firmware);
-               ath10k_err(ar, "could not fetch firmware (%d)\n", ret);
-               goto err;
-       }
-
-       ar->firmware_data = ar->firmware->data;
-       ar->firmware_len = ar->firmware->size;
-
-       /* OTP may be undefined. If so, don't fetch it at all */
-       if (ar->hw_params.fw.otp == NULL)
-               return 0;
-
-       ar->otp = ath10k_fetch_fw_file(ar,
-                                      ar->hw_params.fw.dir,
-                                      ar->hw_params.fw.otp);
-       if (IS_ERR(ar->otp)) {
-               ret = PTR_ERR(ar->otp);
-               ath10k_err(ar, "could not fetch otp (%d)\n", ret);
-               goto err;
-       }
-
-       ar->otp_data = ar->otp->data;
-       ar->otp_len = ar->otp->size;
-
-       return 0;
-
-err:
-       ath10k_core_free_firmware_files(ar);
-       return ret;
-}
-
-static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
+int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name,
+                                    struct ath10k_fw_file *fw_file)
 {
        size_t magic_len, len, ie_len;
        int ie_id, i, index, bit, ret;
@@ -1053,15 +985,17 @@ static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
        __le32 *timestamp, *version;
 
        /* first fetch the firmware file (firmware-*.bin) */
-       ar->firmware = ath10k_fetch_fw_file(ar, ar->hw_params.fw.dir, name);
-       if (IS_ERR(ar->firmware)) {
+       fw_file->firmware = ath10k_fetch_fw_file(ar, ar->hw_params.fw.dir,
+                                                name);
+       if (IS_ERR(fw_file->firmware)) {
                ath10k_err(ar, "could not fetch firmware file '%s/%s': %ld\n",
-                          ar->hw_params.fw.dir, name, PTR_ERR(ar->firmware));
-               return PTR_ERR(ar->firmware);
+                          ar->hw_params.fw.dir, name,
+                          PTR_ERR(fw_file->firmware));
+               return PTR_ERR(fw_file->firmware);
        }
 
-       data = ar->firmware->data;
-       len = ar->firmware->size;
+       data = fw_file->firmware->data;
+       len = fw_file->firmware->size;
 
        /* magic also includes the null byte, check that as well */
        magic_len = strlen(ATH10K_FIRMWARE_MAGIC) + 1;
@@ -1104,15 +1038,15 @@ static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
 
                switch (ie_id) {
                case ATH10K_FW_IE_FW_VERSION:
-                       if (ie_len > sizeof(ar->hw->wiphy->fw_version) - 1)
+                       if (ie_len > sizeof(fw_file->fw_version) - 1)
                                break;
 
-                       memcpy(ar->hw->wiphy->fw_version, data, ie_len);
-                       ar->hw->wiphy->fw_version[ie_len] = '\0';
+                       memcpy(fw_file->fw_version, data, ie_len);
+                       fw_file->fw_version[ie_len] = '\0';
 
                        ath10k_dbg(ar, ATH10K_DBG_BOOT,
                                   "found fw version %s\n",
-                                   ar->hw->wiphy->fw_version);
+                                   fw_file->fw_version);
                        break;
                case ATH10K_FW_IE_TIMESTAMP:
                        if (ie_len != sizeof(u32))
@@ -1139,21 +1073,21 @@ static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
                                        ath10k_dbg(ar, ATH10K_DBG_BOOT,
                                                   "Enabling feature bit: %i\n",
                                                   i);
-                                       __set_bit(i, ar->fw_features);
+                                       __set_bit(i, fw_file->fw_features);
                                }
                        }
 
                        ath10k_dbg_dump(ar, ATH10K_DBG_BOOT, "features", "",
-                                       ar->fw_features,
-                                       sizeof(ar->fw_features));
+                                       fw_file->fw_features,
+                                       sizeof(fw_file->fw_features));
                        break;
                case ATH10K_FW_IE_FW_IMAGE:
                        ath10k_dbg(ar, ATH10K_DBG_BOOT,
                                   "found fw image ie (%zd B)\n",
                                   ie_len);
 
-                       ar->firmware_data = data;
-                       ar->firmware_len = ie_len;
+                       fw_file->firmware_data = data;
+                       fw_file->firmware_len = ie_len;
 
                        break;
                case ATH10K_FW_IE_OTP_IMAGE:
@@ -1161,8 +1095,8 @@ static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
                                   "found otp image ie (%zd B)\n",
                                   ie_len);
 
-                       ar->otp_data = data;
-                       ar->otp_len = ie_len;
+                       fw_file->otp_data = data;
+                       fw_file->otp_len = ie_len;
 
                        break;
                case ATH10K_FW_IE_WMI_OP_VERSION:
@@ -1171,10 +1105,10 @@ static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
 
                        version = (__le32 *)data;
 
-                       ar->wmi.op_version = le32_to_cpup(version);
+                       fw_file->wmi_op_version = le32_to_cpup(version);
 
                        ath10k_dbg(ar, ATH10K_DBG_BOOT, "found fw ie wmi op version %d\n",
-                                  ar->wmi.op_version);
+                                  fw_file->wmi_op_version);
                        break;
                case ATH10K_FW_IE_HTT_OP_VERSION:
                        if (ie_len != sizeof(u32))
@@ -1182,17 +1116,17 @@ static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
 
                        version = (__le32 *)data;
 
-                       ar->htt.op_version = le32_to_cpup(version);
+                       fw_file->htt_op_version = le32_to_cpup(version);
 
                        ath10k_dbg(ar, ATH10K_DBG_BOOT, "found fw ie htt op version %d\n",
-                                  ar->htt.op_version);
+                                  fw_file->htt_op_version);
                        break;
                case ATH10K_FW_IE_FW_CODE_SWAP_IMAGE:
                        ath10k_dbg(ar, ATH10K_DBG_BOOT,
                                   "found fw code swap image ie (%zd B)\n",
                                   ie_len);
-                       ar->swap.firmware_codeswap_data = data;
-                       ar->swap.firmware_codeswap_len = ie_len;
+                       fw_file->codeswap_data = data;
+                       fw_file->codeswap_len = ie_len;
                        break;
                default:
                        ath10k_warn(ar, "Unknown FW IE: %u\n",
@@ -1207,7 +1141,8 @@ static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
                data += ie_len;
        }
 
-       if (!ar->firmware_data || !ar->firmware_len) {
+       if (!fw_file->firmware_data || !fw_file->firmware_len) {
                ath10k_warn(ar, "No ATH10K_FW_IE_FW_IMAGE found from '%s/%s', skipping\n",
                            ar->hw_params.fw.dir, name);
                ret = -ENOMEDIUM;
@@ -1231,35 +1166,32 @@ static int ath10k_core_fetch_firmware_files(struct ath10k *ar)
        ar->fw_api = 5;
        ath10k_dbg(ar, ATH10K_DBG_BOOT, "trying fw api %d\n", ar->fw_api);
 
-       ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API5_FILE);
+       ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API5_FILE,
+                                              &ar->normal_mode_fw.fw_file);
        if (ret == 0)
                goto success;
 
        ar->fw_api = 4;
        ath10k_dbg(ar, ATH10K_DBG_BOOT, "trying fw api %d\n", ar->fw_api);
 
-       ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API4_FILE);
+       ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API4_FILE,
+                                              &ar->normal_mode_fw.fw_file);
        if (ret == 0)
                goto success;
 
        ar->fw_api = 3;
        ath10k_dbg(ar, ATH10K_DBG_BOOT, "trying fw api %d\n", ar->fw_api);
 
-       ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API3_FILE);
+       ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API3_FILE,
+                                              &ar->normal_mode_fw.fw_file);
        if (ret == 0)
                goto success;
 
        ar->fw_api = 2;
        ath10k_dbg(ar, ATH10K_DBG_BOOT, "trying fw api %d\n", ar->fw_api);
 
-       ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API2_FILE);
-       if (ret == 0)
-               goto success;
-
-       ar->fw_api = 1;
-       ath10k_dbg(ar, ATH10K_DBG_BOOT, "trying fw api %d\n", ar->fw_api);
-
-       ret = ath10k_core_fetch_firmware_api_1(ar);
+       ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API2_FILE,
+                                              &ar->normal_mode_fw.fw_file);
        if (ret)
                return ret;
 
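With API 1 support gone, the fall-through above is a fixed API 5 to 2 cascade. A table-driven sketch of the same logic, purely illustrative (the driver keeps the unrolled form):

	static const struct {
		int api;
		const char *file;
	} fw_files[] = {
		{ 5, ATH10K_FW_API5_FILE },
		{ 4, ATH10K_FW_API4_FILE },
		{ 3, ATH10K_FW_API3_FILE },
		{ 2, ATH10K_FW_API2_FILE },
	};
	int i;

	for (i = 0; i < ARRAY_SIZE(fw_files); i++) {
		ar->fw_api = fw_files[i].api;
		ath10k_dbg(ar, ATH10K_DBG_BOOT, "trying fw api %d\n", ar->fw_api);

		ret = ath10k_core_fetch_firmware_api_n(ar, fw_files[i].file,
						       &ar->normal_mode_fw.fw_file);
		if (ret == 0)
			goto success;
	}

	return ret;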
@@ -1497,15 +1429,17 @@ static void ath10k_core_restart(struct work_struct *work)
 
 static int ath10k_core_init_firmware_features(struct ath10k *ar)
 {
-       if (test_bit(ATH10K_FW_FEATURE_WMI_10_2, ar->fw_features) &&
-           !test_bit(ATH10K_FW_FEATURE_WMI_10X, ar->fw_features)) {
+       struct ath10k_fw_file *fw_file = &ar->normal_mode_fw.fw_file;
+
+       if (test_bit(ATH10K_FW_FEATURE_WMI_10_2, fw_file->fw_features) &&
+           !test_bit(ATH10K_FW_FEATURE_WMI_10X, fw_file->fw_features)) {
                ath10k_err(ar, "feature bits corrupted: 10.2 feature requires 10.x feature to be set as well");
                return -EINVAL;
        }
 
-       if (ar->wmi.op_version >= ATH10K_FW_WMI_OP_VERSION_MAX) {
+       if (fw_file->wmi_op_version >= ATH10K_FW_WMI_OP_VERSION_MAX) {
                ath10k_err(ar, "unsupported WMI OP version (max %d): %d\n",
-                          ATH10K_FW_WMI_OP_VERSION_MAX, ar->wmi.op_version);
+                          ATH10K_FW_WMI_OP_VERSION_MAX, fw_file->wmi_op_version);
                return -EINVAL;
        }
 
@@ -1517,7 +1451,7 @@ static int ath10k_core_init_firmware_features(struct ath10k *ar)
                break;
        case ATH10K_CRYPT_MODE_SW:
                if (!test_bit(ATH10K_FW_FEATURE_RAW_MODE_SUPPORT,
-                             ar->fw_features)) {
+                             fw_file->fw_features)) {
                        ath10k_err(ar, "cryptmode > 0 requires raw mode support from firmware");
                        return -EINVAL;
                }
@@ -1536,7 +1470,7 @@ static int ath10k_core_init_firmware_features(struct ath10k *ar)
 
        if (rawmode) {
                if (!test_bit(ATH10K_FW_FEATURE_RAW_MODE_SUPPORT,
-                             ar->fw_features)) {
+                             fw_file->fw_features)) {
                        ath10k_err(ar, "rawmode = 1 requires support from firmware");
                        return -EINVAL;
                }
@@ -1561,19 +1495,19 @@ static int ath10k_core_init_firmware_features(struct ath10k *ar)
        /* Backwards compatibility for firmwares without
         * ATH10K_FW_IE_WMI_OP_VERSION.
         */
-       if (ar->wmi.op_version == ATH10K_FW_WMI_OP_VERSION_UNSET) {
-               if (test_bit(ATH10K_FW_FEATURE_WMI_10X, ar->fw_features)) {
+       if (fw_file->wmi_op_version == ATH10K_FW_WMI_OP_VERSION_UNSET) {
+               if (test_bit(ATH10K_FW_FEATURE_WMI_10X, fw_file->fw_features)) {
                        if (test_bit(ATH10K_FW_FEATURE_WMI_10_2,
-                                    ar->fw_features))
-                               ar->wmi.op_version = ATH10K_FW_WMI_OP_VERSION_10_2;
+                                    fw_file->fw_features))
+                               fw_file->wmi_op_version = ATH10K_FW_WMI_OP_VERSION_10_2;
                        else
-                               ar->wmi.op_version = ATH10K_FW_WMI_OP_VERSION_10_1;
+                               fw_file->wmi_op_version = ATH10K_FW_WMI_OP_VERSION_10_1;
                } else {
-                       ar->wmi.op_version = ATH10K_FW_WMI_OP_VERSION_MAIN;
+                       fw_file->wmi_op_version = ATH10K_FW_WMI_OP_VERSION_MAIN;
                }
        }
 
-       switch (ar->wmi.op_version) {
+       switch (fw_file->wmi_op_version) {
        case ATH10K_FW_WMI_OP_VERSION_MAIN:
                ar->max_num_peers = TARGET_NUM_PEERS;
                ar->max_num_stations = TARGET_NUM_STATIONS;
@@ -1620,7 +1554,7 @@ static int ath10k_core_init_firmware_features(struct ath10k *ar)
                ar->max_spatial_stream = ar->hw_params.max_spatial_stream;
 
                if (test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL,
-                            ar->fw_features))
+                            fw_file->fw_features))
                        ar->htt.max_num_pending_tx = TARGET_10_4_NUM_MSDU_DESC_PFC;
                else
                        ar->htt.max_num_pending_tx = TARGET_10_4_NUM_MSDU_DESC;
@@ -1634,18 +1568,18 @@ static int ath10k_core_init_firmware_features(struct ath10k *ar)
        /* Backwards compatibility for firmwares without
         * ATH10K_FW_IE_HTT_OP_VERSION.
         */
-       if (ar->htt.op_version == ATH10K_FW_HTT_OP_VERSION_UNSET) {
-               switch (ar->wmi.op_version) {
+       if (fw_file->htt_op_version == ATH10K_FW_HTT_OP_VERSION_UNSET) {
+               switch (fw_file->wmi_op_version) {
                case ATH10K_FW_WMI_OP_VERSION_MAIN:
-                       ar->htt.op_version = ATH10K_FW_HTT_OP_VERSION_MAIN;
+                       fw_file->htt_op_version = ATH10K_FW_HTT_OP_VERSION_MAIN;
                        break;
                case ATH10K_FW_WMI_OP_VERSION_10_1:
                case ATH10K_FW_WMI_OP_VERSION_10_2:
                case ATH10K_FW_WMI_OP_VERSION_10_2_4:
-                       ar->htt.op_version = ATH10K_FW_HTT_OP_VERSION_10_1;
+                       fw_file->htt_op_version = ATH10K_FW_HTT_OP_VERSION_10_1;
                        break;
                case ATH10K_FW_WMI_OP_VERSION_TLV:
-                       ar->htt.op_version = ATH10K_FW_HTT_OP_VERSION_TLV;
+                       fw_file->htt_op_version = ATH10K_FW_HTT_OP_VERSION_TLV;
                        break;
                case ATH10K_FW_WMI_OP_VERSION_10_4:
                case ATH10K_FW_WMI_OP_VERSION_UNSET:
@@ -1658,7 +1592,8 @@ static int ath10k_core_init_firmware_features(struct ath10k *ar)
        return 0;
 }
 
-int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode)
+int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode,
+                     const struct ath10k_fw_components *fw)
 {
        int status;
        u32 val;
@@ -1667,6 +1602,8 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode)
 
        clear_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags);
 
+       ar->running_fw = fw;
+
        ath10k_bmi_start(ar);
 
        if (ath10k_init_configure_target(ar)) {
@@ -1685,7 +1622,7 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode)
         * to set the clock source once the target is initialized.
         */
        if (test_bit(ATH10K_FW_FEATURE_SUPPORTS_SKIP_CLOCK_INIT,
-                    ar->fw_features)) {
+                    ar->running_fw->fw_file.fw_features)) {
                status = ath10k_bmi_write32(ar, hi_skip_clock_init, 1);
                if (status) {
                        ath10k_err(ar, "could not write to skip_clock_init: %d\n",
@@ -1694,7 +1631,7 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode)
                }
        }
 
-       status = ath10k_download_fw(ar, mode);
+       status = ath10k_download_fw(ar);
        if (status)
                goto err;
 
@@ -1787,8 +1724,7 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode)
                if (ath10k_peer_stats_enabled(ar))
                        val = WMI_10_4_PEER_STATS;
 
-               status = ath10k_wmi_ext_resource_config(ar,
-                                                       WMI_HOST_PLATFORM_HIGH_PERF, val);
+               status = ath10k_mac_ext_resource_config(ar, val);
                if (status) {
                        ath10k_err(ar,
                                   "failed to send ext resource cfg command : %d\n",
@@ -1931,6 +1867,11 @@ static int ath10k_core_probe_fw(struct ath10k *ar)
                goto err_power_down;
        }
 
+       BUILD_BUG_ON(sizeof(ar->hw->wiphy->fw_version) !=
+                    sizeof(ar->normal_mode_fw.fw_file.fw_version));
+       memcpy(ar->hw->wiphy->fw_version, ar->normal_mode_fw.fw_file.fw_version,
+              sizeof(ar->hw->wiphy->fw_version));
+
        ath10k_debug_print_hwfw_info(ar);
 
        ret = ath10k_core_pre_cal_download(ar);
@@ -1973,7 +1914,8 @@ static int ath10k_core_probe_fw(struct ath10k *ar)
 
        mutex_lock(&ar->conf_mutex);
 
-       ret = ath10k_core_start(ar, ATH10K_FIRMWARE_MODE_NORMAL);
+       ret = ath10k_core_start(ar, ATH10K_FIRMWARE_MODE_NORMAL,
+                               &ar->normal_mode_fw);
        if (ret) {
                ath10k_err(ar, "could not init core (%d)\n", ret);
                goto err_unlock;
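Passing the component set explicitly is what lets testmode boot a different image through the same entry point. A hedged sketch of the UTF-mode call (utf_mode_fw is added to struct ath10k later in this diff):

	ret = ath10k_core_start(ar, ATH10K_FIRMWARE_MODE_UTF,
				&ar->testmode.utf_mode_fw);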
index 362bbed..1379054 100644 (file)
@@ -44,8 +44,8 @@
 
 #define ATH10K_SCAN_ID 0
 #define WMI_READY_TIMEOUT (5 * HZ)
-#define ATH10K_FLUSH_TIMEOUT_HZ (5*HZ)
-#define ATH10K_CONNECTION_LOSS_HZ (3*HZ)
+#define ATH10K_FLUSH_TIMEOUT_HZ (5 * HZ)
+#define ATH10K_CONNECTION_LOSS_HZ (3 * HZ)
 #define ATH10K_NUM_CHANS 39
 
 /* Antenna noise floor */
@@ -139,7 +139,6 @@ struct ath10k_mem_chunk {
 };
 
 struct ath10k_wmi {
-       enum ath10k_fw_wmi_op_version op_version;
        enum ath10k_htc_ep_id eid;
        struct completion service_ready;
        struct completion unified_ready;
@@ -334,7 +333,7 @@ struct ath10k_sta {
 #endif
 };
 
-#define ATH10K_VDEV_SETUP_TIMEOUT_HZ (5*HZ)
+#define ATH10K_VDEV_SETUP_TIMEOUT_HZ (5 * HZ)
 
 enum ath10k_beacon_state {
        ATH10K_BEACON_SCHEDULED = 0,
@@ -627,6 +626,34 @@ enum ath10k_tx_pause_reason {
        ATH10K_TX_PAUSE_MAX,
 };
 
+struct ath10k_fw_file {
+       const struct firmware *firmware;
+
+       char fw_version[ETHTOOL_FWVERS_LEN];
+
+       DECLARE_BITMAP(fw_features, ATH10K_FW_FEATURE_COUNT);
+
+       enum ath10k_fw_wmi_op_version wmi_op_version;
+       enum ath10k_fw_htt_op_version htt_op_version;
+
+       const void *firmware_data;
+       size_t firmware_len;
+
+       const void *otp_data;
+       size_t otp_len;
+
+       const void *codeswap_data;
+       size_t codeswap_len;
+};
+
+struct ath10k_fw_components {
+       const struct firmware *board;
+       const void *board_data;
+       size_t board_len;
+
+       struct ath10k_fw_file fw_file;
+};
+
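fw_features in the new struct is a fixed-size bitmap (DECLARE_BITMAP expands to an unsigned long array), so features parsed from the FW IEs are set and queried with the regular bitops. A small illustrative sketch:

	struct ath10k_fw_file fw_file = {};

	__set_bit(ATH10K_FW_FEATURE_WMI_10X, fw_file.fw_features);

	if (test_bit(ATH10K_FW_FEATURE_WMI_10X, fw_file.fw_features))
		fw_file.wmi_op_version = ATH10K_FW_WMI_OP_VERSION_10_1;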
 struct ath10k {
        struct ath_common ath_common;
        struct ieee80211_hw *hw;
@@ -652,8 +679,6 @@ struct ath10k {
        /* protected by conf_mutex */
        bool ani_enabled;
 
-       DECLARE_BITMAP(fw_features, ATH10K_FW_FEATURE_COUNT);
-
        bool p2p;
 
        struct {
@@ -708,32 +733,24 @@ struct ath10k {
 
                struct ath10k_hw_params_fw {
                        const char *dir;
-                       const char *fw;
-                       const char *otp;
                        const char *board;
                        size_t board_size;
                        size_t board_ext_size;
                } fw;
        } hw_params;
 
-       const struct firmware *board;
-       const void *board_data;
-       size_t board_len;
-
-       const struct firmware *otp;
-       const void *otp_data;
-       size_t otp_len;
+       /* contains the firmware images used with ATH10K_FIRMWARE_MODE_NORMAL */
+       struct ath10k_fw_components normal_mode_fw;
 
-       const struct firmware *firmware;
-       const void *firmware_data;
-       size_t firmware_len;
+       /* READ-ONLY images of the running firmware, which can be either
+        * normal or UTF. Do not modify or release them!
+        */
+       const struct ath10k_fw_components *running_fw;
 
        const struct firmware *pre_cal_file;
        const struct firmware *cal_file;
 
        struct {
-               const void *firmware_codeswap_data;
-               size_t firmware_codeswap_len;
                struct ath10k_swap_code_seg_info *firmware_swap_code_seg_info;
        } swap;
 
@@ -879,13 +896,8 @@ struct ath10k {
 
        struct {
                /* protected by conf_mutex */
-               const struct firmware *utf;
-               char utf_version[32];
-               const void *utf_firmware_data;
-               size_t utf_firmware_len;
-               DECLARE_BITMAP(orig_fw_features, ATH10K_FW_FEATURE_COUNT);
-               enum ath10k_fw_wmi_op_version orig_wmi_op_version;
-               enum ath10k_fw_wmi_op_version op_version;
+               struct ath10k_fw_components utf_mode_fw;
+
                /* protected by data_lock */
                bool utf_monitor;
        } testmode;
@@ -921,8 +933,11 @@ void ath10k_core_destroy(struct ath10k *ar);
 void ath10k_core_get_fw_features_str(struct ath10k *ar,
                                     char *buf,
                                     size_t max_len);
+int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name,
+                                    struct ath10k_fw_file *fw_file);
 
-int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode);
+int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode,
+                     const struct ath10k_fw_components *fw_components);
 int ath10k_wait_for_suspend(struct ath10k *ar, u32 suspend_opt);
 void ath10k_core_stop(struct ath10k *ar);
 int ath10k_core_register(struct ath10k *ar, u32 chip_id);
index 76bbe17..e251155 100644 (file)
@@ -126,6 +126,7 @@ EXPORT_SYMBOL(ath10k_info);
 
 void ath10k_debug_print_hwfw_info(struct ath10k *ar)
 {
+       const struct firmware *firmware;
        char fw_features[128] = {};
        u32 crc = 0;
 
@@ -144,8 +145,9 @@ void ath10k_debug_print_hwfw_info(struct ath10k *ar)
                    config_enabled(CONFIG_ATH10K_DFS_CERTIFIED),
                    config_enabled(CONFIG_NL80211_TESTMODE));
 
-       if (ar->firmware)
-               crc = crc32_le(0, ar->firmware->data, ar->firmware->size);
+       firmware = ar->normal_mode_fw.fw_file.firmware;
+       if (firmware)
+               crc = crc32_le(0, firmware->data, firmware->size);
 
        ath10k_info(ar, "firmware ver %s api %d features %s crc32 %08x\n",
                    ar->hw->wiphy->fw_version,
@@ -167,7 +169,8 @@ void ath10k_debug_print_board_info(struct ath10k *ar)
        ath10k_info(ar, "board_file api %d bmi_id %s crc32 %08x",
                    ar->bd_api,
                    boardinfo,
-                   crc32_le(0, ar->board->data, ar->board->size));
+                   crc32_le(0, ar->normal_mode_fw.board->data,
+                            ar->normal_mode_fw.board->size));
 }
 
 void ath10k_debug_print_boot_info(struct ath10k *ar)
@@ -175,8 +178,8 @@ void ath10k_debug_print_boot_info(struct ath10k *ar)
        ath10k_info(ar, "htt-ver %d.%d wmi-op %d htt-op %d cal %s max-sta %d raw %d hwcrypto %d\n",
                    ar->htt.target_version_major,
                    ar->htt.target_version_minor,
-                   ar->wmi.op_version,
-                   ar->htt.op_version,
+                   ar->normal_mode_fw.fw_file.wmi_op_version,
+                   ar->normal_mode_fw.fw_file.htt_op_version,
                    ath10k_cal_mode_str(ar->cal_mode),
                    ar->max_num_stations,
                    test_bit(ATH10K_FLAG_RAW_MODE, &ar->dev_flags),
@@ -2122,7 +2125,7 @@ static ssize_t ath10k_write_btcoex(struct file *file,
        struct ath10k *ar = file->private_data;
        char buf[32];
        size_t buf_size;
-       int ret = 0;
+       int ret;
        bool val;
 
        buf_size = min(count, (sizeof(buf) - 1));
@@ -2142,8 +2145,10 @@ static ssize_t ath10k_write_btcoex(struct file *file,
                goto exit;
        }
 
-       if (!(test_bit(ATH10K_FLAG_BTCOEX, &ar->dev_flags) ^ val))
+       if (!(test_bit(ATH10K_FLAG_BTCOEX, &ar->dev_flags) ^ val)) {
+               ret = count;
                goto exit;
+       }
 
        if (val)
                set_bit(ATH10K_FLAG_BTCOEX, &ar->dev_flags);
@@ -2189,7 +2194,7 @@ static ssize_t ath10k_write_peer_stats(struct file *file,
        struct ath10k *ar = file->private_data;
        char buf[32];
        size_t buf_size;
-       int ret = 0;
+       int ret;
        bool val;
 
        buf_size = min(count, (sizeof(buf) - 1));
@@ -2209,8 +2214,10 @@ static ssize_t ath10k_write_peer_stats(struct file *file,
                goto exit;
        }
 
-       if (!(test_bit(ATH10K_FLAG_PEER_STATS, &ar->dev_flags) ^ val))
+       if (!(test_bit(ATH10K_FLAG_PEER_STATS, &ar->dev_flags) ^ val)) {
+               ret = count;
                goto exit;
+       }
 
        if (val)
                set_bit(ATH10K_FLAG_PEER_STATS, &ar->dev_flags);
@@ -2266,23 +2273,28 @@ static ssize_t ath10k_debug_fw_checksums_read(struct file *file,
 
        len += scnprintf(buf + len, buf_len - len,
                         "firmware-N.bin\t\t%08x\n",
-                        crc32_le(0, ar->firmware->data, ar->firmware->size));
+                        crc32_le(0, ar->normal_mode_fw.fw_file.firmware->data,
+                                 ar->normal_mode_fw.fw_file.firmware->size));
        len += scnprintf(buf + len, buf_len - len,
                         "athwlan\t\t\t%08x\n",
-                        crc32_le(0, ar->firmware_data, ar->firmware_len));
+                        crc32_le(0, ar->normal_mode_fw.fw_file.firmware_data,
+                                 ar->normal_mode_fw.fw_file.firmware_len));
        len += scnprintf(buf + len, buf_len - len,
                         "otp\t\t\t%08x\n",
-                        crc32_le(0, ar->otp_data, ar->otp_len));
+                        crc32_le(0, ar->normal_mode_fw.fw_file.otp_data,
+                                 ar->normal_mode_fw.fw_file.otp_len));
        len += scnprintf(buf + len, buf_len - len,
                         "codeswap\t\t%08x\n",
-                        crc32_le(0, ar->swap.firmware_codeswap_data,
-                                 ar->swap.firmware_codeswap_len));
+                        crc32_le(0, ar->normal_mode_fw.fw_file.codeswap_data,
+                                 ar->normal_mode_fw.fw_file.codeswap_len));
        len += scnprintf(buf + len, buf_len - len,
                         "board-N.bin\t\t%08x\n",
-                        crc32_le(0, ar->board->data, ar->board->size));
+                        crc32_le(0, ar->normal_mode_fw.board->data,
+                                 ar->normal_mode_fw.board->size));
        len += scnprintf(buf + len, buf_len - len,
                         "board\t\t\t%08x\n",
-                        crc32_le(0, ar->board_data, ar->board_len));
+                        crc32_le(0, ar->normal_mode_fw.board_data,
+                                 ar->normal_mode_fw.board_len));
 
        ret_cnt = simple_read_from_buffer(user_buf, count, ppos, buf, len);
 
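Every checksum line above is a plain crc32_le(seed, data, len) from lib/crc32 (linux/crc32.h) with seed 0, e.g.:

	u32 crc = crc32_le(0, ar->normal_mode_fw.fw_file.firmware_data,
			   ar->normal_mode_fw.fw_file.firmware_len);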
index 6206edd..75c89e3 100644 (file)
@@ -57,7 +57,7 @@ enum ath10k_dbg_aggr_mode {
 };
 
 /* FIXME: How to calculate the buffer size sanely? */
-#define ATH10K_FW_STATS_BUF_SIZE (1024*1024)
+#define ATH10K_FW_STATS_BUF_SIZE (1024 * 1024)
 
 extern unsigned int ath10k_debug_mask;
 
index e70aa38..cc82718 100644 (file)
@@ -297,10 +297,10 @@ struct ath10k_htc_svc_conn_resp {
 #define ATH10K_NUM_CONTROL_TX_BUFFERS 2
 #define ATH10K_HTC_MAX_LEN 4096
 #define ATH10K_HTC_MAX_CTRL_MSG_LEN 256
-#define ATH10K_HTC_WAIT_TIMEOUT_HZ (1*HZ)
+#define ATH10K_HTC_WAIT_TIMEOUT_HZ (1 * HZ)
 #define ATH10K_HTC_CONTROL_BUFFER_SIZE (ATH10K_HTC_MAX_CTRL_MSG_LEN + \
                                        sizeof(struct ath10k_htc_hdr))
-#define ATH10K_HTC_CONN_SVC_TIMEOUT_HZ (1*HZ)
+#define ATH10K_HTC_CONN_SVC_TIMEOUT_HZ (1 * HZ)
 
 struct ath10k_htc_ep {
        struct ath10k_htc *htc;
index 17a3008..130cd95 100644 (file)
@@ -183,7 +183,7 @@ int ath10k_htt_init(struct ath10k *ar)
                8 + /* llc snap */
                2; /* ip4 dscp or ip6 priority */
 
-       switch (ar->htt.op_version) {
+       switch (ar->running_fw->fw_file.htt_op_version) {
        case ATH10K_FW_HTT_OP_VERSION_10_4:
                ar->htt.t2h_msg_types = htt_10_4_t2h_msg_types;
                ar->htt.t2h_msg_types_max = HTT_10_4_T2H_NUM_MSGS;
@@ -208,7 +208,7 @@ int ath10k_htt_init(struct ath10k *ar)
        return 0;
 }
 
-#define HTT_TARGET_VERSION_TIMEOUT_HZ (3*HZ)
+#define HTT_TARGET_VERSION_TIMEOUT_HZ (3 * HZ)
 
 static int ath10k_htt_verify_version(struct ath10k_htt *htt)
 {
index 60bd9fe..911c535 100644 (file)
@@ -1475,10 +1475,10 @@ union htt_rx_pn_t {
        u32 pn24;
 
        /* TKIP or CCMP: 48-bit PN */
-       u_int64_t pn48;
+       u64 pn48;
 
        /* WAPI: 128-bit PN */
-       u_int64_t pn128[2];
+       u64 pn128[2];
 };
 
 struct htt_cmd {
@@ -1562,7 +1562,6 @@ struct ath10k_htt {
        u8 target_version_major;
        u8 target_version_minor;
        struct completion target_version_received;
-       enum ath10k_fw_htt_op_version op_version;
        u8 max_num_amsdu;
        u8 max_num_ampdu;
 
index 079fef5..cc979a4 100644 (file)
@@ -966,7 +966,7 @@ static int ath10k_htt_rx_nwifi_hdrlen(struct ath10k *ar,
        int len = ieee80211_hdrlen(hdr->frame_control);
 
        if (!test_bit(ATH10K_FW_FEATURE_NO_NWIFI_DECAP_4ADDR_PADDING,
-                     ar->fw_features))
+                     ar->running_fw->fw_file.fw_features))
                len = round_up(len, 4);
 
        return len;
index 9baa2e6..6269c61 100644 (file)
@@ -267,7 +267,8 @@ static void ath10k_htt_tx_free_txq(struct ath10k_htt *htt)
        struct ath10k *ar = htt->ar;
        size_t size;
 
-       if (!test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL, ar->fw_features))
+       if (!test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL,
+                     ar->running_fw->fw_file.fw_features))
                return;
 
        size = sizeof(*htt->tx_q_state.vaddr);
@@ -282,7 +283,8 @@ static int ath10k_htt_tx_alloc_txq(struct ath10k_htt *htt)
        size_t size;
        int ret;
 
-       if (!test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL, ar->fw_features))
+       if (!test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL,
+                     ar->running_fw->fw_file.fw_features))
                return 0;
 
        htt->tx_q_state.num_peers = HTT_TX_Q_STATE_NUM_PEERS;
@@ -513,7 +515,8 @@ int ath10k_htt_send_frag_desc_bank_cfg(struct ath10k_htt *htt)
        info |= SM(htt->tx_q_state.type,
                   HTT_FRAG_DESC_BANK_CFG_INFO_Q_STATE_DEPTH_TYPE);
 
-       if (test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL, ar->fw_features))
+       if (test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL,
+                    ar->running_fw->fw_file.fw_features))
                info |= HTT_FRAG_DESC_BANK_CFG_INFO_Q_STATE_VALID;
 
        cfg = &cmd->frag_desc_bank_cfg;
index c0179bc..aedd898 100644 (file)
@@ -35,8 +35,6 @@
 #define QCA988X_HW_2_0_VERSION         0x4100016c
 #define QCA988X_HW_2_0_CHIP_ID_REV     0x2
 #define QCA988X_HW_2_0_FW_DIR          ATH10K_FW_DIR "/QCA988X/hw2.0"
-#define QCA988X_HW_2_0_FW_FILE         "firmware.bin"
-#define QCA988X_HW_2_0_OTP_FILE                "otp.bin"
 #define QCA988X_HW_2_0_BOARD_DATA_FILE "board.bin"
 #define QCA988X_HW_2_0_PATCH_LOAD_ADDR 0x1234
 
@@ -76,14 +74,10 @@ enum qca9377_chip_id_rev {
 };
 
 #define QCA6174_HW_2_1_FW_DIR          "ath10k/QCA6174/hw2.1"
-#define QCA6174_HW_2_1_FW_FILE         "firmware.bin"
-#define QCA6174_HW_2_1_OTP_FILE                "otp.bin"
 #define QCA6174_HW_2_1_BOARD_DATA_FILE "board.bin"
 #define QCA6174_HW_2_1_PATCH_LOAD_ADDR 0x1234
 
 #define QCA6174_HW_3_0_FW_DIR          "ath10k/QCA6174/hw3.0"
-#define QCA6174_HW_3_0_FW_FILE         "firmware.bin"
-#define QCA6174_HW_3_0_OTP_FILE                "otp.bin"
 #define QCA6174_HW_3_0_BOARD_DATA_FILE "board.bin"
 #define QCA6174_HW_3_0_PATCH_LOAD_ADDR 0x1234
 
@@ -94,23 +88,17 @@ enum qca9377_chip_id_rev {
 #define QCA99X0_HW_2_0_DEV_VERSION     0x01000000
 #define QCA99X0_HW_2_0_CHIP_ID_REV     0x1
 #define QCA99X0_HW_2_0_FW_DIR          ATH10K_FW_DIR "/QCA99X0/hw2.0"
-#define QCA99X0_HW_2_0_FW_FILE         "firmware.bin"
-#define QCA99X0_HW_2_0_OTP_FILE        "otp.bin"
 #define QCA99X0_HW_2_0_BOARD_DATA_FILE "board.bin"
 #define QCA99X0_HW_2_0_PATCH_LOAD_ADDR 0x1234
 
 /* QCA9377 1.0 definitions */
 #define QCA9377_HW_1_0_FW_DIR          ATH10K_FW_DIR "/QCA9377/hw1.0"
-#define QCA9377_HW_1_0_FW_FILE         "firmware.bin"
-#define QCA9377_HW_1_0_OTP_FILE        "otp.bin"
 #define QCA9377_HW_1_0_BOARD_DATA_FILE "board.bin"
 #define QCA9377_HW_1_0_PATCH_LOAD_ADDR 0x1234
 
 /* QCA4019 1.0 definitions */
 #define QCA4019_HW_1_0_DEV_VERSION     0x01000000
 #define QCA4019_HW_1_0_FW_DIR          ATH10K_FW_DIR "/QCA4019/hw1.0"
-#define QCA4019_HW_1_0_FW_FILE         "firmware.bin"
-#define QCA4019_HW_1_0_OTP_FILE        "otp.bin"
 #define QCA4019_HW_1_0_BOARD_DATA_FILE "board.bin"
 #define QCA4019_HW_1_0_PATCH_LOAD_ADDR  0x1234
 
index 6ace10b..0e24f9e 100644 (file)
@@ -157,6 +157,26 @@ ath10k_mac_max_vht_nss(const u16 vht_mcs_mask[NL80211_VHT_NSS_MAX])
        return 1;
 }
 
+int ath10k_mac_ext_resource_config(struct ath10k *ar, u32 val)
+{
+       enum wmi_host_platform_type platform_type;
+       int ret;
+
+       if (test_bit(WMI_SERVICE_TX_MODE_DYNAMIC, ar->wmi.svc_map))
+               platform_type = WMI_HOST_PLATFORM_LOW_PERF;
+       else
+               platform_type = WMI_HOST_PLATFORM_HIGH_PERF;
+
+       ret = ath10k_wmi_ext_resource_config(ar, platform_type, val);
+
+       if (ret && ret != -EOPNOTSUPP) {
+               ath10k_warn(ar, "failed to configure ext resource: %d\n", ret);
+               return ret;
+       }
+
+       return 0;
+}
+
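The helper deliberately swallows -EOPNOTSUPP so firmwares that do not implement the ext resource config command keep booting; any other error is propagated and fatal to the caller. Typical call site, as in ath10k_core_start() above:

	status = ath10k_mac_ext_resource_config(ar, WMI_10_4_PEER_STATS);
	if (status)
		goto err;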
 /**********/
 /* Crypto */
 /**********/
@@ -449,10 +469,10 @@ static int ath10k_mac_vif_update_wep_key(struct ath10k_vif *arvif,
        lockdep_assert_held(&ar->conf_mutex);
 
        list_for_each_entry(peer, &ar->peers, list) {
-               if (!memcmp(peer->addr, arvif->vif->addr, ETH_ALEN))
+               if (ether_addr_equal(peer->addr, arvif->vif->addr))
                        continue;
 
-               if (!memcmp(peer->addr, arvif->bssid, ETH_ALEN))
+               if (ether_addr_equal(peer->addr, arvif->bssid))
                        continue;
 
                if (peer->keys[key->keyidx] == key)
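ether_addr_equal() from etherdevice.h is the idiomatic 6-byte MAC comparison; the conversion above is behavior-preserving:

	/* ether_addr_equal(a, b) is true iff the addresses match, i.e. it
	 * replaces !memcmp(a, b, ETH_ALEN) and may compile down to two
	 * half-word compares instead of a byte-wise memcmp.
	 */
	if (ether_addr_equal(peer->addr, arvif->bssid))
		continue;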
@@ -1752,7 +1772,7 @@ static int ath10k_mac_vif_setup_ps(struct ath10k_vif *arvif)
 
        if (enable_ps && ath10k_mac_num_vifs_started(ar) > 1 &&
            !test_bit(ATH10K_FW_FEATURE_MULTI_VIF_PS_SUPPORT,
-                     ar->fw_features)) {
+                     ar->running_fw->fw_file.fw_features)) {
                ath10k_warn(ar, "refusing to enable ps on vdev %i: not supported by fw\n",
                            arvif->vdev_id);
                enable_ps = false;
@@ -2040,7 +2060,8 @@ static void ath10k_peer_assoc_h_crypto(struct ath10k *ar,
        }
 
        if (sta->mfp &&
-           test_bit(ATH10K_FW_FEATURE_MFP_SUPPORT, ar->fw_features)) {
+           test_bit(ATH10K_FW_FEATURE_MFP_SUPPORT,
+                    ar->running_fw->fw_file.fw_features)) {
                arg->peer_flags |= ar->wmi.peer_flags->pmf;
        }
 }
@@ -3187,7 +3208,8 @@ ath10k_mac_tx_h_get_txmode(struct ath10k *ar,
         */
        if (ar->htt.target_version_major < 3 &&
            (ieee80211_is_nullfunc(fc) || ieee80211_is_qos_nullfunc(fc)) &&
-           !test_bit(ATH10K_FW_FEATURE_HAS_WMI_MGMT_TX, ar->fw_features))
+           !test_bit(ATH10K_FW_FEATURE_HAS_WMI_MGMT_TX,
+                     ar->running_fw->fw_file.fw_features))
                return ATH10K_HW_TXRX_MGMT;
 
        /* Workaround:
@@ -3337,7 +3359,7 @@ bool ath10k_mac_tx_frm_has_freq(struct ath10k *ar)
         */
        return (ar->htt.target_version_major >= 3 &&
                ar->htt.target_version_minor >= 4 &&
-               ar->htt.op_version == ATH10K_FW_HTT_OP_VERSION_TLV);
+               ar->running_fw->fw_file.htt_op_version == ATH10K_FW_HTT_OP_VERSION_TLV);
 }
 
 static int ath10k_mac_tx_wmi_mgmt(struct ath10k *ar, struct sk_buff *skb)
@@ -3374,7 +3396,7 @@ ath10k_mac_tx_h_get_txpath(struct ath10k *ar,
                return ATH10K_MAC_TX_HTT;
        case ATH10K_HW_TXRX_MGMT:
                if (test_bit(ATH10K_FW_FEATURE_HAS_WMI_MGMT_TX,
-                            ar->fw_features))
+                            ar->running_fw->fw_file.fw_features))
                        return ATH10K_MAC_TX_WMI_MGMT;
                else if (ar->htt.target_version_major >= 3)
                        return ATH10K_MAC_TX_HTT;
@@ -3846,7 +3868,7 @@ static int ath10k_scan_stop(struct ath10k *ar)
                goto out;
        }
 
-       ret = wait_for_completion_timeout(&ar->scan.completed, 3*HZ);
+       ret = wait_for_completion_timeout(&ar->scan.completed, 3 * HZ);
        if (ret == 0) {
                ath10k_warn(ar, "failed to receive scan abortion completion: timed out\n");
                ret = -ETIMEDOUT;
@@ -3926,7 +3948,7 @@ static int ath10k_start_scan(struct ath10k *ar,
        if (ret)
                return ret;
 
-       ret = wait_for_completion_timeout(&ar->scan.started, 1*HZ);
+       ret = wait_for_completion_timeout(&ar->scan.started, 1 * HZ);
        if (ret == 0) {
                ret = ath10k_scan_stop(ar);
                if (ret)
@@ -4356,7 +4378,8 @@ static int ath10k_start(struct ieee80211_hw *hw)
                goto err_off;
        }
 
-       ret = ath10k_core_start(ar, ATH10K_FIRMWARE_MODE_NORMAL);
+       ret = ath10k_core_start(ar, ATH10K_FIRMWARE_MODE_NORMAL,
+                               &ar->normal_mode_fw);
        if (ret) {
                ath10k_err(ar, "Could not init core: %d\n", ret);
                goto err_power_down;
@@ -4414,7 +4437,7 @@ static int ath10k_start(struct ieee80211_hw *hw)
        }
 
        if (test_bit(ATH10K_FW_FEATURE_SUPPORTS_ADAPTIVE_CCA,
-                    ar->fw_features)) {
+                    ar->running_fw->fw_file.fw_features)) {
                ret = ath10k_wmi_pdev_enable_adaptive_cca(ar, 1,
                                                          WMI_CCA_DETECT_LEVEL_AUTO,
                                                          WMI_CCA_DETECT_MARGIN_AUTO);
@@ -6168,7 +6191,7 @@ exit:
        return ret;
 }
 
-#define ATH10K_ROC_TIMEOUT_HZ (2*HZ)
+#define ATH10K_ROC_TIMEOUT_HZ (2 * HZ)
 
 static int ath10k_remain_on_channel(struct ieee80211_hw *hw,
                                    struct ieee80211_vif *vif,
@@ -6232,7 +6255,7 @@ static int ath10k_remain_on_channel(struct ieee80211_hw *hw,
                goto exit;
        }
 
-       ret = wait_for_completion_timeout(&ar->scan.on_channel, 3*HZ);
+       ret = wait_for_completion_timeout(&ar->scan.on_channel, 3 * HZ);
        if (ret == 0) {
                ath10k_warn(ar, "failed to switch to channel for roc scan\n");
 
@@ -6796,6 +6819,32 @@ static u64 ath10k_get_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
        return 0;
 }
 
+static void ath10k_set_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+                          u64 tsf)
+{
+       struct ath10k *ar = hw->priv;
+       struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
+       u32 tsf_offset, vdev_param = ar->wmi.vdev_param->set_tsf;
+       int ret;
+
+       /* Workaround:
+        *
+        * The tsf argument is the full TSF value, but the firmware accepts
+        * only an offset relative to the current TSF.
+        *
+        * get_tsf() would normally provide the current value, but since
+        * ath10k_get_tsf() is not implemented properly it always returns 0.
+        * Luckily, as of now, every caller of set_tsf also derives the full
+        * TSF from get_tsf() (e.g. get_tsf() + tsf_delta), so the offset
+        * that reaches the firmware is still arithmetically correct.
+        */
+       tsf_offset = tsf - ath10k_get_tsf(hw, vif);
+       ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id,
+                                       vdev_param, tsf_offset);
+       if (ret && ret != -EOPNOTSUPP)
+               ath10k_warn(ar, "failed to set tsf offset: %d\n", ret);
+}
+
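Because ath10k_get_tsf() currently returns 0, the offset handed to the firmware equals whatever delta the caller added on top of get_tsf(). Illustrative caller-side pattern (tsf_delta is a hypothetical variable):

	u64 tsf = ath10k_get_tsf(hw, vif) + tsf_delta; /* get_tsf() == 0 today */

	ath10k_set_tsf(hw, vif, tsf); /* offset sent to fw == tsf_delta */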
 static int ath10k_ampdu_action(struct ieee80211_hw *hw,
                               struct ieee80211_vif *vif,
                               struct ieee80211_ampdu_params *params)
@@ -6867,7 +6916,13 @@ ath10k_mac_update_rx_channel(struct ath10k *ar,
                        def = &vifs[0].new_ctx->def;
 
                ar->rx_channel = def->chan;
-       } else if (ctx && ath10k_mac_num_chanctxs(ar) == 0) {
+       } else if ((ctx && ath10k_mac_num_chanctxs(ar) == 0) ||
+                  (ctx && (ar->state == ATH10K_STATE_RESTARTED))) {
+               /* During a driver restart following a firmware assert,
+                * mac80211 still holds a valid channel context for this
+                * radio, so the channel context iteration returns
+                * num_chanctx > 0. Fix up rx_channel while the restart is
+                * in progress.
+                */
                ar->rx_channel = ctx->def.chan;
        } else {
                ar->rx_channel = NULL;
@@ -7252,6 +7307,7 @@ static const struct ieee80211_ops ath10k_ops = {
        .set_bitrate_mask               = ath10k_mac_op_set_bitrate_mask,
        .sta_rc_update                  = ath10k_sta_rc_update,
        .get_tsf                        = ath10k_get_tsf,
+       .set_tsf                        = ath10k_set_tsf,
        .ampdu_action                   = ath10k_ampdu_action,
        .get_et_sset_count              = ath10k_debug_get_et_sset_count,
        .get_et_stats                   = ath10k_debug_get_et_stats,
@@ -7640,7 +7696,7 @@ int ath10k_mac_register(struct ath10k *ar)
        ar->hw->wiphy->available_antennas_rx = ar->cfg_rx_chainmask;
        ar->hw->wiphy->available_antennas_tx = ar->cfg_tx_chainmask;
 
-       if (!test_bit(ATH10K_FW_FEATURE_NO_P2P, ar->fw_features))
+       if (!test_bit(ATH10K_FW_FEATURE_NO_P2P,
+                     ar->normal_mode_fw.fw_file.fw_features))
                ar->hw->wiphy->interface_modes |=
                        BIT(NL80211_IFTYPE_P2P_DEVICE) |
                        BIT(NL80211_IFTYPE_P2P_CLIENT) |
@@ -7730,7 +7786,7 @@ int ath10k_mac_register(struct ath10k *ar)
         */
        ar->hw->offchannel_tx_hw_queue = IEEE80211_MAX_QUEUES - 1;
 
-       switch (ar->wmi.op_version) {
+       switch (ar->running_fw->fw_file.wmi_op_version) {
        case ATH10K_FW_WMI_OP_VERSION_MAIN:
                ar->hw->wiphy->iface_combinations = ath10k_if_comb;
                ar->hw->wiphy->n_iface_combinations =
index 2c3327b..1bd29ec 100644 (file)
@@ -81,6 +81,7 @@ int ath10k_mac_tx_push_txq(struct ieee80211_hw *hw,
 struct ieee80211_txq *ath10k_mac_txq_lookup(struct ath10k *ar,
                                            u16 peer_id,
                                            u8 tid);
+int ath10k_mac_ext_resource_config(struct ath10k *ar, u32 val);
 
 static inline struct ath10k_vif *ath10k_vif_to_arvif(struct ieee80211_vif *vif)
 {
index 0b305ef..8133d7b 100644 (file)
 #include "ce.h"
 #include "pci.h"
 
-enum ath10k_pci_irq_mode {
-       ATH10K_PCI_IRQ_AUTO = 0,
-       ATH10K_PCI_IRQ_LEGACY = 1,
-       ATH10K_PCI_IRQ_MSI = 2,
-};
-
 enum ath10k_pci_reset_mode {
        ATH10K_PCI_RESET_AUTO = 0,
        ATH10K_PCI_RESET_WARM_ONLY = 1,
@@ -745,10 +739,7 @@ static inline const char *ath10k_pci_get_irq_method(struct ath10k *ar)
 {
        struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
 
-       if (ar_pci->num_msi_intrs > 1)
-               return "msi-x";
-
-       if (ar_pci->num_msi_intrs == 1)
+       if (ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_MSI)
                return "msi";
 
        return "legacy";
@@ -1502,13 +1493,8 @@ void ath10k_pci_hif_send_complete_check(struct ath10k *ar, u8 pipe,
 void ath10k_pci_kill_tasklet(struct ath10k *ar)
 {
        struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-       int i;
 
        tasklet_kill(&ar_pci->intr_tq);
-       tasklet_kill(&ar_pci->msi_fw_err);
-
-       for (i = 0; i < CE_COUNT; i++)
-               tasklet_kill(&ar_pci->pipe_info[i].intr);
 
        del_timer_sync(&ar_pci->rx_post_retry);
 }
@@ -1624,10 +1610,8 @@ static void ath10k_pci_irq_disable(struct ath10k *ar)
 static void ath10k_pci_irq_sync(struct ath10k *ar)
 {
        struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-       int i;
 
-       for (i = 0; i < max(1, ar_pci->num_msi_intrs); i++)
-               synchronize_irq(ar_pci->pdev->irq + i);
+       synchronize_irq(ar_pci->pdev->irq);
 }
 
 static void ath10k_pci_irq_enable(struct ath10k *ar)
@@ -2596,65 +2580,6 @@ static const struct ath10k_hif_ops ath10k_pci_hif_ops = {
 #endif
 };
 
-static void ath10k_pci_ce_tasklet(unsigned long ptr)
-{
-       struct ath10k_pci_pipe *pipe = (struct ath10k_pci_pipe *)ptr;
-       struct ath10k_pci *ar_pci = pipe->ar_pci;
-
-       ath10k_ce_per_engine_service(ar_pci->ar, pipe->pipe_num);
-}
-
-static void ath10k_msi_err_tasklet(unsigned long data)
-{
-       struct ath10k *ar = (struct ath10k *)data;
-
-       if (!ath10k_pci_has_fw_crashed(ar)) {
-               ath10k_warn(ar, "received unsolicited fw crash interrupt\n");
-               return;
-       }
-
-       ath10k_pci_irq_disable(ar);
-       ath10k_pci_fw_crashed_clear(ar);
-       ath10k_pci_fw_crashed_dump(ar);
-}
-
-/*
- * Handler for a per-engine interrupt on a PARTICULAR CE.
- * This is used in cases where each CE has a private MSI interrupt.
- */
-static irqreturn_t ath10k_pci_per_engine_handler(int irq, void *arg)
-{
-       struct ath10k *ar = arg;
-       struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-       int ce_id = irq - ar_pci->pdev->irq - MSI_ASSIGN_CE_INITIAL;
-
-       if (ce_id < 0 || ce_id >= ARRAY_SIZE(ar_pci->pipe_info)) {
-               ath10k_warn(ar, "unexpected/invalid irq %d ce_id %d\n", irq,
-                           ce_id);
-               return IRQ_HANDLED;
-       }
-
-       /*
-        * NOTE: We are able to derive ce_id from irq because we
-        * use a one-to-one mapping for CE's 0..5.
-        * CE's 6 & 7 do not use interrupts at all.
-        *
-        * This mapping must be kept in sync with the mapping
-        * used by firmware.
-        */
-       tasklet_schedule(&ar_pci->pipe_info[ce_id].intr);
-       return IRQ_HANDLED;
-}
-
-static irqreturn_t ath10k_pci_msi_fw_handler(int irq, void *arg)
-{
-       struct ath10k *ar = arg;
-       struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-
-       tasklet_schedule(&ar_pci->msi_fw_err);
-       return IRQ_HANDLED;
-}
-
 /*
  * Top-level interrupt handler for all PCI interrupts from a Target.
  * When a block of MSI interrupts is allocated, this top-level handler
@@ -2672,7 +2597,7 @@ static irqreturn_t ath10k_pci_interrupt_handler(int irq, void *arg)
                return IRQ_NONE;
        }
 
-       if (ar_pci->num_msi_intrs == 0) {
+       if (ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_LEGACY) {
                if (!ath10k_pci_irq_pending(ar))
                        return IRQ_NONE;
 
@@ -2699,43 +2624,10 @@ static void ath10k_pci_tasklet(unsigned long data)
        ath10k_ce_per_engine_service_any(ar);
 
        /* Re-enable legacy irq that was disabled in the irq handler */
-       if (ar_pci->num_msi_intrs == 0)
+       if (ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_LEGACY)
                ath10k_pci_enable_legacy_irq(ar);
 }
 
-static int ath10k_pci_request_irq_msix(struct ath10k *ar)
-{
-       struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-       int ret, i;
-
-       ret = request_irq(ar_pci->pdev->irq + MSI_ASSIGN_FW,
-                         ath10k_pci_msi_fw_handler,
-                         IRQF_SHARED, "ath10k_pci", ar);
-       if (ret) {
-               ath10k_warn(ar, "failed to request MSI-X fw irq %d: %d\n",
-                           ar_pci->pdev->irq + MSI_ASSIGN_FW, ret);
-               return ret;
-       }
-
-       for (i = MSI_ASSIGN_CE_INITIAL; i <= MSI_ASSIGN_CE_MAX; i++) {
-               ret = request_irq(ar_pci->pdev->irq + i,
-                                 ath10k_pci_per_engine_handler,
-                                 IRQF_SHARED, "ath10k_pci", ar);
-               if (ret) {
-                       ath10k_warn(ar, "failed to request MSI-X ce irq %d: %d\n",
-                                   ar_pci->pdev->irq + i, ret);
-
-                       for (i--; i >= MSI_ASSIGN_CE_INITIAL; i--)
-                               free_irq(ar_pci->pdev->irq + i, ar);
-
-                       free_irq(ar_pci->pdev->irq + MSI_ASSIGN_FW, ar);
-                       return ret;
-               }
-       }
-
-       return 0;
-}
-
 static int ath10k_pci_request_irq_msi(struct ath10k *ar)
 {
        struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
@@ -2774,41 +2666,28 @@ static int ath10k_pci_request_irq(struct ath10k *ar)
 {
        struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
 
-       switch (ar_pci->num_msi_intrs) {
-       case 0:
+       switch (ar_pci->oper_irq_mode) {
+       case ATH10K_PCI_IRQ_LEGACY:
                return ath10k_pci_request_irq_legacy(ar);
-       case 1:
+       case ATH10K_PCI_IRQ_MSI:
                return ath10k_pci_request_irq_msi(ar);
        default:
-               return ath10k_pci_request_irq_msix(ar);
+               return -EINVAL;
        }
 }
 
 static void ath10k_pci_free_irq(struct ath10k *ar)
 {
        struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-       int i;
 
-       /* There's at least one interrupt irregardless whether its legacy INTR
-        * or MSI or MSI-X */
-       for (i = 0; i < max(1, ar_pci->num_msi_intrs); i++)
-               free_irq(ar_pci->pdev->irq + i, ar);
+       free_irq(ar_pci->pdev->irq, ar);
 }
 
 void ath10k_pci_init_irq_tasklets(struct ath10k *ar)
 {
        struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-       int i;
 
        tasklet_init(&ar_pci->intr_tq, ath10k_pci_tasklet, (unsigned long)ar);
-       tasklet_init(&ar_pci->msi_fw_err, ath10k_msi_err_tasklet,
-                    (unsigned long)ar);
-
-       for (i = 0; i < CE_COUNT; i++) {
-               ar_pci->pipe_info[i].ar_pci = ar_pci;
-               tasklet_init(&ar_pci->pipe_info[i].intr, ath10k_pci_ce_tasklet,
-                            (unsigned long)&ar_pci->pipe_info[i]);
-       }
 }
 
 static int ath10k_pci_init_irq(struct ath10k *ar)
@@ -2822,20 +2701,9 @@ static int ath10k_pci_init_irq(struct ath10k *ar)
                ath10k_info(ar, "limiting irq mode to: %d\n",
                            ath10k_pci_irq_mode);
 
-       /* Try MSI-X */
-       if (ath10k_pci_irq_mode == ATH10K_PCI_IRQ_AUTO) {
-               ar_pci->num_msi_intrs = MSI_ASSIGN_CE_MAX + 1;
-               ret = pci_enable_msi_range(ar_pci->pdev, ar_pci->num_msi_intrs,
-                                          ar_pci->num_msi_intrs);
-               if (ret > 0)
-                       return 0;
-
-               /* fall-through */
-       }
-
        /* Try MSI */
        if (ath10k_pci_irq_mode != ATH10K_PCI_IRQ_LEGACY) {
-               ar_pci->num_msi_intrs = 1;
+               ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_MSI;
                ret = pci_enable_msi(ar_pci->pdev);
                if (ret == 0)
                        return 0;
@@ -2851,7 +2719,7 @@ static int ath10k_pci_init_irq(struct ath10k *ar)
         * This write might get lost if target has NOT written BAR.
         * For now, fix the race by repeating the write in below
         * synchronization checking. */
-       ar_pci->num_msi_intrs = 0;
+       ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_LEGACY;
 
        ath10k_pci_write32(ar, SOC_CORE_BASE_ADDRESS + PCIE_INTR_ENABLE_ADDRESS,
                           PCIE_INTR_FIRMWARE_MASK | PCIE_INTR_CE_MASK_ALL);
@@ -2869,8 +2737,8 @@ static int ath10k_pci_deinit_irq(struct ath10k *ar)
 {
        struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
 
-       switch (ar_pci->num_msi_intrs) {
-       case 0:
+       switch (ar_pci->oper_irq_mode) {
+       case ATH10K_PCI_IRQ_LEGACY:
                ath10k_pci_deinit_irq_legacy(ar);
                break;
        default:
@@ -2908,7 +2776,7 @@ int ath10k_pci_wait_for_target_init(struct ath10k *ar)
                if (val & FW_IND_INITIALIZED)
                        break;
 
-               if (ar_pci->num_msi_intrs == 0)
+               if (ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_LEGACY)
                        /* Fix potential race by repeating CORE_BASE writes */
                        ath10k_pci_enable_legacy_irq(ar);
 
@@ -3186,8 +3054,8 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
                goto err_sleep;
        }
 
-       ath10k_info(ar, "pci irq %s interrupts %d irq_mode %d reset_mode %d\n",
-                   ath10k_pci_get_irq_method(ar), ar_pci->num_msi_intrs,
+       ath10k_info(ar, "pci irq %s oper_irq_mode %d irq_mode %d reset_mode %d\n",
+                   ath10k_pci_get_irq_method(ar), ar_pci->oper_irq_mode,
                    ath10k_pci_irq_mode, ath10k_pci_reset_mode);
 
        ret = ath10k_pci_request_irq(ar);
@@ -3305,7 +3173,6 @@ MODULE_DESCRIPTION("Driver support for Atheros QCA988X PCIe devices");
 MODULE_LICENSE("Dual BSD/GPL");
 
 /* QCA988x 2.0 firmware files */
-MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" QCA988X_HW_2_0_FW_FILE);
 MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" ATH10K_FW_API2_FILE);
 MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" ATH10K_FW_API3_FILE);
 MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" ATH10K_FW_API4_FILE);
index 249c73a..959dc32 100644 (file)
@@ -148,9 +148,6 @@ struct ath10k_pci_pipe {
 
        /* protects compl_free and num_send_allowed */
        spinlock_t pipe_lock;
-
-       struct ath10k_pci *ar_pci;
-       struct tasklet_struct intr;
 };
 
 struct ath10k_pci_supp_chip {
@@ -164,6 +161,12 @@ struct ath10k_bus_ops {
        int (*get_num_banks)(struct ath10k *ar);
 };
 
+enum ath10k_pci_irq_mode {
+       ATH10K_PCI_IRQ_AUTO = 0,
+       ATH10K_PCI_IRQ_LEGACY = 1,
+       ATH10K_PCI_IRQ_MSI = 2,
+};
+
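After this change the probe-time policy is simply: try MSI unless the module parameter forces legacy, then fall back to legacy INTx; MSI-X is gone entirely. A condensed sketch of ath10k_pci_init_irq() (illustrative, error handling elided):

	if (ath10k_pci_irq_mode != ATH10K_PCI_IRQ_LEGACY &&
	    pci_enable_msi(ar_pci->pdev) == 0) {
		ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_MSI;
		return 0;
	}

	ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_LEGACY;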
 struct ath10k_pci {
        struct pci_dev *pdev;
        struct device *dev;
@@ -171,14 +174,10 @@ struct ath10k_pci {
        void __iomem *mem;
        size_t mem_len;
 
-       /*
-        * Number of MSI interrupts granted, 0 --> using legacy PCI line
-        * interrupts.
-        */
-       int num_msi_intrs;
+       /* Operating interrupt mode */
+       enum ath10k_pci_irq_mode oper_irq_mode;
 
        struct tasklet_struct intr_tq;
-       struct tasklet_struct msi_fw_err;
 
        struct ath10k_pci_pipe pipe_info[CE_COUNT_MAX];
 
index 3ca3fae..0c5f586 100644 (file)
@@ -134,27 +134,17 @@ ath10k_swap_code_seg_alloc(struct ath10k *ar, size_t swap_bin_len)
        return seg_info;
 }
 
-int ath10k_swap_code_seg_configure(struct ath10k *ar,
-                                  enum ath10k_swap_code_seg_bin_type type)
+int ath10k_swap_code_seg_configure(struct ath10k *ar)
 {
        int ret;
        struct ath10k_swap_code_seg_info *seg_info = NULL;
 
-       switch (type) {
-       case ATH10K_SWAP_CODE_SEG_BIN_TYPE_FW:
-               if (!ar->swap.firmware_swap_code_seg_info)
-                       return 0;
-
-               ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot found firmware code swap binary\n");
-               seg_info = ar->swap.firmware_swap_code_seg_info;
-               break;
-       default:
-       case ATH10K_SWAP_CODE_SEG_BIN_TYPE_OTP:
-       case ATH10K_SWAP_CODE_SEG_BIN_TYPE_UTF:
-               ath10k_warn(ar, "ignoring unknown code swap binary type %d\n",
-                           type);
+       if (!ar->swap.firmware_swap_code_seg_info)
                return 0;
-       }
+
+       ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot found firmware code swap binary\n");
+
+       seg_info = ar->swap.firmware_swap_code_seg_info;
 
        ret = ath10k_bmi_write_memory(ar, seg_info->target_addr,
                                      &seg_info->seg_hw_info,
@@ -171,8 +161,13 @@ int ath10k_swap_code_seg_configure(struct ath10k *ar,
 void ath10k_swap_code_seg_release(struct ath10k *ar)
 {
        ath10k_swap_code_seg_free(ar, ar->swap.firmware_swap_code_seg_info);
-       ar->swap.firmware_codeswap_data = NULL;
-       ar->swap.firmware_codeswap_len = 0;
+
+       /* FIXME: these two assignments look to be in the wrong place! Shouldn't
+        * they be in ath10k_core_free_firmware_files() like the rest?
+        */
+       ar->normal_mode_fw.fw_file.codeswap_data = NULL;
+       ar->normal_mode_fw.fw_file.codeswap_len = 0;
+
        ar->swap.firmware_swap_code_seg_info = NULL;
 }
 
@@ -180,20 +175,23 @@ int ath10k_swap_code_seg_init(struct ath10k *ar)
 {
        int ret;
        struct ath10k_swap_code_seg_info *seg_info;
+       const void *codeswap_data;
+       size_t codeswap_len;
+
+       codeswap_data = ar->normal_mode_fw.fw_file.codeswap_data;
+       codeswap_len = ar->normal_mode_fw.fw_file.codeswap_len;
 
-       if (!ar->swap.firmware_codeswap_len || !ar->swap.firmware_codeswap_data)
+       if (!codeswap_len || !codeswap_data)
                return 0;
 
-       seg_info = ath10k_swap_code_seg_alloc(ar,
-                                             ar->swap.firmware_codeswap_len);
+       seg_info = ath10k_swap_code_seg_alloc(ar, codeswap_len);
        if (!seg_info) {
                ath10k_err(ar, "failed to allocate fw code swap segment\n");
                return -ENOMEM;
        }
 
        ret = ath10k_swap_code_seg_fill(ar, seg_info,
-                                       ar->swap.firmware_codeswap_data,
-                                       ar->swap.firmware_codeswap_len);
+                                       codeswap_data, codeswap_len);
 
        if (ret) {
                ath10k_warn(ar, "failed to initialize fw code swap segment: %d\n",
index 5c89952..36991c7 100644 (file)
@@ -39,12 +39,6 @@ union ath10k_swap_code_seg_item {
        struct ath10k_swap_code_seg_tail tail;
 } __packed;
 
-enum ath10k_swap_code_seg_bin_type {
-        ATH10K_SWAP_CODE_SEG_BIN_TYPE_OTP,
-        ATH10K_SWAP_CODE_SEG_BIN_TYPE_FW,
-        ATH10K_SWAP_CODE_SEG_BIN_TYPE_UTF,
-};
-
 struct ath10k_swap_code_seg_hw_info {
        /* Swap binary image size */
        __le32 swap_size;
@@ -64,8 +58,7 @@ struct ath10k_swap_code_seg_info {
        dma_addr_t paddr[ATH10K_SWAP_CODE_SEG_NUM_SUPPORTED];
 };
 
-int ath10k_swap_code_seg_configure(struct ath10k *ar,
-                                  enum ath10k_swap_code_seg_bin_type type);
+int ath10k_swap_code_seg_configure(struct ath10k *ar);
 void ath10k_swap_code_seg_release(struct ath10k *ar);
 int ath10k_swap_code_seg_init(struct ath10k *ar);
 
index 361f143..8e24099 100644 (file)
@@ -438,7 +438,7 @@ Fw Mode/SubMode Mask
        ((HOST_INTEREST->hi_pwr_save_flags & HI_PWR_SAVE_LPL_ENABLED))
 #define HI_DEV_LPL_TYPE_GET(_devix) \
        (HOST_INTEREST->hi_pwr_save_flags & ((HI_PWR_SAVE_LPL_DEV_MASK) << \
-        (HI_PWR_SAVE_LPL_DEV0_LSB + (_devix)*2)))
+        (HI_PWR_SAVE_LPL_DEV0_LSB + (_devix) * 2)))
 
 #define HOST_INTEREST_SMPS_IS_ALLOWED() \
        ((HOST_INTEREST->hi_smps_options & HI_SMPS_ALLOW_MASK))
index 1d5a2fd..120f423 100644 (file)
@@ -139,127 +139,8 @@ static int ath10k_tm_cmd_get_version(struct ath10k *ar, struct nlattr *tb[])
        return cfg80211_testmode_reply(skb);
 }
 
-static int ath10k_tm_fetch_utf_firmware_api_2(struct ath10k *ar)
-{
-       size_t len, magic_len, ie_len;
-       struct ath10k_fw_ie *hdr;
-       char filename[100];
-       __le32 *version;
-       const u8 *data;
-       int ie_id, ret;
-
-       snprintf(filename, sizeof(filename), "%s/%s",
-                ar->hw_params.fw.dir, ATH10K_FW_UTF_API2_FILE);
-
-       /* load utf firmware image */
-       ret = request_firmware(&ar->testmode.utf, filename, ar->dev);
-       if (ret) {
-               ath10k_warn(ar, "failed to retrieve utf firmware '%s': %d\n",
-                           filename, ret);
-               return ret;
-       }
-
-       data = ar->testmode.utf->data;
-       len = ar->testmode.utf->size;
-
-       /* FIXME: call release_firmware() in error cases */
-
-       /* magic also includes the null byte, check that as well */
-       magic_len = strlen(ATH10K_FIRMWARE_MAGIC) + 1;
-
-       if (len < magic_len) {
-               ath10k_err(ar, "utf firmware file is too small to contain magic\n");
-               ret = -EINVAL;
-               goto err;
-       }
-
-       if (memcmp(data, ATH10K_FIRMWARE_MAGIC, magic_len) != 0) {
-               ath10k_err(ar, "invalid firmware magic\n");
-               ret = -EINVAL;
-               goto err;
-       }
-
-       /* jump over the padding */
-       magic_len = ALIGN(magic_len, 4);
-
-       len -= magic_len;
-       data += magic_len;
-
-       /* loop elements */
-       while (len > sizeof(struct ath10k_fw_ie)) {
-               hdr = (struct ath10k_fw_ie *)data;
-
-               ie_id = le32_to_cpu(hdr->id);
-               ie_len = le32_to_cpu(hdr->len);
-
-               len -= sizeof(*hdr);
-               data += sizeof(*hdr);
-
-               if (len < ie_len) {
-                       ath10k_err(ar, "invalid length for FW IE %d (%zu < %zu)\n",
-                                  ie_id, len, ie_len);
-                       ret = -EINVAL;
-                       goto err;
-               }
-
-               switch (ie_id) {
-               case ATH10K_FW_IE_FW_VERSION:
-                       if (ie_len > sizeof(ar->testmode.utf_version) - 1)
-                               break;
-
-                       memcpy(ar->testmode.utf_version, data, ie_len);
-                       ar->testmode.utf_version[ie_len] = '\0';
-
-                       ath10k_dbg(ar, ATH10K_DBG_TESTMODE,
-                                  "testmode found fw utf version %s\n",
-                                  ar->testmode.utf_version);
-                       break;
-               case ATH10K_FW_IE_TIMESTAMP:
-                       /* ignore timestamp, but don't warn about it either */
-                       break;
-               case ATH10K_FW_IE_FW_IMAGE:
-                       ath10k_dbg(ar, ATH10K_DBG_TESTMODE,
-                                  "testmode found fw image ie (%zd B)\n",
-                                  ie_len);
-
-                       ar->testmode.utf_firmware_data = data;
-                       ar->testmode.utf_firmware_len = ie_len;
-                       break;
-               case ATH10K_FW_IE_WMI_OP_VERSION:
-                       if (ie_len != sizeof(u32))
-                               break;
-                       version = (__le32 *)data;
-                       ar->testmode.op_version = le32_to_cpup(version);
-                       ath10k_dbg(ar, ATH10K_DBG_TESTMODE, "testmode found fw ie wmi op version %d\n",
-                                  ar->testmode.op_version);
-                       break;
-               default:
-                       ath10k_warn(ar, "Unknown testmode FW IE: %u\n",
-                                   le32_to_cpu(hdr->id));
-                       break;
-               }
-               /* jump over the padding */
-               ie_len = ALIGN(ie_len, 4);
-
-               len -= ie_len;
-               data += ie_len;
-       }
-
-       if (!ar->testmode.utf_firmware_data || !ar->testmode.utf_firmware_len) {
-               ath10k_err(ar, "No ATH10K_FW_IE_FW_IMAGE found\n");
-               ret = -EINVAL;
-               goto err;
-       }
-
-       return 0;
-
-err:
-       release_firmware(ar->testmode.utf);
-
-       return ret;
-}
-
-static int ath10k_tm_fetch_utf_firmware_api_1(struct ath10k *ar)
+static int ath10k_tm_fetch_utf_firmware_api_1(struct ath10k *ar,
+                                             struct ath10k_fw_file *fw_file)
 {
        char filename[100];
        int ret;
@@ -268,7 +149,7 @@ static int ath10k_tm_fetch_utf_firmware_api_1(struct ath10k *ar)
                 ar->hw_params.fw.dir, ATH10K_FW_UTF_FILE);
 
        /* load utf firmware image */
-       ret = request_firmware(&ar->testmode.utf, filename, ar->dev);
+       ret = request_firmware(&fw_file->firmware, filename, ar->dev);
        if (ret) {
                ath10k_warn(ar, "failed to retrieve utf firmware '%s': %d\n",
                            filename, ret);
@@ -281,24 +162,27 @@ static int ath10k_tm_fetch_utf_firmware_api_1(struct ath10k *ar)
         * correct WMI interface.
         */
 
-       ar->testmode.op_version = ATH10K_FW_WMI_OP_VERSION_10_1;
-       ar->testmode.utf_firmware_data = ar->testmode.utf->data;
-       ar->testmode.utf_firmware_len = ar->testmode.utf->size;
+       fw_file->wmi_op_version = ATH10K_FW_WMI_OP_VERSION_10_1;
+       fw_file->htt_op_version = ATH10K_FW_HTT_OP_VERSION_10_1;
+       fw_file->firmware_data = fw_file->firmware->data;
+       fw_file->firmware_len = fw_file->firmware->size;
 
        return 0;
 }
 
 static int ath10k_tm_fetch_firmware(struct ath10k *ar)
 {
+       struct ath10k_fw_components *utf_mode_fw;
        int ret;
 
-       ret = ath10k_tm_fetch_utf_firmware_api_2(ar);
+       ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_UTF_API2_FILE,
+                                              &ar->testmode.utf_mode_fw.fw_file);
        if (ret == 0) {
                ath10k_dbg(ar, ATH10K_DBG_TESTMODE, "testmode using fw utf api 2");
-               return 0;
+               goto out;
        }
 
-       ret = ath10k_tm_fetch_utf_firmware_api_1(ar);
+       ret = ath10k_tm_fetch_utf_firmware_api_1(ar, &ar->testmode.utf_mode_fw.fw_file);
        if (ret) {
                ath10k_err(ar, "failed to fetch utf firmware binary: %d", ret);
                return ret;
@@ -306,6 +190,21 @@ static int ath10k_tm_fetch_firmware(struct ath10k *ar)
 
        ath10k_dbg(ar, ATH10K_DBG_TESTMODE, "testmode using utf api 1");
 
+out:
+       utf_mode_fw = &ar->testmode.utf_mode_fw;
+
+       /* Use the same board data file as the normal firmware uses (but
+        * it's still "owned" by normal_mode_fw so we shouldn't free it).
+        */
+       utf_mode_fw->board_data = ar->normal_mode_fw.board_data;
+       utf_mode_fw->board_len = ar->normal_mode_fw.board_len;
+
+       if (!utf_mode_fw->fw_file.otp_data) {
+               ath10k_info(ar, "utf.bin didn't contain otp binary, taking it from the normal mode firmware");
+               utf_mode_fw->fw_file.otp_data = ar->normal_mode_fw.fw_file.otp_data;
+               utf_mode_fw->fw_file.otp_len = ar->normal_mode_fw.fw_file.otp_len;
+       }
+
        return 0;
 }
 
@@ -329,7 +228,7 @@ static int ath10k_tm_cmd_utf_start(struct ath10k *ar, struct nlattr *tb[])
                goto err;
        }
 
-       if (WARN_ON(ar->testmode.utf != NULL)) {
+       if (WARN_ON(ar->testmode.utf_mode_fw.fw_file.firmware != NULL)) {
                /* utf image is already downloaded, it shouldn't be */
                ret = -EEXIST;
                goto err;
@@ -344,27 +243,19 @@ static int ath10k_tm_cmd_utf_start(struct ath10k *ar, struct nlattr *tb[])
        spin_lock_bh(&ar->data_lock);
        ar->testmode.utf_monitor = true;
        spin_unlock_bh(&ar->data_lock);
-       BUILD_BUG_ON(sizeof(ar->fw_features) !=
-                    sizeof(ar->testmode.orig_fw_features));
-
-       memcpy(ar->testmode.orig_fw_features, ar->fw_features,
-              sizeof(ar->fw_features));
-       ar->testmode.orig_wmi_op_version = ar->wmi.op_version;
-       memset(ar->fw_features, 0, sizeof(ar->fw_features));
-
-       ar->wmi.op_version = ar->testmode.op_version;
 
        ath10k_dbg(ar, ATH10K_DBG_TESTMODE, "testmode wmi version %d\n",
-                  ar->wmi.op_version);
+                  ar->testmode.utf_mode_fw.fw_file.wmi_op_version);
 
        ret = ath10k_hif_power_up(ar);
        if (ret) {
                ath10k_err(ar, "failed to power up hif (testmode): %d\n", ret);
                ar->state = ATH10K_STATE_OFF;
-               goto err_fw_features;
+               goto err_release_utf_mode_fw;
        }
 
-       ret = ath10k_core_start(ar, ATH10K_FIRMWARE_MODE_UTF);
+       ret = ath10k_core_start(ar, ATH10K_FIRMWARE_MODE_UTF,
+                               &ar->testmode.utf_mode_fw);
        if (ret) {
                ath10k_err(ar, "failed to start core (testmode): %d\n", ret);
                ar->state = ATH10K_STATE_OFF;
@@ -373,8 +264,8 @@ static int ath10k_tm_cmd_utf_start(struct ath10k *ar, struct nlattr *tb[])
 
        ar->state = ATH10K_STATE_UTF;
 
-       if (strlen(ar->testmode.utf_version) > 0)
-               ver = ar->testmode.utf_version;
+       if (strlen(ar->testmode.utf_mode_fw.fw_file.fw_version) > 0)
+               ver = ar->testmode.utf_mode_fw.fw_file.fw_version;
        else
                ver = "API 1";
 
@@ -387,14 +278,9 @@ static int ath10k_tm_cmd_utf_start(struct ath10k *ar, struct nlattr *tb[])
 err_power_down:
        ath10k_hif_power_down(ar);
 
-err_fw_features:
-       /* return the original firmware features */
-       memcpy(ar->fw_features, ar->testmode.orig_fw_features,
-              sizeof(ar->fw_features));
-       ar->wmi.op_version = ar->testmode.orig_wmi_op_version;
-
-       release_firmware(ar->testmode.utf);
-       ar->testmode.utf = NULL;
+err_release_utf_mode_fw:
+       release_firmware(ar->testmode.utf_mode_fw.fw_file.firmware);
+       ar->testmode.utf_mode_fw.fw_file.firmware = NULL;
 
 err:
        mutex_unlock(&ar->conf_mutex);
@@ -415,13 +301,8 @@ static void __ath10k_tm_cmd_utf_stop(struct ath10k *ar)
 
        spin_unlock_bh(&ar->data_lock);
 
-       /* return the original firmware features */
-       memcpy(ar->fw_features, ar->testmode.orig_fw_features,
-              sizeof(ar->fw_features));
-       ar->wmi.op_version = ar->testmode.orig_wmi_op_version;
-
-       release_firmware(ar->testmode.utf);
-       ar->testmode.utf = NULL;
+       release_firmware(ar->testmode.utf_mode_fw.fw_file.firmware);
+       ar->testmode.utf_mode_fw.fw_file.firmware = NULL;
 
        ar->state = ATH10K_STATE_OFF;
 }
index c9223e9..3abb97f 100644 (file)
@@ -20,7 +20,7 @@
 #define ATH10K_QUIET_PERIOD_MIN         25
 #define ATH10K_QUIET_START_OFFSET       10
 #define ATH10K_HWMON_NAME_LEN           15
-#define ATH10K_THERMAL_SYNC_TIMEOUT_HZ (5*HZ)
+#define ATH10K_THERMAL_SYNC_TIMEOUT_HZ (5 * HZ)
 #define ATH10K_THERMAL_THROTTLE_MAX     100
 
 struct ath10k_thermal {
index 9369411..576e7c4 100644 (file)
@@ -130,7 +130,7 @@ struct ath10k_peer *ath10k_peer_find(struct ath10k *ar, int vdev_id,
        list_for_each_entry(peer, &ar->peers, list) {
                if (peer->vdev_id != vdev_id)
                        continue;
-               if (memcmp(peer->addr, addr, ETH_ALEN))
+               if (!ether_addr_equal(peer->addr, addr))
                        continue;
 
                return peer;
@@ -166,7 +166,7 @@ static int ath10k_wait_for_peer_common(struct ath10k *ar, int vdev_id,
 
                        (mapped == expect_mapped ||
                         test_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags));
-               }), 3*HZ);
+               }), 3 * HZ);
 
        if (time_left == 0)
                return -ETIMEDOUT;
@@ -190,6 +190,13 @@ void ath10k_peer_map_event(struct ath10k_htt *htt,
        struct ath10k *ar = htt->ar;
        struct ath10k_peer *peer;
 
+       if (ev->peer_id >= ATH10K_MAX_NUM_PEER_IDS) {
+               ath10k_warn(ar,
+                           "received htt peer map event with idx out of bounds: %hu\n",
+                           ev->peer_id);
+               return;
+       }
+
        spin_lock_bh(&ar->data_lock);
        peer = ath10k_peer_find(ar, ev->vdev_id, ev->addr);
        if (!peer) {
@@ -218,6 +225,13 @@ void ath10k_peer_unmap_event(struct ath10k_htt *htt,
        struct ath10k *ar = htt->ar;
        struct ath10k_peer *peer;
 
+       if (ev->peer_id >= ATH10K_MAX_NUM_PEER_IDS) {
+               ath10k_warn(ar,
+                           "received htt peer unmap event with idx out of bounds: %hu\n",
+                           ev->peer_id);
+               return;
+       }
+
        spin_lock_bh(&ar->data_lock);
        peer = ath10k_peer_find_by_id(ar, ev->peer_id);
        if (!peer) {
index 1085932..e09337e 100644 (file)
@@ -3409,6 +3409,7 @@ static struct wmi_vdev_param_map wmi_tlv_vdev_param_map = {
        .meru_vc = WMI_VDEV_PARAM_UNSUPPORTED,
        .rx_decap_type = WMI_VDEV_PARAM_UNSUPPORTED,
        .bw_nss_ratemask = WMI_VDEV_PARAM_UNSUPPORTED,
+       .set_tsf = WMI_VDEV_PARAM_UNSUPPORTED,
 };
 
 static const struct wmi_ops wmi_tlv_ops = {
index dd67859..b8aa600 100644 (file)
@@ -968,8 +968,8 @@ enum wmi_tlv_service {
 
 #define WMI_SERVICE_IS_ENABLED(wmi_svc_bmap, svc_id, len) \
        ((svc_id) < (len) && \
-        __le32_to_cpu((wmi_svc_bmap)[(svc_id)/(sizeof(u32))]) & \
-        BIT((svc_id)%(sizeof(u32))))
+        __le32_to_cpu((wmi_svc_bmap)[(svc_id) / (sizeof(u32))]) & \
+        BIT((svc_id) % (sizeof(u32))))
 
 #define SVCMAP(x, y, len) \
        do { \
index 4c75c74..621019f 100644 (file)
@@ -781,6 +781,7 @@ static struct wmi_vdev_param_map wmi_vdev_param_map = {
        .meru_vc = WMI_VDEV_PARAM_UNSUPPORTED,
        .rx_decap_type = WMI_VDEV_PARAM_UNSUPPORTED,
        .bw_nss_ratemask = WMI_VDEV_PARAM_UNSUPPORTED,
+       .set_tsf = WMI_VDEV_PARAM_UNSUPPORTED,
 };
 
 /* 10.X WMI VDEV param map */
@@ -856,6 +857,7 @@ static struct wmi_vdev_param_map wmi_10x_vdev_param_map = {
        .meru_vc = WMI_VDEV_PARAM_UNSUPPORTED,
        .rx_decap_type = WMI_VDEV_PARAM_UNSUPPORTED,
        .bw_nss_ratemask = WMI_VDEV_PARAM_UNSUPPORTED,
+       .set_tsf = WMI_VDEV_PARAM_UNSUPPORTED,
 };
 
 static struct wmi_vdev_param_map wmi_10_2_4_vdev_param_map = {
@@ -930,6 +932,7 @@ static struct wmi_vdev_param_map wmi_10_2_4_vdev_param_map = {
        .meru_vc = WMI_VDEV_PARAM_UNSUPPORTED,
        .rx_decap_type = WMI_VDEV_PARAM_UNSUPPORTED,
        .bw_nss_ratemask = WMI_VDEV_PARAM_UNSUPPORTED,
+       .set_tsf = WMI_10X_VDEV_PARAM_TSF_INCREMENT,
 };
 
 static struct wmi_vdev_param_map wmi_10_4_vdev_param_map = {
@@ -1005,6 +1008,7 @@ static struct wmi_vdev_param_map wmi_10_4_vdev_param_map = {
        .meru_vc = WMI_10_4_VDEV_PARAM_MERU_VC,
        .rx_decap_type = WMI_10_4_VDEV_PARAM_RX_DECAP_TYPE,
        .bw_nss_ratemask = WMI_10_4_VDEV_PARAM_BW_NSS_RATEMASK,
+       .set_tsf = WMI_10_4_VDEV_PARAM_TSF_INCREMENT,
 };
 
 static struct wmi_pdev_param_map wmi_pdev_param_map = {
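A hedged usage sketch for the new set_tsf mapping (the vdev id and increment value are assumptions; firmware revisions that keep WMI_VDEV_PARAM_UNSUPPORTED reject the call):

	/* Hypothetical caller: adjust the TSF by tsf_inc on firmware whose
	 * param map defines set_tsf.
	 */
	ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id,
					ar->wmi.vdev_param->set_tsf, tsf_inc);
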
@@ -1804,7 +1808,7 @@ int ath10k_wmi_cmd_send(struct ath10k *ar, struct sk_buff *skb, u32 cmd_id)
                        ret = -ESHUTDOWN;
 
                (ret != -EAGAIN);
-       }), 3*HZ);
+       }), 3 * HZ);
 
        if (ret)
                dev_kfree_skb_any(skb);
@@ -2145,7 +2149,8 @@ static int ath10k_wmi_op_pull_mgmt_rx_ev(struct ath10k *ar, struct sk_buff *skb,
        u32 msdu_len;
        u32 len;
 
-       if (test_bit(ATH10K_FW_FEATURE_EXT_WMI_MGMT_RX, ar->fw_features)) {
+       if (test_bit(ATH10K_FW_FEATURE_EXT_WMI_MGMT_RX,
+                    ar->running_fw->fw_file.fw_features)) {
                ev_v2 = (struct wmi_mgmt_rx_event_v2 *)skb->data;
                ev_hdr = &ev_v2->hdr.v1;
                pull_len = sizeof(*ev_v2);
@@ -4600,10 +4605,6 @@ static void ath10k_wmi_event_service_ready_work(struct work_struct *work)
        ath10k_dbg_dump(ar, ATH10K_DBG_WMI, NULL, "wmi svc: ",
                        arg.service_map, arg.service_map_len);
 
-       /* only manually set fw features when not using FW IE format */
-       if (ar->fw_api == 1 && ar->fw_version_build > 636)
-               set_bit(ATH10K_FW_FEATURE_EXT_WMI_MGMT_RX, ar->fw_features);
-
        if (ar->num_rf_chains > ar->max_spatial_stream) {
                ath10k_warn(ar, "hardware advertises support for more spatial streams than it should (%d > %d)\n",
                            ar->num_rf_chains, ar->max_spatial_stream);
@@ -4634,7 +4635,7 @@ static void ath10k_wmi_event_service_ready_work(struct work_struct *work)
 
        if (test_bit(WMI_SERVICE_PEER_CACHING, ar->wmi.svc_map)) {
                if (test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL,
-                            ar->fw_features))
+                            ar->running_fw->fw_file.fw_features))
                        ar->num_active_peers = TARGET_10_4_QCACHE_ACTIVE_PEERS_PFC +
                                               ar->max_num_vdevs;
                else
@@ -5823,9 +5824,8 @@ ath10k_wmi_put_start_scan_tlvs(struct wmi_start_scan_tlvs *tlvs,
                bssids->num_bssid = __cpu_to_le32(arg->n_bssids);
 
                for (i = 0; i < arg->n_bssids; i++)
-                       memcpy(&bssids->bssid_list[i],
-                              arg->bssids[i].bssid,
-                              ETH_ALEN);
+                       ether_addr_copy(bssids->bssid_list[i].addr,
+                                       arg->bssids[i].bssid);
 
                ptr += sizeof(*bssids);
                ptr += sizeof(struct wmi_mac_addr) * arg->n_bssids;
@@ -7865,7 +7865,7 @@ static const struct wmi_ops wmi_10_4_ops = {
 
 int ath10k_wmi_attach(struct ath10k *ar)
 {
-       switch (ar->wmi.op_version) {
+       switch (ar->running_fw->fw_file.wmi_op_version) {
        case ATH10K_FW_WMI_OP_VERSION_10_4:
                ar->wmi.ops = &wmi_10_4_ops;
                ar->wmi.cmd = &wmi_10_4_cmd_map;
@@ -7907,7 +7907,7 @@ int ath10k_wmi_attach(struct ath10k *ar)
        case ATH10K_FW_WMI_OP_VERSION_UNSET:
        case ATH10K_FW_WMI_OP_VERSION_MAX:
                ath10k_err(ar, "unsupported WMI op version: %d\n",
-                          ar->wmi.op_version);
+                          ar->running_fw->fw_file.wmi_op_version);
                return -EINVAL;
        }
 
index feebd19..db25535 100644 (file)
@@ -180,6 +180,9 @@ enum wmi_service {
        WMI_SERVICE_MESH_NON_11S,
        WMI_SERVICE_PEER_STATS,
        WMI_SERVICE_RESTRT_CHNL_SUPPORT,
+       WMI_SERVICE_TX_MODE_PUSH_ONLY,
+       WMI_SERVICE_TX_MODE_PUSH_PULL,
+       WMI_SERVICE_TX_MODE_DYNAMIC,
 
        /* keep last */
        WMI_SERVICE_MAX,
@@ -302,6 +305,9 @@ enum wmi_10_4_service {
        WMI_10_4_SERVICE_RESTRT_CHNL_SUPPORT,
        WMI_10_4_SERVICE_PEER_STATS,
        WMI_10_4_SERVICE_MESH_11S,
+       WMI_10_4_SERVICE_TX_MODE_PUSH_ONLY,
+       WMI_10_4_SERVICE_TX_MODE_PUSH_PULL,
+       WMI_10_4_SERVICE_TX_MODE_DYNAMIC,
 };
 
 static inline char *wmi_service_name(int service_id)
@@ -396,6 +402,9 @@ static inline char *wmi_service_name(int service_id)
        SVCSTR(WMI_SERVICE_MESH_NON_11S);
        SVCSTR(WMI_SERVICE_PEER_STATS);
        SVCSTR(WMI_SERVICE_RESTRT_CHNL_SUPPORT);
+       SVCSTR(WMI_SERVICE_TX_MODE_PUSH_ONLY);
+       SVCSTR(WMI_SERVICE_TX_MODE_PUSH_PULL);
+       SVCSTR(WMI_SERVICE_TX_MODE_DYNAMIC);
        default:
                return NULL;
        }
@@ -405,8 +414,8 @@ static inline char *wmi_service_name(int service_id)
 
 #define WMI_SERVICE_IS_ENABLED(wmi_svc_bmap, svc_id, len) \
        ((svc_id) < (len) && \
-        __le32_to_cpu((wmi_svc_bmap)[(svc_id)/(sizeof(u32))]) & \
-        BIT((svc_id)%(sizeof(u32))))
+        __le32_to_cpu((wmi_svc_bmap)[(svc_id) / (sizeof(u32))]) & \
+        BIT((svc_id) % (sizeof(u32))))
 
 #define SVCMAP(x, y, len) \
        do { \
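For reference, a minimal sketch of what one SVCMAP expansion performs with this macro (the call site itself is hypothetical):

	/* Word index is svc_id / sizeof(u32), bit index is svc_id %
	 * sizeof(u32) -- the same packing convention the firmware uses.
	 */
	if (WMI_SERVICE_IS_ENABLED(in, WMI_10_4_SERVICE_TX_MODE_PUSH_ONLY, len))
		__set_bit(WMI_SERVICE_TX_MODE_PUSH_ONLY, out);
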
@@ -643,6 +652,12 @@ static inline void wmi_10_4_svc_map(const __le32 *in, unsigned long *out,
               WMI_SERVICE_PEER_STATS, len);
        SVCMAP(WMI_10_4_SERVICE_MESH_11S,
               WMI_SERVICE_MESH_11S, len);
+       SVCMAP(WMI_10_4_SERVICE_TX_MODE_PUSH_ONLY,
+              WMI_SERVICE_TX_MODE_PUSH_ONLY, len);
+       SVCMAP(WMI_10_4_SERVICE_TX_MODE_PUSH_PULL,
+              WMI_SERVICE_TX_MODE_PUSH_PULL, len);
+       SVCMAP(WMI_10_4_SERVICE_TX_MODE_DYNAMIC,
+              WMI_SERVICE_TX_MODE_DYNAMIC, len);
 }
 
 #undef SVCMAP
@@ -1309,7 +1324,7 @@ enum wmi_10x_event_id {
        WMI_10X_PDEV_TPC_CONFIG_EVENTID,
 
        WMI_10X_GPIO_INPUT_EVENTID,
-       WMI_10X_PDEV_UTF_EVENTID = WMI_10X_END_EVENTID-1,
+       WMI_10X_PDEV_UTF_EVENTID = WMI_10X_END_EVENTID - 1,
 };
 
 enum wmi_10_2_cmd_id {
@@ -2042,8 +2057,8 @@ struct wmi_10x_service_ready_event {
        struct wlan_host_mem_req mem_reqs[0];
 } __packed;
 
-#define WMI_SERVICE_READY_TIMEOUT_HZ (5*HZ)
-#define WMI_UNIFIED_READY_TIMEOUT_HZ (5*HZ)
+#define WMI_SERVICE_READY_TIMEOUT_HZ (5 * HZ)
+#define WMI_UNIFIED_READY_TIMEOUT_HZ (5 * HZ)
 
 struct wmi_ready_event {
        __le32 sw_version;
@@ -2661,9 +2676,14 @@ struct wmi_resource_config_10_4 {
         */
        __le32 iphdr_pad_config;
 
-       /* qwrap configuration
+       /* qwrap configuration (bits 15-0)
         * 1  - This is qwrap configuration
         * 0  - This is not qwrap
+        *
+        * Bits 31-16 is alloc_frag_desc_for_data_pkt (1 enables, 0 disables)
+        * In order to get ack-RSSI reporting and to specify the tx-rate for
+        * individual frames, this option must be enabled.  This uses an extra
+        * 4 bytes per tx-msdu descriptor, so don't enable it unless you need it.
         */
        __le32 qwrap_config;
 } __packed;
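A sketch of decoding the documented layout on the host side (the variable names are made up for illustration):

	/* bits 15-0 flag qwrap mode, bits 31-16 carry
	 * alloc_frag_desc_for_data_pkt as described above.
	 */
	u32 cfg = __le32_to_cpu(config->qwrap_config);
	bool is_qwrap = cfg & 0xffff;
	bool alloc_frag_desc = (cfg >> 16) & 0xffff;
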
@@ -4384,14 +4404,14 @@ enum wmi_vdev_subtype_10_4 {
 /*
  * Indicates that AP VDEV uses hidden ssid. only valid for
  *  AP/GO */
-#define WMI_VDEV_START_HIDDEN_SSID  (1<<0)
+#define WMI_VDEV_START_HIDDEN_SSID  (1 << 0)
 /*
  * Indicates if robust management frame/management frame
  *  protection is enabled. For GO/AP vdevs, it indicates that
  *  it may support station/client associations with RMF enabled.
  *  For STA/client vdevs, it indicates that sta will
  *  associate with AP with RMF enabled. */
-#define WMI_VDEV_START_PMF_ENABLED  (1<<1)
+#define WMI_VDEV_START_PMF_ENABLED  (1 << 1)
 
 struct wmi_p2p_noa_descriptor {
        __le32 type_count; /* 255: continuous schedule, 0: reserved */
@@ -4630,6 +4650,7 @@ struct wmi_vdev_param_map {
        u32 meru_vc;
        u32 rx_decap_type;
        u32 bw_nss_ratemask;
+       u32 set_tsf;
 };
 
 #define WMI_VDEV_PARAM_UNSUPPORTED 0
@@ -4886,6 +4907,7 @@ enum wmi_10x_vdev_param {
        WMI_10X_VDEV_PARAM_RTS_FIXED_RATE,
        WMI_10X_VDEV_PARAM_VHT_SGIMASK,
        WMI_10X_VDEV_PARAM_VHT80_RATEMASK,
+       WMI_10X_VDEV_PARAM_TSF_INCREMENT,
 };
 
 enum wmi_10_4_vdev_param {
@@ -4955,6 +4977,12 @@ enum wmi_10_4_vdev_param {
        WMI_10_4_VDEV_PARAM_MERU_VC,
        WMI_10_4_VDEV_PARAM_RX_DECAP_TYPE,
        WMI_10_4_VDEV_PARAM_BW_NSS_RATEMASK,
+       WMI_10_4_VDEV_PARAM_SENSOR_AP,
+       WMI_10_4_VDEV_PARAM_BEACON_RATE,
+       WMI_10_4_VDEV_PARAM_DTIM_ENABLE_CTS,
+       WMI_10_4_VDEV_PARAM_STA_KICKOUT,
+       WMI_10_4_VDEV_PARAM_CAPABILITIES,
+       WMI_10_4_VDEV_PARAM_TSF_INCREMENT,
 };
 
 #define WMI_VDEV_PARAM_TXBF_SU_TX_BFEE BIT(0)
@@ -5329,7 +5357,7 @@ enum wmi_sta_ps_param_pspoll_count {
 #define WMI_UAPSD_AC_TYPE_TRIG 1
 
 #define WMI_UAPSD_AC_BIT_MASK(ac, type) \
-       ((type ==  WMI_UAPSD_AC_TYPE_DELI) ? (1<<(ac<<1)) : (1<<((ac<<1)+1)))
+       ((type ==  WMI_UAPSD_AC_TYPE_DELI) ? (1 << (ac << 1)) : (1 << ((ac << 1) + 1)))
 
 enum wmi_sta_ps_param_uapsd {
        WMI_STA_PS_UAPSD_AC0_DELIVERY_EN = (1 << 0),
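The macro packs two bits per access class: delivery at bit 2*ac, trigger at bit 2*ac + 1. A worked example:

	/* For AC 1: delivery is 1 << 2 (0x4), trigger is 1 << 3 (0x8). */
	u32 deli = WMI_UAPSD_AC_BIT_MASK(1, WMI_UAPSD_AC_TYPE_DELI);
	u32 trig = WMI_UAPSD_AC_BIT_MASK(1, WMI_UAPSD_AC_TYPE_TRIG);
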
@@ -5744,7 +5772,7 @@ struct wmi_rate_set {
         * the rates are filled from least significant byte to most
         * significant byte.
         */
-       __le32 rates[(MAX_SUPPORTED_RATES/4)+1];
+       __le32 rates[(MAX_SUPPORTED_RATES / 4) + 1];
 } __packed;
 
 struct wmi_rate_set_arg {
index 8e02b38..77100d4 100644 (file)
@@ -233,7 +233,7 @@ int ath10k_wow_op_suspend(struct ieee80211_hw *hw,
        mutex_lock(&ar->conf_mutex);
 
        if (WARN_ON(!test_bit(ATH10K_FW_FEATURE_WOWLAN_SUPPORT,
-                             ar->fw_features))) {
+                             ar->running_fw->fw_file.fw_features))) {
                ret = 1;
                goto exit;
        }
@@ -285,7 +285,7 @@ int ath10k_wow_op_resume(struct ieee80211_hw *hw)
        mutex_lock(&ar->conf_mutex);
 
        if (WARN_ON(!test_bit(ATH10K_FW_FEATURE_WOWLAN_SUPPORT,
-                             ar->fw_features))) {
+                             ar->running_fw->fw_file.fw_features))) {
                ret = 1;
                goto exit;
        }
@@ -325,7 +325,8 @@ exit:
 
 int ath10k_wow_init(struct ath10k *ar)
 {
-       if (!test_bit(ATH10K_FW_FEATURE_WOWLAN_SUPPORT, ar->fw_features))
+       if (!test_bit(ATH10K_FW_FEATURE_WOWLAN_SUPPORT,
+                     ar->running_fw->fw_file.fw_features))
                return 0;
 
        if (WARN_ON(!test_bit(WMI_SERVICE_WOW, ar->wmi.svc_map)))
index 8f87930..1b271b9 100644 (file)
@@ -274,6 +274,9 @@ void ar5008_hw_cmn_spur_mitigate(struct ath_hw *ah,
        };
        static const int inc[4] = { 0, 100, 0, 0 };
 
+       memset(&mask_m, 0, sizeof(int8_t) * 123);
+       memset(&mask_p, 0, sizeof(int8_t) * 123);
+
        cur_bin = -6000;
        upper = bin + 100;
        lower = bin - 100;
@@ -424,14 +427,9 @@ static void ar5008_hw_spur_mitigate(struct ath_hw *ah,
        int tmp, new;
        int i;
 
-       int8_t mask_m[123];
-       int8_t mask_p[123];
        int cur_bb_spur;
        bool is2GHz = IS_CHAN_2GHZ(chan);
 
-       memset(&mask_m, 0, sizeof(int8_t) * 123);
-       memset(&mask_p, 0, sizeof(int8_t) * 123);
-
        for (i = 0; i < AR_EEPROM_MODAL_SPURS; i++) {
                cur_bb_spur = ah->eep_ops->get_spur_channel(ah, i, is2GHz);
                if (AR_NO_SPUR == cur_bb_spur)
index db66245..53d7445 100644 (file)
@@ -178,14 +178,9 @@ static void ar9002_hw_spur_mitigate(struct ath_hw *ah,
        int i;
        struct chan_centers centers;
 
-       int8_t mask_m[123];
-       int8_t mask_p[123];
        int cur_bb_spur;
        bool is2GHz = IS_CHAN_2GHZ(chan);
 
-       memset(&mask_m, 0, sizeof(int8_t) * 123);
-       memset(&mask_p, 0, sizeof(int8_t) * 123);
-
        ath9k_hw_get_channel_centers(ah, chan, &centers);
        freq = centers.synth_center;
 
index 8a8d785..a553c91 100644 (file)
@@ -246,7 +246,7 @@ static int ath9k_htc_set_channel(struct ath9k_htc_priv *priv,
        struct ieee80211_conf *conf = &common->hw->conf;
        bool fastcc;
        struct ieee80211_channel *channel = hw->conf.chandef.chan;
-       struct ath9k_hw_cal_data *caldata = NULL;
+       struct ath9k_hw_cal_data *caldata;
        enum htc_phymode mode;
        __be16 htc_mode;
        u8 cmd_rsp;
@@ -274,10 +274,7 @@ static int ath9k_htc_set_channel(struct ath9k_htc_priv *priv,
                priv->ah->curchan->channel,
                channel->center_freq, conf_is_ht(conf), conf_is_ht40(conf),
                fastcc);
-
-       if (!fastcc)
-               caldata = &priv->caldata;
-
+       caldata = fastcc ? NULL : &priv->caldata;
        ret = ath9k_hw_reset(ah, hchan, caldata, fastcc);
        if (ret) {
                ath_err(common,
index 4200906..8b2895f 100644 (file)
@@ -2914,8 +2914,7 @@ void ath9k_hw_apply_txpower(struct ath_hw *ah, struct ath9k_channel *chan,
 {
        struct ath_regulatory *reg = ath9k_hw_regulatory(ah);
        struct ieee80211_channel *channel;
-       int chan_pwr, new_pwr, max_gain;
-       int ant_gain, ant_reduction = 0;
+       int chan_pwr, new_pwr;
 
        if (!chan)
                return;
@@ -2923,15 +2922,10 @@ void ath9k_hw_apply_txpower(struct ath_hw *ah, struct ath9k_channel *chan,
        channel = chan->chan;
        chan_pwr = min_t(int, channel->max_power * 2, MAX_RATE_POWER);
        new_pwr = min_t(int, chan_pwr, reg->power_limit);
-       max_gain = chan_pwr - new_pwr + channel->max_antenna_gain * 2;
-
-       ant_gain = get_antenna_gain(ah, chan);
-       if (ant_gain > max_gain)
-               ant_reduction = ant_gain - max_gain;
 
        ah->eep_ops->set_txpower(ah, chan,
                                 ath9k_regd_get_ctl(reg, chan),
-                                ant_reduction, new_pwr, test);
+                                get_antenna_gain(ah, chan), new_pwr, test);
 }
 
 void ath9k_hw_set_txpowerlimit(struct ath_hw *ah, u32 limit, bool test)
index 4bf1e24..2ee8624 100644 (file)
@@ -49,6 +49,10 @@ int ath9k_led_blink;
 module_param_named(blink, ath9k_led_blink, int, 0444);
 MODULE_PARM_DESC(blink, "Enable LED blink on activity");
 
+static int ath9k_led_active_high = -1;
+module_param_named(led_active_high, ath9k_led_active_high, int, 0444);
+MODULE_PARM_DESC(led_active_high, "Invert LED polarity");
+
 static int ath9k_btcoex_enable;
 module_param_named(btcoex_enable, ath9k_btcoex_enable, int, 0444);
 MODULE_PARM_DESC(btcoex_enable, "Enable wifi-BT coexistence");
@@ -477,7 +481,7 @@ static void ath9k_eeprom_request_cb(const struct firmware *eeprom_blob,
 static int ath9k_eeprom_request(struct ath_softc *sc, const char *name)
 {
        struct ath9k_eeprom_ctx ec;
-       struct ath_hw *ah = ah = sc->sc_ah;
+       struct ath_hw *ah = sc->sc_ah;
        int err;
 
        /* try to load the EEPROM content asynchronously */
@@ -600,6 +604,9 @@ static int ath9k_init_softc(u16 devid, struct ath_softc *sc,
        if (ret)
                return ret;
 
+       if (ath9k_led_active_high != -1)
+               ah->config.led_active_high = ath9k_led_active_high == 1;
+
        /*
         * Enable WLAN/BT RX Antenna diversity only when:
         *
index e6fef1b..7cdaf40 100644 (file)
@@ -28,6 +28,16 @@ static const struct pci_device_id ath_pci_id_table[] = {
        { PCI_VDEVICE(ATHEROS, 0x0024) }, /* PCI-E */
        { PCI_VDEVICE(ATHEROS, 0x0027) }, /* PCI   */
        { PCI_VDEVICE(ATHEROS, 0x0029) }, /* PCI   */
+
+#ifdef CONFIG_ATH9K_PCOEM
+       /* Mini PCI AR9220 MB92 cards: Compex WLM200NX, Wistron DNMA-92 */
+       { PCI_DEVICE_SUB(PCI_VENDOR_ID_ATHEROS,
+                        0x0029,
+                        PCI_VENDOR_ID_ATHEROS,
+                        0x2096),
+         .driver_data = ATH9K_PCI_LED_ACT_HI },
+#endif
+
        { PCI_VDEVICE(ATHEROS, 0x002A) }, /* PCI-E */
 
 #ifdef CONFIG_ATH9K_PCOEM
index ef44a2d..2a6bb62 100644 (file)
@@ -33,9 +33,7 @@ static ssize_t read_file_bool_bmps(struct file *file, char __user *user_buf,
        char buf[3];
 
        list_for_each_entry(vif_priv, &wcn->vif_list, list) {
-                       vif = container_of((void *)vif_priv,
-                                  struct ieee80211_vif,
-                                  drv_priv);
+                       vif = wcn36xx_priv_to_vif(vif_priv);
                        if (NL80211_IFTYPE_STATION == vif->type) {
                                if (vif_priv->pw_state == WCN36XX_BMPS)
                                        buf[0] = '1';
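The new helper presumably wraps the container_of dance removed here; a sketch consistent with these call sites (the real definition lives in wcn36xx.h and is not shown in this hunk):

	static inline struct ieee80211_vif *
	wcn36xx_priv_to_vif(struct wcn36xx_vif *vif_priv)
	{
		return container_of((void *)vif_priv, struct ieee80211_vif,
				    drv_priv);
	}
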
@@ -70,9 +68,7 @@ static ssize_t write_file_bool_bmps(struct file *file,
        case 'Y':
        case '1':
                list_for_each_entry(vif_priv, &wcn->vif_list, list) {
-                       vif = container_of((void *)vif_priv,
-                                  struct ieee80211_vif,
-                                  drv_priv);
+                       vif = wcn36xx_priv_to_vif(vif_priv);
                        if (NL80211_IFTYPE_STATION == vif->type) {
                                wcn36xx_enable_keep_alive_null_packet(wcn, vif);
                                wcn36xx_pmc_enter_bmps_state(wcn, vif);
@@ -83,9 +79,7 @@ static ssize_t write_file_bool_bmps(struct file *file,
        case 'N':
        case '0':
                list_for_each_entry(vif_priv, &wcn->vif_list, list) {
-                       vif = container_of((void *)vif_priv,
-                                  struct ieee80211_vif,
-                                  drv_priv);
+                       vif = wcn36xx_priv_to_vif(vif_priv);
                        if (NL80211_IFTYPE_STATION == vif->type)
                                wcn36xx_pmc_exit_bmps_state(wcn, vif);
                }
index b947de0..658bfb8 100644 (file)
 
 #define WCN36XX_HAL_IPV4_ADDR_LEN       4
 
-#define WALN_HAL_STA_INVALID_IDX 0xFF
+#define WCN36XX_HAL_STA_INVALID_IDX 0xFF
 #define WCN36XX_HAL_BSS_INVALID_IDX 0xFF
 
 /* Default Beacon template size */
 #define BEACON_TEMPLATE_SIZE 0x180
 
+/* Minimum PVM size that the FW expects. See comment in smd.c for details. */
+#define TIM_MIN_PVM_SIZE 6
+
 /* Param Change Bitmap sent to HAL */
 #define PARAM_BCN_INTERVAL_CHANGED                      (1 << 0)
 #define PARAM_SHORT_PREAMBLE_CHANGED                 (1 << 1)
@@ -2884,11 +2887,14 @@ struct update_beacon_rsp_msg {
 struct wcn36xx_hal_send_beacon_req_msg {
        struct wcn36xx_hal_msg_header header;
 
+       /* length of the template + 6. Only qcom knows why */
+       u32 beacon_length6;
+
        /* length of the template. */
        u32 beacon_length;
 
        /* Beacon data. */
-       u8 beacon[BEACON_TEMPLATE_SIZE];
+       u8 beacon[BEACON_TEMPLATE_SIZE - sizeof(u32)];
 
        u8 bssid[ETH_ALEN];
 
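A hedged sketch of how the two length fields would be filled for a template of len bytes (inferred from the comments above, not shown in this hunk):

	/* Assumed fill: beacon_length6 is the template length plus the
	 * minimum PVM size (TIM_MIN_PVM_SIZE == 6) the firmware expects.
	 */
	msg_body.beacon_length = len;
	msg_body.beacon_length6 = msg_body.beacon_length + TIM_MIN_PVM_SIZE;
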
@@ -4261,9 +4267,9 @@ struct wcn36xx_hal_rcv_flt_mc_addr_list_type {
        u8 data_offset;
 
        u32 mc_addr_count;
-       u8 mc_addr[ETH_ALEN][WCN36XX_HAL_MAX_NUM_MULTICAST_ADDRESS];
+       u8 mc_addr[WCN36XX_HAL_MAX_NUM_MULTICAST_ADDRESS][ETH_ALEN];
        u8 bss_index;
-};
+} __packed;
 
 struct wcn36xx_hal_set_pkt_filter_rsp_msg {
        struct wcn36xx_hal_msg_header header;
@@ -4317,7 +4323,7 @@ struct wcn36xx_hal_rcv_flt_pkt_clear_rsp_msg {
 struct wcn36xx_hal_rcv_flt_pkt_set_mc_list_req_msg {
        struct wcn36xx_hal_msg_header header;
        struct wcn36xx_hal_rcv_flt_mc_addr_list_type mc_addr_list;
-};
+} __packed;
 
 struct wcn36xx_hal_rcv_flt_pkt_set_mc_list_rsp_msg {
        struct wcn36xx_hal_msg_header header;
@@ -4383,6 +4389,45 @@ enum place_holder_in_cap_bitmap {
        RTT = 20,
        RATECTRL = 21,
        WOW = 22,
+       WLAN_ROAM_SCAN_OFFLOAD = 23,
+       SPECULATIVE_PS_POLL = 24,
+       SCAN_SCH = 25,
+       IBSS_HEARTBEAT_OFFLOAD = 26,
+       WLAN_SCAN_OFFLOAD = 27,
+       WLAN_PERIODIC_TX_PTRN = 28,
+       ADVANCE_TDLS = 29,
+       BATCH_SCAN = 30,
+       FW_IN_TX_PATH = 31,
+       EXTENDED_NSOFFLOAD_SLOT = 32,
+       CH_SWITCH_V1 = 33,
+       HT40_OBSS_SCAN = 34,
+       UPDATE_CHANNEL_LIST = 35,
+       WLAN_MCADDR_FLT = 36,
+       WLAN_CH144 = 37,
+       NAN = 38,
+       TDLS_SCAN_COEXISTENCE = 39,
+       LINK_LAYER_STATS_MEAS = 40,
+       MU_MIMO = 41,
+       EXTENDED_SCAN = 42,
+       DYNAMIC_WMM_PS = 43,
+       MAC_SPOOFED_SCAN = 44,
+       BMU_ERROR_GENERIC_RECOVERY = 45,
+       DISA = 46,
+       FW_STATS = 47,
+       WPS_PRBRSP_TMPL = 48,
+       BCN_IE_FLT_DELTA = 49,
+       TDLS_OFF_CHANNEL = 51,
+       RTT3 = 52,
+       MGMT_FRAME_LOGGING = 53,
+       ENHANCED_TXBD_COMPLETION = 54,
+       LOGGING_ENHANCEMENT = 55,
+       EXT_SCAN_ENHANCED = 56,
+       MEMORY_DUMP_SUPPORTED = 57,
+       PER_PKT_STATS_SUPPORTED = 58,
+       EXT_LL_STAT = 60,
+       WIFI_CONFIG = 61,
+       ANTENNA_DIVERSITY_SELECTION = 62,
+
        MAX_FEATURE_SUPPORTED = 128,
 };
 
index 9a1db3b..a920d70 100644 (file)
@@ -201,7 +201,45 @@ static const char * const wcn36xx_caps_names[] = {
        "BCN_FILTER",                   /* 19 */
        "RTT",                          /* 20 */
        "RATECTRL",                     /* 21 */
-       "WOW"                           /* 22 */
+       "WOW",                          /* 22 */
+       "WLAN_ROAM_SCAN_OFFLOAD",       /* 23 */
+       "SPECULATIVE_PS_POLL",          /* 24 */
+       "SCAN_SCH",                     /* 25 */
+       "IBSS_HEARTBEAT_OFFLOAD",       /* 26 */
+       "WLAN_SCAN_OFFLOAD",            /* 27 */
+       "WLAN_PERIODIC_TX_PTRN",        /* 28 */
+       "ADVANCE_TDLS",                 /* 29 */
+       "BATCH_SCAN",                   /* 30 */
+       "FW_IN_TX_PATH",                /* 31 */
+       "EXTENDED_NSOFFLOAD_SLOT",      /* 32 */
+       "CH_SWITCH_V1",                 /* 33 */
+       "HT40_OBSS_SCAN",               /* 34 */
+       "UPDATE_CHANNEL_LIST",          /* 35 */
+       "WLAN_MCADDR_FLT",              /* 36 */
+       "WLAN_CH144",                   /* 37 */
+       "NAN",                          /* 38 */
+       "TDLS_SCAN_COEXISTENCE",        /* 39 */
+       "LINK_LAYER_STATS_MEAS",        /* 40 */
+       "MU_MIMO",                      /* 41 */
+       "EXTENDED_SCAN",                /* 42 */
+       "DYNAMIC_WMM_PS",               /* 43 */
+       "MAC_SPOOFED_SCAN",             /* 44 */
+       "BMU_ERROR_GENERIC_RECOVERY",   /* 45 */
+       "DISA",                         /* 46 */
+       "FW_STATS",                     /* 47 */
+       "WPS_PRBRSP_TMPL",              /* 48 */
+       "BCN_IE_FLT_DELTA",             /* 49 */
+       "TDLS_OFF_CHANNEL",             /* 51 */
+       "RTT3",                         /* 52 */
+       "MGMT_FRAME_LOGGING",           /* 53 */
+       "ENHANCED_TXBD_COMPLETION",     /* 54 */
+       "LOGGING_ENHANCEMENT",          /* 55 */
+       "EXT_SCAN_ENHANCED",            /* 56 */
+       "MEMORY_DUMP_SUPPORTED",        /* 57 */
+       "PER_PKT_STATS_SUPPORTED",      /* 58 */
+       "EXT_LL_STAT",                  /* 60 */
+       "WIFI_CONFIG",                  /* 61 */
+       "ANTENNA_DIVERSITY_SELECTION",  /* 62 */
 };
 
 static const char *wcn36xx_get_cap_name(enum place_holder_in_cap_bitmap x)
@@ -287,6 +325,7 @@ static int wcn36xx_start(struct ieee80211_hw *hw)
        }
 
        wcn36xx_detect_chip_version(wcn);
+       wcn36xx_smd_update_cfg(wcn, WCN36XX_HAL_CFG_ENABLE_MC_ADDR_LIST, 1);
 
        /* DMA channel initialization */
        ret = wcn36xx_dxe_init(wcn);
@@ -346,9 +385,7 @@ static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed)
                wcn36xx_dbg(WCN36XX_DBG_MAC, "wcn36xx_config channel switch=%d\n",
                            ch);
                list_for_each_entry(tmp, &wcn->vif_list, list) {
-                       vif = container_of((void *)tmp,
-                                          struct ieee80211_vif,
-                                          drv_priv);
+                       vif = wcn36xx_priv_to_vif(tmp);
                        wcn36xx_smd_switch_channel(wcn, vif, ch);
                }
        }
@@ -356,15 +393,57 @@ static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed)
        return 0;
 }
 
-#define WCN36XX_SUPPORTED_FILTERS (0)
-
 static void wcn36xx_configure_filter(struct ieee80211_hw *hw,
                                     unsigned int changed,
                                     unsigned int *total, u64 multicast)
 {
+       struct wcn36xx_hal_rcv_flt_mc_addr_list_type *fp;
+       struct wcn36xx *wcn = hw->priv;
+       struct wcn36xx_vif *tmp;
+       struct ieee80211_vif *vif = NULL;
+
        wcn36xx_dbg(WCN36XX_DBG_MAC, "mac configure filter\n");
 
-       *total &= WCN36XX_SUPPORTED_FILTERS;
+       *total &= FIF_ALLMULTI;
+
+       fp = (void *)(unsigned long)multicast;
+       list_for_each_entry(tmp, &wcn->vif_list, list) {
+               vif = wcn36xx_priv_to_vif(tmp);
+
+               /* FW handles MC filtering only when connected as STA */
+               if (*total & FIF_ALLMULTI)
+                       wcn36xx_smd_set_mc_list(wcn, vif, NULL);
+               else if (NL80211_IFTYPE_STATION == vif->type && tmp->sta_assoc)
+                       wcn36xx_smd_set_mc_list(wcn, vif, fp);
+       }
+       kfree(fp);
+}
+
+static u64 wcn36xx_prepare_multicast(struct ieee80211_hw *hw,
+                                    struct netdev_hw_addr_list *mc_list)
+{
+       struct wcn36xx_hal_rcv_flt_mc_addr_list_type *fp;
+       struct netdev_hw_addr *ha;
+
+       wcn36xx_dbg(WCN36XX_DBG_MAC, "mac prepare multicast list\n");
+       fp = kzalloc(sizeof(*fp), GFP_ATOMIC);
+       if (!fp) {
+               wcn36xx_err("Out of memory setting filters.\n");
+               return 0;
+       }
+
+       fp->mc_addr_count = 0;
+       /* update multicast filtering parameters */
+       if (netdev_hw_addr_list_count(mc_list) <=
+           WCN36XX_HAL_MAX_NUM_MULTICAST_ADDRESS) {
+               netdev_hw_addr_list_for_each(ha, mc_list) {
+                       memcpy(fp->mc_addr[fp->mc_addr_count],
+                                       ha->addr, ETH_ALEN);
+                       fp->mc_addr_count++;
+               }
+       }
+
+       return (u64)(unsigned long)fp;
 }
 
 static void wcn36xx_tx(struct ieee80211_hw *hw,
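These two ops follow mac80211's multicast-cookie contract: prepare_multicast() heap-allocates the filter list and returns it as an opaque u64, and configure_filter() recovers the pointer from its multicast argument and owns freeing it. A condensed sketch of the round trip, mirroring the code above:

	u64 cookie = (u64)(unsigned long)fp;	/* returned by prepare_multicast */
	struct wcn36xx_hal_rcv_flt_mc_addr_list_type *list =
		(void *)(unsigned long)cookie;	/* recovered in configure_filter */
	kfree(list);				/* the receiver frees it */
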
@@ -375,7 +454,7 @@ static void wcn36xx_tx(struct ieee80211_hw *hw,
        struct wcn36xx_sta *sta_priv = NULL;
 
        if (control->sta)
-               sta_priv = (struct wcn36xx_sta *)control->sta->drv_priv;
+               sta_priv = wcn36xx_sta_to_priv(control->sta);
 
        if (wcn36xx_start_tx(wcn, sta_priv, skb))
                ieee80211_free_txskb(wcn->hw, skb);
@@ -387,8 +466,8 @@ static int wcn36xx_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
                           struct ieee80211_key_conf *key_conf)
 {
        struct wcn36xx *wcn = hw->priv;
-       struct wcn36xx_vif *vif_priv = (struct wcn36xx_vif *)vif->drv_priv;
-       struct wcn36xx_sta *sta_priv = vif_priv->sta;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
+       struct wcn36xx_sta *sta_priv = wcn36xx_sta_to_priv(sta);
        int ret = 0;
        u8 key[WLAN_MAX_KEY_LEN];
 
@@ -473,6 +552,7 @@ static int wcn36xx_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
                break;
        case DISABLE_KEY:
                if (!(IEEE80211_KEY_FLAG_PAIRWISE & key_conf->flags)) {
+                       vif_priv->encrypt_type = WCN36XX_HAL_ED_NONE;
                        wcn36xx_smd_remove_bsskey(wcn,
                                vif_priv->encrypt_type,
                                key_conf->keyidx);
@@ -520,7 +600,7 @@ static void wcn36xx_update_allowed_rates(struct ieee80211_sta *sta,
 {
        int i, size;
        u16 *rates_table;
-       struct wcn36xx_sta *sta_priv = (struct wcn36xx_sta *)sta->drv_priv;
+       struct wcn36xx_sta *sta_priv = wcn36xx_sta_to_priv(sta);
        u32 rates = sta->supp_rates[band];
 
        memset(&sta_priv->supported_rates, 0,
@@ -590,7 +670,7 @@ static void wcn36xx_bss_info_changed(struct ieee80211_hw *hw,
        struct sk_buff *skb = NULL;
        u16 tim_off, tim_len;
        enum wcn36xx_hal_link_state link_state;
-       struct wcn36xx_vif *vif_priv = (struct wcn36xx_vif *)vif->drv_priv;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
 
        wcn36xx_dbg(WCN36XX_DBG_MAC, "mac bss info changed vif %p changed 0x%08x\n",
                    vif, changed);
@@ -620,7 +700,7 @@ static void wcn36xx_bss_info_changed(struct ieee80211_hw *hw,
 
                if (!is_zero_ether_addr(bss_conf->bssid)) {
                        vif_priv->is_joining = true;
-                       vif_priv->bss_index = 0xff;
+                       vif_priv->bss_index = WCN36XX_HAL_BSS_INVALID_IDX;
                        wcn36xx_smd_join(wcn, bss_conf->bssid,
                                         vif->addr, WCN36XX_HW_CHANNEL(wcn));
                        wcn36xx_smd_config_bss(wcn, vif, NULL,
@@ -628,6 +708,7 @@ static void wcn36xx_bss_info_changed(struct ieee80211_hw *hw,
                } else {
                        vif_priv->is_joining = false;
                        wcn36xx_smd_delete_bss(wcn, vif);
+                       vif_priv->encrypt_type = WCN36XX_HAL_ED_NONE;
                }
        }
 
@@ -655,6 +736,7 @@ static void wcn36xx_bss_info_changed(struct ieee80211_hw *hw,
                                     vif->addr,
                                     bss_conf->aid);
 
+                       vif_priv->sta_assoc = true;
                        rcu_read_lock();
                        sta = ieee80211_find_sta(vif, bss_conf->bssid);
                        if (!sta) {
@@ -663,7 +745,7 @@ static void wcn36xx_bss_info_changed(struct ieee80211_hw *hw,
                                rcu_read_unlock();
                                goto out;
                        }
-                       sta_priv = (struct wcn36xx_sta *)sta->drv_priv;
+                       sta_priv = wcn36xx_sta_to_priv(sta);
 
                        wcn36xx_update_allowed_rates(sta, WCN36XX_BAND(wcn));
 
@@ -686,6 +768,7 @@ static void wcn36xx_bss_info_changed(struct ieee80211_hw *hw,
                                    bss_conf->bssid,
                                    vif->addr,
                                    bss_conf->aid);
+                       vif_priv->sta_assoc = false;
                        wcn36xx_smd_set_link_st(wcn,
                                                bss_conf->bssid,
                                                vif->addr,
@@ -713,7 +796,7 @@ static void wcn36xx_bss_info_changed(struct ieee80211_hw *hw,
 
                if (bss_conf->enable_beacon) {
                        vif_priv->dtim_period = bss_conf->dtim_period;
-                       vif_priv->bss_index = 0xff;
+                       vif_priv->bss_index = WCN36XX_HAL_BSS_INVALID_IDX;
                        wcn36xx_smd_config_bss(wcn, vif, NULL,
                                               vif->addr, false);
                        skb = ieee80211_beacon_get_tim(hw, vif, &tim_off,
@@ -734,9 +817,9 @@ static void wcn36xx_bss_info_changed(struct ieee80211_hw *hw,
                        wcn36xx_smd_set_link_st(wcn, vif->addr, vif->addr,
                                                link_state);
                } else {
+                       wcn36xx_smd_delete_bss(wcn, vif);
                        wcn36xx_smd_set_link_st(wcn, vif->addr, vif->addr,
                                                WCN36XX_HAL_LINK_IDLE_STATE);
-                       wcn36xx_smd_delete_bss(wcn, vif);
                }
        }
 out:
@@ -757,7 +840,7 @@ static void wcn36xx_remove_interface(struct ieee80211_hw *hw,
                                     struct ieee80211_vif *vif)
 {
        struct wcn36xx *wcn = hw->priv;
-       struct wcn36xx_vif *vif_priv = (struct wcn36xx_vif *)vif->drv_priv;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
        wcn36xx_dbg(WCN36XX_DBG_MAC, "mac remove interface vif %p\n", vif);
 
        list_del(&vif_priv->list);
@@ -768,7 +851,7 @@ static int wcn36xx_add_interface(struct ieee80211_hw *hw,
                                 struct ieee80211_vif *vif)
 {
        struct wcn36xx *wcn = hw->priv;
-       struct wcn36xx_vif *vif_priv = (struct wcn36xx_vif *)vif->drv_priv;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
 
        wcn36xx_dbg(WCN36XX_DBG_MAC, "mac add interface vif %p type %d\n",
                    vif, vif->type);
@@ -792,13 +875,12 @@ static int wcn36xx_sta_add(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
                           struct ieee80211_sta *sta)
 {
        struct wcn36xx *wcn = hw->priv;
-       struct wcn36xx_vif *vif_priv = (struct wcn36xx_vif *)vif->drv_priv;
-       struct wcn36xx_sta *sta_priv = (struct wcn36xx_sta *)sta->drv_priv;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
+       struct wcn36xx_sta *sta_priv = wcn36xx_sta_to_priv(sta);
        wcn36xx_dbg(WCN36XX_DBG_MAC, "mac sta add vif %p sta %pM\n",
                    vif, sta->addr);
 
        spin_lock_init(&sta_priv->ampdu_lock);
-       vif_priv->sta = sta_priv;
        sta_priv->vif = vif_priv;
        /*
         * For STA mode HW will be configured on BSS_CHANGED_ASSOC because
@@ -817,14 +899,12 @@ static int wcn36xx_sta_remove(struct ieee80211_hw *hw,
                              struct ieee80211_sta *sta)
 {
        struct wcn36xx *wcn = hw->priv;
-       struct wcn36xx_vif *vif_priv = (struct wcn36xx_vif *)vif->drv_priv;
-       struct wcn36xx_sta *sta_priv = (struct wcn36xx_sta *)sta->drv_priv;
+       struct wcn36xx_sta *sta_priv = wcn36xx_sta_to_priv(sta);
 
        wcn36xx_dbg(WCN36XX_DBG_MAC, "mac sta remove vif %p sta %pM index %d\n",
                    vif, sta->addr, sta_priv->sta_index);
 
        wcn36xx_smd_delete_sta(wcn, sta_priv->sta_index);
-       vif_priv->sta = NULL;
        sta_priv->vif = NULL;
        return 0;
 }
@@ -860,7 +940,7 @@ static int wcn36xx_ampdu_action(struct ieee80211_hw *hw,
                    struct ieee80211_ampdu_params *params)
 {
        struct wcn36xx *wcn = hw->priv;
-       struct wcn36xx_sta *sta_priv = NULL;
+       struct wcn36xx_sta *sta_priv = wcn36xx_sta_to_priv(params->sta);
        struct ieee80211_sta *sta = params->sta;
        enum ieee80211_ampdu_mlme_action action = params->action;
        u16 tid = params->tid;
@@ -869,8 +949,6 @@ static int wcn36xx_ampdu_action(struct ieee80211_hw *hw,
        wcn36xx_dbg(WCN36XX_DBG_MAC, "mac ampdu action action %d tid %d\n",
                    action, tid);
 
-       sta_priv = (struct wcn36xx_sta *)sta->drv_priv;
-
        switch (action) {
        case IEEE80211_AMPDU_RX_START:
                sta_priv->tid = tid;
@@ -923,6 +1001,7 @@ static const struct ieee80211_ops wcn36xx_ops = {
        .resume                 = wcn36xx_resume,
 #endif
        .config                 = wcn36xx_config,
+       .prepare_multicast      = wcn36xx_prepare_multicast,
        .configure_filter       = wcn36xx_configure_filter,
        .tx                     = wcn36xx_tx,
        .set_key                = wcn36xx_set_key,
index 28b515c..589fe5f 100644 (file)
@@ -22,7 +22,7 @@ int wcn36xx_pmc_enter_bmps_state(struct wcn36xx *wcn,
                                 struct ieee80211_vif *vif)
 {
        int ret = 0;
-       struct wcn36xx_vif *vif_priv = (struct wcn36xx_vif *)vif->drv_priv;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
        /* TODO: Make sure the TX chain clean */
        ret = wcn36xx_smd_enter_bmps(wcn, vif);
        if (!ret) {
@@ -42,7 +42,7 @@ int wcn36xx_pmc_enter_bmps_state(struct wcn36xx *wcn,
 int wcn36xx_pmc_exit_bmps_state(struct wcn36xx *wcn,
                                struct ieee80211_vif *vif)
 {
-       struct wcn36xx_vif *vif_priv = (struct wcn36xx_vif *)vif->drv_priv;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
 
        if (WCN36XX_BMPS != vif_priv->pw_state) {
                wcn36xx_err("Not in BMPS mode, no need to exit from BMPS mode!\n");
index 96992a2..e8b630c 100644 (file)
@@ -191,16 +191,16 @@ static void wcn36xx_smd_set_sta_params(struct wcn36xx *wcn,
                struct ieee80211_sta *sta,
                struct wcn36xx_hal_config_sta_params *sta_params)
 {
-       struct wcn36xx_vif *priv_vif = (struct wcn36xx_vif *)vif->drv_priv;
-       struct wcn36xx_sta *priv_sta = NULL;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
+       struct wcn36xx_sta *sta_priv = NULL;
        if (vif->type == NL80211_IFTYPE_ADHOC ||
            vif->type == NL80211_IFTYPE_AP ||
            vif->type == NL80211_IFTYPE_MESH_POINT) {
                sta_params->type = 1;
-               sta_params->sta_index = 0xFF;
+               sta_params->sta_index = WCN36XX_HAL_STA_INVALID_IDX;
        } else {
                sta_params->type = 0;
-               sta_params->sta_index = 1;
+               sta_params->sta_index = vif_priv->self_sta_index;
        }
 
        sta_params->listen_interval = WCN36XX_LISTEN_INTERVAL(wcn);
@@ -215,7 +215,7 @@ static void wcn36xx_smd_set_sta_params(struct wcn36xx *wcn,
        else
                memcpy(&sta_params->bssid, vif->addr, ETH_ALEN);
 
-       sta_params->encrypt_type = priv_vif->encrypt_type;
+       sta_params->encrypt_type = vif_priv->encrypt_type;
        sta_params->short_preamble_supported = true;
 
        sta_params->rifs_mode = 0;
@@ -224,21 +224,21 @@ static void wcn36xx_smd_set_sta_params(struct wcn36xx *wcn,
        sta_params->uapsd = 0;
        sta_params->mimo_ps = WCN36XX_HAL_HT_MIMO_PS_STATIC;
        sta_params->max_ampdu_duration = 0;
-       sta_params->bssid_index = priv_vif->bss_index;
+       sta_params->bssid_index = vif_priv->bss_index;
        sta_params->p2p = 0;
 
        if (sta) {
-               priv_sta = (struct wcn36xx_sta *)sta->drv_priv;
+               sta_priv = wcn36xx_sta_to_priv(sta);
                if (NL80211_IFTYPE_STATION == vif->type)
                        memcpy(&sta_params->bssid, sta->addr, ETH_ALEN);
                else
                        memcpy(&sta_params->mac, sta->addr, ETH_ALEN);
                sta_params->wmm_enabled = sta->wme;
                sta_params->max_sp_len = sta->max_sp;
-               sta_params->aid = priv_sta->aid;
+               sta_params->aid = sta_priv->aid;
                wcn36xx_smd_set_sta_ht_params(sta, sta_params);
-               memcpy(&sta_params->supported_rates, &priv_sta->supported_rates,
-                       sizeof(priv_sta->supported_rates));
+               memcpy(&sta_params->supported_rates, &sta_priv->supported_rates,
+                       sizeof(sta_priv->supported_rates));
        } else {
                wcn36xx_set_default_rates(&sta_params->supported_rates);
                wcn36xx_smd_set_sta_default_ht_params(sta_params);
@@ -271,6 +271,16 @@ out:
        return ret;
 }
 
+static void init_hal_msg(struct wcn36xx_hal_msg_header *hdr,
+                        enum wcn36xx_hal_host_msg_type msg_type,
+                        size_t msg_size)
+{
+       memset(hdr, 0, msg_size + sizeof(*hdr));
+       hdr->msg_type = msg_type;
+       hdr->msg_version = WCN36XX_HAL_MSG_VERSION0;
+       hdr->len = msg_size + sizeof(*hdr);
+}
+
 #define INIT_HAL_MSG(msg_body, type) \
        do {                                                            \
                memset(&msg_body, 0, sizeof(msg_body));                 \
@@ -302,22 +312,6 @@ static int wcn36xx_smd_rsp_status_check(void *buf, size_t len)
        return 0;
 }
 
-static int wcn36xx_smd_rsp_status_check_v2(struct wcn36xx *wcn, void *buf,
-                                            size_t len)
-{
-       struct wcn36xx_fw_msg_status_rsp_v2 *rsp;
-
-       if (len < sizeof(struct wcn36xx_hal_msg_header) + sizeof(*rsp))
-               return wcn36xx_smd_rsp_status_check(buf, len);
-
-       rsp = buf + sizeof(struct wcn36xx_hal_msg_header);
-
-       if (WCN36XX_FW_MSG_RESULT_SUCCESS != rsp->status)
-               return rsp->status;
-
-       return 0;
-}
-
 int wcn36xx_smd_load_nv(struct wcn36xx *wcn)
 {
        struct nv_data *nv_d;
@@ -726,7 +720,7 @@ static int wcn36xx_smd_add_sta_self_rsp(struct wcn36xx *wcn,
                                        size_t len)
 {
        struct wcn36xx_hal_add_sta_self_rsp_msg *rsp;
-       struct wcn36xx_vif *priv_vif = (struct wcn36xx_vif *)vif->drv_priv;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
 
        if (len < sizeof(*rsp))
                return -EINVAL;
@@ -743,8 +737,8 @@ static int wcn36xx_smd_add_sta_self_rsp(struct wcn36xx *wcn,
                    "hal add sta self status %d self_sta_index %d dpu_index %d\n",
                    rsp->status, rsp->self_sta_index, rsp->dpu_index);
 
-       priv_vif->self_sta_index = rsp->self_sta_index;
-       priv_vif->self_dpu_desc_index = rsp->dpu_index;
+       vif_priv->self_sta_index = rsp->self_sta_index;
+       vif_priv->self_dpu_desc_index = rsp->dpu_index;
 
        return 0;
 }
@@ -949,17 +943,32 @@ static void wcn36xx_smd_convert_sta_to_v1(struct wcn36xx *wcn,
        memcpy(&v1->mac, orig->mac, ETH_ALEN);
        v1->aid = orig->aid;
        v1->type = orig->type;
+       v1->short_preamble_supported = orig->short_preamble_supported;
        v1->listen_interval = orig->listen_interval;
+       v1->wmm_enabled = orig->wmm_enabled;
        v1->ht_capable = orig->ht_capable;
-
+       v1->tx_channel_width_set = orig->tx_channel_width_set;
+       v1->rifs_mode = orig->rifs_mode;
+       v1->lsig_txop_protection = orig->lsig_txop_protection;
        v1->max_ampdu_size = orig->max_ampdu_size;
        v1->max_ampdu_density = orig->max_ampdu_density;
        v1->sgi_40mhz = orig->sgi_40mhz;
        v1->sgi_20Mhz = orig->sgi_20Mhz;
-
+       v1->rmf = orig->rmf;
+       v1->encrypt_type = orig->encrypt_type;
+       v1->action = orig->action;
+       v1->uapsd = orig->uapsd;
+       v1->max_sp_len = orig->max_sp_len;
+       v1->green_field_capable = orig->green_field_capable;
+       v1->mimo_ps = orig->mimo_ps;
+       v1->delayed_ba_support = orig->delayed_ba_support;
+       v1->max_ampdu_duration = orig->max_ampdu_duration;
+       v1->dsss_cck_mode_40mhz = orig->dsss_cck_mode_40mhz;
        memcpy(&v1->supported_rates, &orig->supported_rates,
               sizeof(orig->supported_rates));
        v1->sta_index = orig->sta_index;
+       v1->bssid_index = orig->bssid_index;
+       v1->p2p = orig->p2p;
 }
 
 static int wcn36xx_smd_config_sta_rsp(struct wcn36xx *wcn,
@@ -969,7 +978,7 @@ static int wcn36xx_smd_config_sta_rsp(struct wcn36xx *wcn,
 {
        struct wcn36xx_hal_config_sta_rsp_msg *rsp;
        struct config_sta_rsp_params *params;
-       struct wcn36xx_sta *sta_priv = (struct wcn36xx_sta *)sta->drv_priv;
+       struct wcn36xx_sta *sta_priv = wcn36xx_sta_to_priv(sta);
 
        if (len < sizeof(*rsp))
                return -EINVAL;
@@ -1170,12 +1179,13 @@ static int wcn36xx_smd_config_bss_v1(struct wcn36xx *wcn,
 
 static int wcn36xx_smd_config_bss_rsp(struct wcn36xx *wcn,
                                      struct ieee80211_vif *vif,
+                                     struct ieee80211_sta *sta,
                                      void *buf,
                                      size_t len)
 {
        struct wcn36xx_hal_config_bss_rsp_msg *rsp;
        struct wcn36xx_hal_config_bss_rsp_params *params;
-       struct wcn36xx_vif *priv_vif = (struct wcn36xx_vif *)vif->drv_priv;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
 
        if (len < sizeof(*rsp))
                return -EINVAL;
@@ -1198,14 +1208,15 @@ static int wcn36xx_smd_config_bss_rsp(struct wcn36xx *wcn,
                    params->bss_bcast_sta_idx, params->mac,
                    params->tx_mgmt_power, params->ucast_dpu_signature);
 
-       priv_vif->bss_index = params->bss_index;
+       vif_priv->bss_index = params->bss_index;
 
-       if (priv_vif->sta) {
-               priv_vif->sta->bss_sta_index =  params->bss_sta_index;
-               priv_vif->sta->bss_dpu_desc_index = params->dpu_desc_index;
+       if (sta) {
+               struct wcn36xx_sta *sta_priv = wcn36xx_sta_to_priv(sta);
+               sta_priv->bss_sta_index = params->bss_sta_index;
+               sta_priv->bss_dpu_desc_index = params->dpu_desc_index;
        }
 
-       priv_vif->self_ucast_dpu_sign = params->ucast_dpu_signature;
+       vif_priv->self_ucast_dpu_sign = params->ucast_dpu_signature;
 
        return 0;
 }
@@ -1217,7 +1228,7 @@ int wcn36xx_smd_config_bss(struct wcn36xx *wcn, struct ieee80211_vif *vif,
        struct wcn36xx_hal_config_bss_req_msg msg;
        struct wcn36xx_hal_config_bss_params *bss;
        struct wcn36xx_hal_config_sta_params *sta_params;
-       struct wcn36xx_vif *vif_priv = (struct wcn36xx_vif *)vif->drv_priv;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
        int ret = 0;
 
        mutex_lock(&wcn->hal_mutex);
@@ -1329,6 +1340,7 @@ int wcn36xx_smd_config_bss(struct wcn36xx *wcn, struct ieee80211_vif *vif,
        }
        ret = wcn36xx_smd_config_bss_rsp(wcn,
                                         vif,
+                                        sta,
                                         wcn->hal_buf,
                                         wcn->hal_rsp_len);
        if (ret) {
@@ -1343,13 +1355,13 @@ out:
 int wcn36xx_smd_delete_bss(struct wcn36xx *wcn, struct ieee80211_vif *vif)
 {
        struct wcn36xx_hal_delete_bss_req_msg msg_body;
-       struct wcn36xx_vif *priv_vif = (struct wcn36xx_vif *)vif->drv_priv;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
        int ret = 0;
 
        mutex_lock(&wcn->hal_mutex);
        INIT_HAL_MSG(msg_body, WCN36XX_HAL_DELETE_BSS_REQ);
 
-       msg_body.bss_index = priv_vif->bss_index;
+       msg_body.bss_index = vif_priv->bss_index;
 
        PREPARE_HAL_BUF(wcn->hal_buf, msg_body);
 
@@ -1375,26 +1387,47 @@ int wcn36xx_smd_send_beacon(struct wcn36xx *wcn, struct ieee80211_vif *vif,
                            u16 p2p_off)
 {
        struct wcn36xx_hal_send_beacon_req_msg msg_body;
-       int ret = 0;
+       int ret = 0, pad, pvm_len;
 
        mutex_lock(&wcn->hal_mutex);
        INIT_HAL_MSG(msg_body, WCN36XX_HAL_SEND_BEACON_REQ);
 
-       /* TODO need to find out why this is needed? */
-       msg_body.beacon_length = skb_beacon->len + 6;
+       pvm_len = skb_beacon->data[tim_off + 1] - 3;
+       pad = TIM_MIN_PVM_SIZE - pvm_len;
 
-       if (BEACON_TEMPLATE_SIZE > msg_body.beacon_length) {
-               memcpy(&msg_body.beacon, &skb_beacon->len, sizeof(u32));
-               memcpy(&(msg_body.beacon[4]), skb_beacon->data,
-                      skb_beacon->len);
-       } else {
+       /* Padding is irrelevant to mesh mode since tim_off is always 0. */
+       if (vif->type == NL80211_IFTYPE_MESH_POINT)
+               pad = 0;
+
+       msg_body.beacon_length = skb_beacon->len + pad;
+       /* TODO need to find out why + 6 is needed */
+       msg_body.beacon_length6 = msg_body.beacon_length + 6;
+
+       if (msg_body.beacon_length > BEACON_TEMPLATE_SIZE) {
                wcn36xx_err("Beacon is to big: beacon size=%d\n",
                              msg_body.beacon_length);
                ret = -ENOMEM;
                goto out;
        }
+       memcpy(msg_body.beacon, skb_beacon->data, skb_beacon->len);
        memcpy(msg_body.bssid, vif->addr, ETH_ALEN);
 
+       if (pad > 0) {
+               /*
+                * The wcn36xx FW has a fixed size for the PVM in the TIM. If
+                * it is given a beacon template from mac80211 with a PVM
+                * shorter than the FW expects, it will overwrite the data
+                * after the TIM.
+                */
+               wcn36xx_dbg(WCN36XX_DBG_HAL, "Pad TIM PVM. %d bytes at %d\n",
+                           pad, pvm_len);
+               memmove(&msg_body.beacon[tim_off + 5 + pvm_len + pad],
+                       &msg_body.beacon[tim_off + 5 + pvm_len],
+                       skb_beacon->len - (tim_off + 5 + pvm_len));
+               memset(&msg_body.beacon[tim_off + 5 + pvm_len], 0, pad);
+               msg_body.beacon[tim_off + 1] += pad;
+       }
+
        /* TODO need to find out why this is needed? */
        if (vif->type == NL80211_IFTYPE_MESH_POINT)
                /* mesh beacons don't need this, so push further down */
@@ -1598,8 +1631,7 @@ int wcn36xx_smd_remove_bsskey(struct wcn36xx *wcn,
                wcn36xx_err("Sending hal_remove_bsskey failed\n");
                goto out;
        }
-       ret = wcn36xx_smd_rsp_status_check_v2(wcn, wcn->hal_buf,
-                                             wcn->hal_rsp_len);
+       ret = wcn36xx_smd_rsp_status_check(wcn->hal_buf, wcn->hal_rsp_len);
        if (ret) {
                wcn36xx_err("hal_remove_bsskey response failed err=%d\n", ret);
                goto out;
@@ -1612,7 +1644,7 @@ out:
 int wcn36xx_smd_enter_bmps(struct wcn36xx *wcn, struct ieee80211_vif *vif)
 {
        struct wcn36xx_hal_enter_bmps_req_msg msg_body;
-       struct wcn36xx_vif *vif_priv = (struct wcn36xx_vif *)vif->drv_priv;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
        int ret = 0;
 
        mutex_lock(&wcn->hal_mutex);
@@ -1641,8 +1673,8 @@ out:
 
 int wcn36xx_smd_exit_bmps(struct wcn36xx *wcn, struct ieee80211_vif *vif)
 {
-       struct wcn36xx_hal_enter_bmps_req_msg msg_body;
-       struct wcn36xx_vif *vif_priv = (struct wcn36xx_vif *)vif->drv_priv;
+       struct wcn36xx_hal_exit_bmps_req_msg msg_body;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
        int ret = 0;
 
        mutex_lock(&wcn->hal_mutex);
@@ -1703,7 +1735,7 @@ int wcn36xx_smd_keep_alive_req(struct wcn36xx *wcn,
                               int packet_type)
 {
        struct wcn36xx_hal_keep_alive_req_msg msg_body;
-       struct wcn36xx_vif *vif_priv = (struct wcn36xx_vif *)vif->drv_priv;
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
        int ret = 0;
 
        mutex_lock(&wcn->hal_mutex);
@@ -1944,6 +1976,17 @@ out:
        return ret;
 }
 
+static int wcn36xx_smd_trigger_ba_rsp(void *buf, int len)
+{
+       struct wcn36xx_hal_trigger_ba_rsp_msg *rsp;
+
+       if (len < sizeof(*rsp))
+               return -EINVAL;
+
+       rsp = (struct wcn36xx_hal_trigger_ba_rsp_msg *) buf;
+       return rsp->status;
+}
+
 int wcn36xx_smd_trigger_ba(struct wcn36xx *wcn, u8 sta_index)
 {
        struct wcn36xx_hal_trigger_ba_req_msg msg_body;
@@ -1968,8 +2011,7 @@ int wcn36xx_smd_trigger_ba(struct wcn36xx *wcn, u8 sta_index)
                wcn36xx_err("Sending hal_trigger_ba failed\n");
                goto out;
        }
-       ret = wcn36xx_smd_rsp_status_check_v2(wcn, wcn->hal_buf,
-                                               wcn->hal_rsp_len);
+       ret = wcn36xx_smd_trigger_ba_rsp(wcn->hal_buf, wcn->hal_rsp_len);
        if (ret) {
                wcn36xx_err("hal_trigger_ba response failed err=%d\n", ret);
                goto out;
@@ -2006,9 +2048,7 @@ static int wcn36xx_smd_missed_beacon_ind(struct wcn36xx *wcn,
                list_for_each_entry(tmp, &wcn->vif_list, list) {
                        wcn36xx_dbg(WCN36XX_DBG_HAL, "beacon missed bss_index %d\n",
                                    tmp->bss_index);
-                       vif = container_of((void *)tmp,
-                                                struct ieee80211_vif,
-                                                drv_priv);
+                       vif = wcn36xx_priv_to_vif(tmp);
                        ieee80211_connection_loss(vif);
                }
                return 0;
@@ -2023,9 +2063,7 @@ static int wcn36xx_smd_missed_beacon_ind(struct wcn36xx *wcn,
                if (tmp->bss_index == rsp->bss_index) {
                        wcn36xx_dbg(WCN36XX_DBG_HAL, "beacon missed bss_index %d\n",
                                    rsp->bss_index);
-                       vif = container_of((void *)tmp,
-                                                struct ieee80211_vif,
-                                                drv_priv);
+                       vif = wcn36xx_priv_to_vif(tmp);
                        ieee80211_connection_loss(vif);
                        return 0;
                }
@@ -2041,25 +2079,24 @@ static int wcn36xx_smd_delete_sta_context_ind(struct wcn36xx *wcn,
 {
        struct wcn36xx_hal_delete_sta_context_ind_msg *rsp = buf;
        struct wcn36xx_vif *tmp;
-       struct ieee80211_sta *sta = NULL;
+       struct ieee80211_sta *sta;
 
        if (len != sizeof(*rsp)) {
                wcn36xx_warn("Corrupted delete sta indication\n");
                return -EIO;
        }
 
+       wcn36xx_dbg(WCN36XX_DBG_HAL, "delete station indication %pM index %d\n",
+                   rsp->addr2, rsp->sta_id);
+
        list_for_each_entry(tmp, &wcn->vif_list, list) {
-               if (sta && (tmp->sta->sta_index == rsp->sta_id)) {
-                       sta = container_of((void *)tmp->sta,
-                                                struct ieee80211_sta,
-                                                drv_priv);
-                       wcn36xx_dbg(WCN36XX_DBG_HAL,
-                                   "delete station indication %pM index %d\n",
-                                   rsp->addr2,
-                                   rsp->sta_id);
+               rcu_read_lock();
+               sta = ieee80211_find_sta(wcn36xx_priv_to_vif(tmp), rsp->addr2);
+               if (sta)
                        ieee80211_report_low_ack(sta, 0);
+               rcu_read_unlock();
+               if (sta)
                        return 0;
-               }
        }
 
        wcn36xx_warn("STA with addr %pM and index %d not found\n",
@@ -2100,6 +2137,46 @@ out:
        mutex_unlock(&wcn->hal_mutex);
        return ret;
 }
+
+int wcn36xx_smd_set_mc_list(struct wcn36xx *wcn,
+                           struct ieee80211_vif *vif,
+                           struct wcn36xx_hal_rcv_flt_mc_addr_list_type *fp)
+{
+       struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif);
+       struct wcn36xx_hal_rcv_flt_pkt_set_mc_list_req_msg *msg_body = NULL;
+       int ret = 0;
+
+       mutex_lock(&wcn->hal_mutex);
+
+       msg_body = (struct wcn36xx_hal_rcv_flt_pkt_set_mc_list_req_msg *)
+                  wcn->hal_buf;
+       init_hal_msg(&msg_body->header, WCN36XX_HAL_8023_MULTICAST_LIST_REQ,
+                    sizeof(msg_body->mc_addr_list));
+
+       /* An empty list means all mc traffic will be received */
+       if (fp)
+               memcpy(&msg_body->mc_addr_list, fp,
+                      sizeof(msg_body->mc_addr_list));
+       else
+               msg_body->mc_addr_list.mc_addr_count = 0;
+
+       msg_body->mc_addr_list.bss_index = vif_priv->bss_index;
+
+       ret = wcn36xx_smd_send_and_wait(wcn, msg_body->header.len);
+       if (ret) {
+               wcn36xx_err("Sending HAL_8023_MULTICAST_LIST failed\n");
+               goto out;
+       }
+       ret = wcn36xx_smd_rsp_status_check(wcn->hal_buf, wcn->hal_rsp_len);
+       if (ret) {
+               wcn36xx_err("HAL_8023_MULTICAST_LIST rsp failed err=%d\n", ret);
+               goto out;
+       }
+out:
+       mutex_unlock(&wcn->hal_mutex);
+       return ret;
+}
+
 static void wcn36xx_smd_rsp_process(struct wcn36xx *wcn, void *buf, size_t len)
 {
        struct wcn36xx_hal_msg_header *msg_header = buf;
@@ -2141,6 +2218,7 @@ static void wcn36xx_smd_rsp_process(struct wcn36xx *wcn, void *buf, size_t len)
        case WCN36XX_HAL_UPDATE_SCAN_PARAM_RSP:
        case WCN36XX_HAL_CH_SWITCH_RSP:
        case WCN36XX_HAL_FEATURE_CAPS_EXCHANGE_RSP:
+       case WCN36XX_HAL_8023_MULTICAST_LIST_RSP:
                memcpy(wcn->hal_buf, buf, len);
                wcn->hal_rsp_len = len;
                complete(&wcn->hal_rsp_compl);
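
For reference, a minimal standalone sketch of the TIM padding done in
wcn36xx_smd_send_beacon() above, assuming a firmware PVM size of 6 bytes
(the real TIM_MIN_PVM_SIZE constant lives in the driver headers and is not
shown in this diff; mesh interfaces skip the padding entirely since their
tim_off is 0). The TIM element is id, len, DTIM count, DTIM period, bitmap
control, then the partial virtual bitmap, so the PVM starts at tim_off + 5
and its length is the element length minus 3:

    #include <stdint.h>
    #include <string.h>

    #define TIM_MIN_PVM_SIZE 6      /* assumed value, see note above */

    /* Pad the TIM partial virtual bitmap up to the firmware's fixed
     * size; the caller must provide a buffer with room for the pad. */
    static size_t pad_tim_pvm(uint8_t *beacon, size_t len, size_t tim_off)
    {
            int pvm_len = beacon[tim_off + 1] - 3;
            int pad = TIM_MIN_PVM_SIZE - pvm_len;

            if (pad <= 0)
                    return len;     /* PVM already large enough */

            /* shift the beacon tail to make room for the padding */
            memmove(&beacon[tim_off + 5 + pvm_len + pad],
                    &beacon[tim_off + 5 + pvm_len],
                    len - (tim_off + 5 + pvm_len));
            /* zero-fill the new PVM tail and grow the element length */
            memset(&beacon[tim_off + 5 + pvm_len], 0, pad);
            beacon[tim_off + 1] += pad;
            return len + pad;
    }
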
index 8361f9e..d74d781 100644 (file)
@@ -44,15 +44,6 @@ struct wcn36xx_fw_msg_status_rsp {
        u32     status;
 } __packed;
 
-/* wcn3620 returns this for tigger_ba */
-
-struct wcn36xx_fw_msg_status_rsp_v2 {
-       u8      bss_id[6];
-       u32     status __packed;
-       u16     count_following_candidates __packed;
-       /* candidate list follows */
-};
-
 struct wcn36xx_hal_ind_msg {
        struct list_head list;
        u8 *msg;
@@ -136,4 +127,7 @@ int wcn36xx_smd_del_ba(struct wcn36xx *wcn, u16 tid, u8 sta_index);
 int wcn36xx_smd_trigger_ba(struct wcn36xx *wcn, u8 sta_index);
 
 int wcn36xx_smd_update_cfg(struct wcn36xx *wcn, u32 cfg_id, u32 value);
+int wcn36xx_smd_set_mc_list(struct wcn36xx *wcn,
+                           struct ieee80211_vif *vif,
+                           struct wcn36xx_hal_rcv_flt_mc_addr_list_type *fp);
 #endif /* _SMD_H_ */
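
The new init_hal_msg() helper exists for messages that are built directly in
wcn->hal_buf instead of an on-stack msg_body: the multicast list request above
does this, since the address list is too large to copy around. A sketch of the
pattern with simplified stand-in types (the real header fields are the
wcn36xx_hal_* enums):

    #include <stdint.h>
    #include <string.h>

    /* simplified stand-in for struct wcn36xx_hal_msg_header */
    struct hal_msg_header {
            uint32_t msg_type;
            uint32_t msg_version;
            uint32_t len;           /* header + body, in bytes */
    };

    /* mirror of init_hal_msg(): zero the whole message in place and
     * fill in the header; hdr must point into a buffer large enough
     * to hold the header plus body_size bytes */
    static void init_msg(struct hal_msg_header *hdr, uint32_t type,
                         size_t body_size)
    {
            memset(hdr, 0, sizeof(*hdr) + body_size);
            hdr->msg_type = type;
            hdr->msg_version = 0;   /* WCN36XX_HAL_MSG_VERSION0 */
            hdr->len = sizeof(*hdr) + body_size;
    }
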
index 6c47a73..1f34c2e 100644 (file)
@@ -102,9 +102,7 @@ static inline struct wcn36xx_vif *get_vif_by_addr(struct wcn36xx *wcn,
        struct wcn36xx_vif *vif_priv = NULL;
        struct ieee80211_vif *vif = NULL;
        list_for_each_entry(vif_priv, &wcn->vif_list, list) {
-                       vif = container_of((void *)vif_priv,
-                                  struct ieee80211_vif,
-                                  drv_priv);
+                       vif = wcn36xx_priv_to_vif(vif_priv);
                        if (memcmp(vif->addr, addr, ETH_ALEN) == 0)
                                return vif_priv;
        }
@@ -167,9 +165,7 @@ static void wcn36xx_set_tx_data(struct wcn36xx_tx_bd *bd,
         */
        if (sta_priv) {
                __vif_priv = sta_priv->vif;
-               vif = container_of((void *)__vif_priv,
-                                  struct ieee80211_vif,
-                                  drv_priv);
+               vif = wcn36xx_priv_to_vif(__vif_priv);
 
                bd->dpu_sign = sta_priv->ucast_dpu_sign;
                if (vif->type == NL80211_IFTYPE_STATION) {
index 7b41e83..7433d67 100644 (file)
@@ -125,10 +125,10 @@ struct wcn36xx_platform_ctrl_ops {
  */
 struct wcn36xx_vif {
        struct list_head list;
-       struct wcn36xx_sta *sta;
        u8 dtim_period;
        enum ani_ed_type encrypt_type;
        bool is_joining;
+       bool sta_assoc;
        struct wcn36xx_hal_mac_ssid ssid;
 
        /* Power management */
@@ -263,4 +263,22 @@ struct ieee80211_sta *wcn36xx_priv_to_sta(struct wcn36xx_sta *sta_priv)
        return container_of((void *)sta_priv, struct ieee80211_sta, drv_priv);
 }
 
+static inline
+struct wcn36xx_vif *wcn36xx_vif_to_priv(struct ieee80211_vif *vif)
+{
+       return (struct wcn36xx_vif *) vif->drv_priv;
+}
+
+static inline
+struct ieee80211_vif *wcn36xx_priv_to_vif(struct wcn36xx_vif *vif_priv)
+{
+       return container_of((void *) vif_priv, struct ieee80211_vif, drv_priv);
+}
+
+static inline
+struct wcn36xx_sta *wcn36xx_sta_to_priv(struct ieee80211_sta *sta)
+{
+       return (struct wcn36xx_sta *)sta->drv_priv;
+}
+
 #endif /* _WCN36XX_H_ */
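
The three new inline helpers above replace open-coded casts and container_of()
calls scattered through the driver. The underlying pattern: mac80211 reserves a
driver-private area at the tail of its public objects (vif->drv_priv,
sta->drv_priv), so converting is a cast one way and container_of() the other.
A toy round trip with simplified stand-in structs, assuming the usual
container_of definition (mac80211 handles the real drv_priv sizing and
alignment):

    #include <stddef.h>

    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct ieee80211_vif {          /* stand-in for the mac80211 struct */
            int type;
            char drv_priv[64];      /* driver-private tail area */
    };

    struct wcn36xx_vif {
            int bss_index;
    };

    static struct wcn36xx_vif *vif_to_priv(struct ieee80211_vif *vif)
    {
            return (struct wcn36xx_vif *)vif->drv_priv;
    }

    static struct ieee80211_vif *priv_to_vif(struct wcn36xx_vif *priv)
    {
            return container_of((void *)priv, struct ieee80211_vif,
                                drv_priv);
    }
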
index 6af658e..d1bc51f 100644 (file)
@@ -321,7 +321,8 @@ brcmf_proto_bcdc_hdrpull(struct brcmf_pub *drvr, bool do_fws,
        if (pktbuf->len == 0)
                return -ENODATA;
 
-       *ifp = tmp_if;
+       if (ifp != NULL)
+               *ifp = tmp_if;
        return 0;
 }
 
@@ -351,6 +352,12 @@ brcmf_proto_bcdc_add_tdls_peer(struct brcmf_pub *drvr, int ifidx,
 {
 }
 
+static void brcmf_proto_bcdc_rxreorder(struct brcmf_if *ifp,
+                                      struct sk_buff *skb)
+{
+       brcmf_fws_rxreorder(ifp, skb);
+}
+
 int brcmf_proto_bcdc_attach(struct brcmf_pub *drvr)
 {
        struct brcmf_bcdc *bcdc;
@@ -372,6 +379,7 @@ int brcmf_proto_bcdc_attach(struct brcmf_pub *drvr)
        drvr->proto->configure_addr_mode = brcmf_proto_bcdc_configure_addr_mode;
        drvr->proto->delete_peer = brcmf_proto_bcdc_delete_peer;
        drvr->proto->add_tdls_peer = brcmf_proto_bcdc_add_tdls_peer;
+       drvr->proto->rxreorder = brcmf_proto_bcdc_rxreorder;
        drvr->proto->pd = bcdc;
 
        drvr->hdrlen += BCDC_HEADER_LEN + BRCMF_PROT_FW_SIGNAL_MAX_TXBYTES;
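
The rxreorder hook added to the bcdc attach above makes AMPDU rx reordering a
per-protocol decision behind the brcmf_proto vtable: bcdc forwards to
fwsignal, while msgbuf (later in this series) installs a no-op stub. A toy
version of that indirection:

    #include <stdio.h>

    struct skb;                          /* opaque packet stand-in */

    struct proto {                       /* toy struct brcmf_proto */
            void (*rxreorder)(struct skb *pkt);
    };

    static void bcdc_rxreorder(struct skb *pkt)
    {
            /* the real hook one-line forwards to brcmf_fws_rxreorder() */
            printf("bcdc: reorder %p via fwsignal\n", (void *)pkt);
    }

    static void msgbuf_rxreorder(struct skb *pkt)
    {
            (void)pkt;                   /* msgbuf: no AMPDU reordering */
    }

    int main(void)
    {
            struct proto bcdc = { .rxreorder = bcdc_rxreorder };
            struct proto msgbuf = { .rxreorder = msgbuf_rxreorder };

            bcdc.rxreorder(NULL);
            msgbuf.rxreorder(NULL);
            return 0;
    }
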
index 8e02a47..2b24654 100644 (file)
@@ -216,7 +216,9 @@ bool brcmf_c_prec_enq(struct device *dev, struct pktq *q, struct sk_buff *pkt,
                      int prec);
 
 /* Receive frame for delivery to OS.  Callee disposes of rxp. */
-void brcmf_rx_frame(struct device *dev, struct sk_buff *rxp);
+void brcmf_rx_frame(struct device *dev, struct sk_buff *rxp, bool handle_event);
+/* Receive async event packet from firmware. Callee disposes of rxp. */
+void brcmf_rx_event(struct device *dev, struct sk_buff *rxp);
 
 /* Indication from bus module regarding presence/insertion of dongle. */
 int brcmf_attach(struct device *dev, struct brcmf_mp_device *settings);
index 9a567e2..d0631b6 100644 (file)
@@ -250,6 +250,20 @@ struct parsed_vndr_ies {
        struct parsed_vndr_ie_info ie_info[VNDR_IE_PARSE_LIMIT];
 };
 
+static u8 nl80211_band_to_fwil(enum nl80211_band band)
+{
+       switch (band) {
+       case NL80211_BAND_2GHZ:
+               return WLC_BAND_2G;
+       case NL80211_BAND_5GHZ:
+               return WLC_BAND_5G;
+       default:
+               WARN_ON(1);
+               break;
+       }
+       return 0;
+}
+
 static u16 chandef_to_chanspec(struct brcmu_d11inf *d11inf,
                               struct cfg80211_chan_def *ch)
 {
@@ -1796,6 +1810,50 @@ enum nl80211_auth_type brcmf_war_auth_type(struct brcmf_if *ifp,
        return type;
 }
 
+static void brcmf_set_join_pref(struct brcmf_if *ifp,
+                               struct cfg80211_bss_selection *bss_select)
+{
+       struct brcmf_join_pref_params join_pref_params[2];
+       enum nl80211_band band;
+       int err, i = 0;
+
+       join_pref_params[i].len = 2;
+       join_pref_params[i].rssi_gain = 0;
+
+       if (bss_select->behaviour != NL80211_BSS_SELECT_ATTR_BAND_PREF)
+               brcmf_fil_cmd_int_set(ifp, BRCMF_C_SET_ASSOC_PREFER, WLC_BAND_AUTO);
+
+       switch (bss_select->behaviour) {
+       case __NL80211_BSS_SELECT_ATTR_INVALID:
+               brcmf_c_set_joinpref_default(ifp);
+               return;
+       case NL80211_BSS_SELECT_ATTR_BAND_PREF:
+               join_pref_params[i].type = BRCMF_JOIN_PREF_BAND;
+               band = bss_select->param.band_pref;
+               join_pref_params[i].band = nl80211_band_to_fwil(band);
+               i++;
+               break;
+       case NL80211_BSS_SELECT_ATTR_RSSI_ADJUST:
+               join_pref_params[i].type = BRCMF_JOIN_PREF_RSSI_DELTA;
+               band = bss_select->param.adjust.band;
+               join_pref_params[i].band = nl80211_band_to_fwil(band);
+               join_pref_params[i].rssi_gain = bss_select->param.adjust.delta;
+               i++;
+               break;
+       case NL80211_BSS_SELECT_ATTR_RSSI:
+       default:
+               break;
+       }
+       join_pref_params[i].type = BRCMF_JOIN_PREF_RSSI;
+       join_pref_params[i].len = 2;
+       join_pref_params[i].rssi_gain = 0;
+       join_pref_params[i].band = 0;
+       err = brcmf_fil_iovar_data_set(ifp, "join_pref", join_pref_params,
+                                      sizeof(join_pref_params));
+       if (err)
+               brcmf_err("Set join_pref error (%d)\n", err);
+}
+
 static s32
 brcmf_cfg80211_connect(struct wiphy *wiphy, struct net_device *ndev,
                       struct cfg80211_connect_params *sme)
@@ -1952,6 +2010,8 @@ brcmf_cfg80211_connect(struct wiphy *wiphy, struct net_device *ndev,
                ext_join_params->scan_le.nprobes = cpu_to_le32(-1);
        }
 
+       brcmf_set_join_pref(ifp, &sme->bss_select);
+
        err  = brcmf_fil_bsscfg_data_set(ifp, "join", ext_join_params,
                                         join_params_size);
        kfree(ext_join_params);
@@ -3608,7 +3668,8 @@ static void brcmf_configure_wowl(struct brcmf_cfg80211_info *cfg,
        if (!test_bit(BRCMF_VIF_STATUS_CONNECTED, &ifp->vif->sme_state))
                wowl_config |= BRCMF_WOWL_UNASSOC;
 
-       brcmf_fil_iovar_data_set(ifp, "wowl_wakeind", "clear", strlen("clear"));
+       brcmf_fil_iovar_data_set(ifp, "wowl_wakeind", "clear",
+                                sizeof(struct brcmf_wowl_wakeind_le));
        brcmf_fil_iovar_int_set(ifp, "wowl", wowl_config);
        brcmf_fil_iovar_int_set(ifp, "wowl_activate", 1);
        brcmf_bus_wowl_config(cfg->pub->bus_if, true);
@@ -6279,6 +6340,10 @@ static int brcmf_setup_wiphy(struct wiphy *wiphy, struct brcmf_if *ifp)
        wiphy->n_cipher_suites = ARRAY_SIZE(brcmf_cipher_suites);
        if (!brcmf_feat_is_enabled(ifp, BRCMF_FEAT_MFP))
                wiphy->n_cipher_suites--;
+       wiphy->bss_select_support = BIT(NL80211_BSS_SELECT_ATTR_RSSI) |
+                                   BIT(NL80211_BSS_SELECT_ATTR_BAND_PREF) |
+                                   BIT(NL80211_BSS_SELECT_ATTR_RSSI_ADJUST);
+
        wiphy->flags |= WIPHY_FLAG_PS_ON_BY_DEFAULT |
                        WIPHY_FLAG_OFFCHAN_TX |
                        WIPHY_FLAG_HAS_REMAIN_ON_CHANNEL;
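
Note how brcmf_set_join_pref() above always terminates the list with a
plain-RSSI entry, so the firmware still ranks candidates by signal strength
within whatever constraint was selected. A sketch of the two-entry layout for
the band-preference case; the JOIN_PREF_* values below are placeholders, since
the real BRCMF_JOIN_PREF_* constants live in fwil_types.h and are not shown in
this diff:

    #include <stdint.h>

    enum {                      /* placeholder values, see note above */
            JOIN_PREF_RSSI = 0,
            JOIN_PREF_BAND = 1,
    };

    struct join_pref {          /* mirrors struct brcmf_join_pref_params */
            uint8_t type;
            uint8_t len;        /* payload bytes after this field: 2 */
            uint8_t rssi_gain;
            uint8_t band;
    };

    /* band preference entry first, plain-RSSI terminator second */
    static void fill_band_pref(struct join_pref p[2], uint8_t fw_band)
    {
            p[0] = (struct join_pref){ JOIN_PREF_BAND, 2, 0, fw_band };
            p[1] = (struct join_pref){ JOIN_PREF_RSSI, 2, 0, 0 };
    }
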
index 9e909e3..3e15d64 100644 (file)
@@ -38,7 +38,7 @@ const u8 ALLFFMAC[ETH_ALEN] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
 #define BRCMF_DEFAULT_SCAN_CHANNEL_TIME        40
 #define BRCMF_DEFAULT_SCAN_UNASSOC_TIME        40
 
-/* boost value for RSSI_DELTA in preferred join selection */
+/* default boost value for RSSI_DELTA in preferred join selection */
 #define BRCMF_JOIN_PREF_RSSI_BOOST     8
 
 #define BRCMF_DEFAULT_TXGLOM_SIZE      32  /* max tx frames in glom chain */
@@ -83,11 +83,31 @@ MODULE_PARM_DESC(ignore_probe_fail, "always succeed probe for debugging");
 static struct brcmfmac_platform_data *brcmfmac_pdata;
 struct brcmf_mp_global_t brcmf_mp_global;
 
+void brcmf_c_set_joinpref_default(struct brcmf_if *ifp)
+{
+       struct brcmf_join_pref_params join_pref_params[2];
+       int err;
+
+       /* Setup join_pref to select target by RSSI (boost on 5GHz) */
+       join_pref_params[0].type = BRCMF_JOIN_PREF_RSSI_DELTA;
+       join_pref_params[0].len = 2;
+       join_pref_params[0].rssi_gain = BRCMF_JOIN_PREF_RSSI_BOOST;
+       join_pref_params[0].band = WLC_BAND_5G;
+
+       join_pref_params[1].type = BRCMF_JOIN_PREF_RSSI;
+       join_pref_params[1].len = 2;
+       join_pref_params[1].rssi_gain = 0;
+       join_pref_params[1].band = 0;
+       err = brcmf_fil_iovar_data_set(ifp, "join_pref", join_pref_params,
+                                      sizeof(join_pref_params));
+       if (err)
+               brcmf_err("Set join_pref error (%d)\n", err);
+}
+
 int brcmf_c_preinit_dcmds(struct brcmf_if *ifp)
 {
        s8 eventmask[BRCMF_EVENTING_MASK_LEN];
        u8 buf[BRCMF_DCMD_SMLEN];
-       struct brcmf_join_pref_params join_pref_params[2];
        struct brcmf_rev_info_le revinfo;
        struct brcmf_rev_info *ri;
        char *ptr;
@@ -154,19 +174,7 @@ int brcmf_c_preinit_dcmds(struct brcmf_if *ifp)
                goto done;
        }
 
-       /* Setup join_pref to select target by RSSI(with boost on 5GHz) */
-       join_pref_params[0].type = BRCMF_JOIN_PREF_RSSI_DELTA;
-       join_pref_params[0].len = 2;
-       join_pref_params[0].rssi_gain = BRCMF_JOIN_PREF_RSSI_BOOST;
-       join_pref_params[0].band = WLC_BAND_5G;
-       join_pref_params[1].type = BRCMF_JOIN_PREF_RSSI;
-       join_pref_params[1].len = 2;
-       join_pref_params[1].rssi_gain = 0;
-       join_pref_params[1].band = 0;
-       err = brcmf_fil_iovar_data_set(ifp, "join_pref", join_pref_params,
-                                      sizeof(join_pref_params));
-       if (err)
-               brcmf_err("Set join_pref error (%d)\n", err);
+       brcmf_c_set_joinpref_default(ifp);
 
        /* Setup event_msgs, enable E_IF */
        err = brcmf_fil_iovar_data_get(ifp, "event_msgs", eventmask,
index ff825cd..b590499 100644 (file)
 
 #define MAX_WAIT_FOR_8021X_TX                  msecs_to_jiffies(950)
 
-/* AMPDU rx reordering definitions */
-#define BRCMF_RXREORDER_FLOWID_OFFSET          0
-#define BRCMF_RXREORDER_MAXIDX_OFFSET          2
-#define BRCMF_RXREORDER_FLAGS_OFFSET           4
-#define BRCMF_RXREORDER_CURIDX_OFFSET          6
-#define BRCMF_RXREORDER_EXPIDX_OFFSET          8
-
-#define BRCMF_RXREORDER_DEL_FLOW               0x01
-#define BRCMF_RXREORDER_FLUSH_ALL              0x02
-#define BRCMF_RXREORDER_CURIDX_VALID           0x04
-#define BRCMF_RXREORDER_EXPIDX_VALID           0x08
-#define BRCMF_RXREORDER_NEW_HOLE               0x10
-
 #define BRCMF_BSSIDX_INVALID                   -1
 
 char *brcmf_ifname(struct brcmf_if *ifp)
@@ -313,15 +300,9 @@ void brcmf_txflowblock(struct device *dev, bool state)
 
 void brcmf_netif_rx(struct brcmf_if *ifp, struct sk_buff *skb)
 {
-       skb->dev = ifp->ndev;
-       skb->protocol = eth_type_trans(skb, skb->dev);
-
        if (skb->pkt_type == PACKET_MULTICAST)
                ifp->stats.multicast++;
 
-       /* Process special event packets */
-       brcmf_fweh_process_skb(ifp->drvr, skb);
-
        if (!(ifp->ndev->flags & IFF_UP)) {
                brcmu_pkt_buf_free_skb(skb);
                return;
@@ -341,226 +322,60 @@ void brcmf_netif_rx(struct brcmf_if *ifp, struct sk_buff *skb)
                netif_rx_ni(skb);
 }
 
-static void brcmf_rxreorder_get_skb_list(struct brcmf_ampdu_rx_reorder *rfi,
-                                        u8 start, u8 end,
-                                        struct sk_buff_head *skb_list)
+static int brcmf_rx_hdrpull(struct brcmf_pub *drvr, struct sk_buff *skb,
+                           struct brcmf_if **ifp)
 {
-       /* initialize return list */
-       __skb_queue_head_init(skb_list);
+       int ret;
 
-       if (rfi->pend_pkts == 0) {
-               brcmf_dbg(INFO, "no packets in reorder queue\n");
-               return;
+       /* process and remove protocol-specific header */
+       ret = brcmf_proto_hdrpull(drvr, true, skb, ifp);
+
+       if (ret || !(*ifp) || !(*ifp)->ndev) {
+               if (ret != -ENODATA && *ifp)
+                       (*ifp)->stats.rx_errors++;
+               brcmu_pkt_buf_free_skb(skb);
+               return -ENODATA;
        }
 
-       do {
-               if (rfi->pktslots[start]) {
-                       __skb_queue_tail(skb_list, rfi->pktslots[start]);
-                       rfi->pktslots[start] = NULL;
-               }
-               start++;
-               if (start > rfi->max_idx)
-                       start = 0;
-       } while (start != end);
-       rfi->pend_pkts -= skb_queue_len(skb_list);
+       skb->protocol = eth_type_trans(skb, (*ifp)->ndev);
+       return 0;
 }
 
-static void brcmf_rxreorder_process_info(struct brcmf_if *ifp, u8 *reorder_data,
-                                        struct sk_buff *pkt)
+void brcmf_rx_frame(struct device *dev, struct sk_buff *skb, bool handle_event)
 {
-       u8 flow_id, max_idx, cur_idx, exp_idx, end_idx;
-       struct brcmf_ampdu_rx_reorder *rfi;
-       struct sk_buff_head reorder_list;
-       struct sk_buff *pnext;
-       u8 flags;
-       u32 buf_size;
-
-       flow_id = reorder_data[BRCMF_RXREORDER_FLOWID_OFFSET];
-       flags = reorder_data[BRCMF_RXREORDER_FLAGS_OFFSET];
-
-       /* validate flags and flow id */
-       if (flags == 0xFF) {
-               brcmf_err("invalid flags...so ignore this packet\n");
-               brcmf_netif_rx(ifp, pkt);
-               return;
-       }
-
-       rfi = ifp->drvr->reorder_flows[flow_id];
-       if (flags & BRCMF_RXREORDER_DEL_FLOW) {
-               brcmf_dbg(INFO, "flow-%d: delete\n",
-                         flow_id);
+       struct brcmf_if *ifp;
+       struct brcmf_bus *bus_if = dev_get_drvdata(dev);
+       struct brcmf_pub *drvr = bus_if->drvr;
 
-               if (rfi == NULL) {
-                       brcmf_dbg(INFO, "received flags to cleanup, but no flow (%d) yet\n",
-                                 flow_id);
-                       brcmf_netif_rx(ifp, pkt);
-                       return;
-               }
+       brcmf_dbg(DATA, "Enter: %s: rxp=%p\n", dev_name(dev), skb);
 
-               brcmf_rxreorder_get_skb_list(rfi, rfi->exp_idx, rfi->exp_idx,
-                                            &reorder_list);
-               /* add the last packet */
-               __skb_queue_tail(&reorder_list, pkt);
-               kfree(rfi);
-               ifp->drvr->reorder_flows[flow_id] = NULL;
-               goto netif_rx;
-       }
-       /* from here on we need a flow reorder instance */
-       if (rfi == NULL) {
-               buf_size = sizeof(*rfi);
-               max_idx = reorder_data[BRCMF_RXREORDER_MAXIDX_OFFSET];
-
-               buf_size += (max_idx + 1) * sizeof(pkt);
-
-               /* allocate space for flow reorder info */
-               brcmf_dbg(INFO, "flow-%d: start, maxidx %d\n",
-                         flow_id, max_idx);
-               rfi = kzalloc(buf_size, GFP_ATOMIC);
-               if (rfi == NULL) {
-                       brcmf_err("failed to alloc buffer\n");
-                       brcmf_netif_rx(ifp, pkt);
-                       return;
-               }
+       if (brcmf_rx_hdrpull(drvr, skb, &ifp))
+               return;
 
-               ifp->drvr->reorder_flows[flow_id] = rfi;
-               rfi->pktslots = (struct sk_buff **)(rfi+1);
-               rfi->max_idx = max_idx;
-       }
-       if (flags & BRCMF_RXREORDER_NEW_HOLE)  {
-               if (rfi->pend_pkts) {
-                       brcmf_rxreorder_get_skb_list(rfi, rfi->exp_idx,
-                                                    rfi->exp_idx,
-                                                    &reorder_list);
-                       WARN_ON(rfi->pend_pkts);
-               } else {
-                       __skb_queue_head_init(&reorder_list);
-               }
-               rfi->cur_idx = reorder_data[BRCMF_RXREORDER_CURIDX_OFFSET];
-               rfi->exp_idx = reorder_data[BRCMF_RXREORDER_EXPIDX_OFFSET];
-               rfi->max_idx = reorder_data[BRCMF_RXREORDER_MAXIDX_OFFSET];
-               rfi->pktslots[rfi->cur_idx] = pkt;
-               rfi->pend_pkts++;
-               brcmf_dbg(DATA, "flow-%d: new hole %d (%d), pending %d\n",
-                         flow_id, rfi->cur_idx, rfi->exp_idx, rfi->pend_pkts);
-       } else if (flags & BRCMF_RXREORDER_CURIDX_VALID) {
-               cur_idx = reorder_data[BRCMF_RXREORDER_CURIDX_OFFSET];
-               exp_idx = reorder_data[BRCMF_RXREORDER_EXPIDX_OFFSET];
-
-               if ((exp_idx == rfi->exp_idx) && (cur_idx != rfi->exp_idx)) {
-                       /* still in the current hole */
-                       /* enqueue the current on the buffer chain */
-                       if (rfi->pktslots[cur_idx] != NULL) {
-                               brcmf_dbg(INFO, "HOLE: ERROR buffer pending..free it\n");
-                               brcmu_pkt_buf_free_skb(rfi->pktslots[cur_idx]);
-                               rfi->pktslots[cur_idx] = NULL;
-                       }
-                       rfi->pktslots[cur_idx] = pkt;
-                       rfi->pend_pkts++;
-                       rfi->cur_idx = cur_idx;
-                       brcmf_dbg(DATA, "flow-%d: store pkt %d (%d), pending %d\n",
-                                 flow_id, cur_idx, exp_idx, rfi->pend_pkts);
-
-                       /* can return now as there is no reorder
-                        * list to process.
-                        */
-                       return;
-               }
-               if (rfi->exp_idx == cur_idx) {
-                       if (rfi->pktslots[cur_idx] != NULL) {
-                               brcmf_dbg(INFO, "error buffer pending..free it\n");
-                               brcmu_pkt_buf_free_skb(rfi->pktslots[cur_idx]);
-                               rfi->pktslots[cur_idx] = NULL;
-                       }
-                       rfi->pktslots[cur_idx] = pkt;
-                       rfi->pend_pkts++;
-
-                       /* got the expected one. flush from current to expected
-                        * and update expected
-                        */
-                       brcmf_dbg(DATA, "flow-%d: expected %d (%d), pending %d\n",
-                                 flow_id, cur_idx, exp_idx, rfi->pend_pkts);
-
-                       rfi->cur_idx = cur_idx;
-                       rfi->exp_idx = exp_idx;
-
-                       brcmf_rxreorder_get_skb_list(rfi, cur_idx, exp_idx,
-                                                    &reorder_list);
-                       brcmf_dbg(DATA, "flow-%d: freeing buffers %d, pending %d\n",
-                                 flow_id, skb_queue_len(&reorder_list),
-                                 rfi->pend_pkts);
-               } else {
-                       u8 end_idx;
-
-                       brcmf_dbg(DATA, "flow-%d (0x%x): both moved, old %d/%d, new %d/%d\n",
-                                 flow_id, flags, rfi->cur_idx, rfi->exp_idx,
-                                 cur_idx, exp_idx);
-                       if (flags & BRCMF_RXREORDER_FLUSH_ALL)
-                               end_idx = rfi->exp_idx;
-                       else
-                               end_idx = exp_idx;
-
-                       /* flush pkts first */
-                       brcmf_rxreorder_get_skb_list(rfi, rfi->exp_idx, end_idx,
-                                                    &reorder_list);
-
-                       if (exp_idx == ((cur_idx + 1) % (rfi->max_idx + 1))) {
-                               __skb_queue_tail(&reorder_list, pkt);
-                       } else {
-                               rfi->pktslots[cur_idx] = pkt;
-                               rfi->pend_pkts++;
-                       }
-                       rfi->exp_idx = exp_idx;
-                       rfi->cur_idx = cur_idx;
-               }
+       if (brcmf_proto_is_reorder_skb(skb)) {
+               brcmf_proto_rxreorder(ifp, skb);
        } else {
-               /* explicity window move updating the expected index */
-               exp_idx = reorder_data[BRCMF_RXREORDER_EXPIDX_OFFSET];
-
-               brcmf_dbg(DATA, "flow-%d (0x%x): change expected: %d -> %d\n",
-                         flow_id, flags, rfi->exp_idx, exp_idx);
-               if (flags & BRCMF_RXREORDER_FLUSH_ALL)
-                       end_idx =  rfi->exp_idx;
-               else
-                       end_idx =  exp_idx;
+               /* Process special event packets */
+               if (handle_event)
+                       brcmf_fweh_process_skb(ifp->drvr, skb);
 
-               brcmf_rxreorder_get_skb_list(rfi, rfi->exp_idx, end_idx,
-                                            &reorder_list);
-               __skb_queue_tail(&reorder_list, pkt);
-               /* set the new expected idx */
-               rfi->exp_idx = exp_idx;
-       }
-netif_rx:
-       skb_queue_walk_safe(&reorder_list, pkt, pnext) {
-               __skb_unlink(pkt, &reorder_list);
-               brcmf_netif_rx(ifp, pkt);
+               brcmf_netif_rx(ifp, skb);
        }
 }
 
-void brcmf_rx_frame(struct device *dev, struct sk_buff *skb)
+void brcmf_rx_event(struct device *dev, struct sk_buff *skb)
 {
        struct brcmf_if *ifp;
        struct brcmf_bus *bus_if = dev_get_drvdata(dev);
        struct brcmf_pub *drvr = bus_if->drvr;
-       struct brcmf_skb_reorder_data *rd;
-       int ret;
 
-       brcmf_dbg(DATA, "Enter: %s: rxp=%p\n", dev_name(dev), skb);
-
-       /* process and remove protocol-specific header */
-       ret = brcmf_proto_hdrpull(drvr, true, skb, &ifp);
+       brcmf_dbg(EVENT, "Enter: %s: rxp=%p\n", dev_name(dev), skb);
 
-       if (ret || !ifp || !ifp->ndev) {
-               if (ret != -ENODATA && ifp)
-                       ifp->stats.rx_errors++;
-               brcmu_pkt_buf_free_skb(skb);
+       if (brcmf_rx_hdrpull(drvr, skb, &ifp))
                return;
-       }
 
-       rd = (struct brcmf_skb_reorder_data *)skb->cb;
-       if (rd->reorder)
-               brcmf_rxreorder_process_info(ifp, rd->reorder, skb);
-       else
-               brcmf_netif_rx(ifp, skb);
+       brcmf_fweh_process_skb(ifp->drvr, skb);
+       brcmu_pkt_buf_free_skb(skb);
 }
 
 void brcmf_txfinalize(struct brcmf_if *ifp, struct sk_buff *txp, bool success)
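
The split above gives the bus layer two entry points: brcmf_rx_frame() for the
data path, where the handle_event flag says whether firmware events may arrive
inline, and brcmf_rx_event() for buses with a dedicated event channel, which
always consumes the packet after processing. A toy sketch of that contract:

    #include <stdbool.h>
    #include <stdio.h>

    struct pkt { bool is_event; bool needs_reorder; };

    /* data path: reorder if the protocol marked the packet, handle
     * inline events only when the bus asked for it, then deliver */
    static void rx_frame(struct pkt *p, bool handle_event)
    {
            if (p->needs_reorder) {
                    puts("hand off to protocol rx reorder");
                    return;
            }
            if (handle_event && p->is_event)
                    puts("process firmware event");
            puts("deliver to netif");
    }

    /* event path: process and free, never touches netif */
    static void rx_event(struct pkt *p)
    {
            if (p->is_event)
                    puts("process firmware event");
            puts("free packet");
    }
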
index 7bdb6fe..647d3cc 100644 (file)
@@ -208,10 +208,6 @@ struct brcmf_if {
        u8 ipv6addr_idx;
 };
 
-struct brcmf_skb_reorder_data {
-       u8 *reorder;
-};
-
 int brcmf_netdev_wait_pend8021x(struct brcmf_if *ifp);
 
 /* Return pointer to interface name */
@@ -227,6 +223,7 @@ void brcmf_txflowblock_if(struct brcmf_if *ifp,
 void brcmf_txfinalize(struct brcmf_if *ifp, struct sk_buff *txp, bool success);
 void brcmf_netif_rx(struct brcmf_if *ifp, struct sk_buff *skb);
 void brcmf_net_setcarrier(struct brcmf_if *ifp, bool on);
+void brcmf_c_set_joinpref_default(struct brcmf_if *ifp);
 int __init brcmf_core_init(void);
 void __exit brcmf_core_exit(void);
 
index 7269056..c7c1e99 100644 (file)
@@ -29,6 +29,7 @@
 #define BRCMF_FW_MAX_NVRAM_SIZE                        64000
 #define BRCMF_FW_NVRAM_DEVPATH_LEN             19      /* devpath0=pcie/1/4/ */
 #define BRCMF_FW_NVRAM_PCIEDEV_LEN             10      /* pcie/1/4/ + \0 */
+#define BRCMF_FW_DEFAULT_BOARDREV              "boardrev=0xff"
 
 enum nvram_parser_state {
        IDLE,
@@ -51,6 +52,7 @@ enum nvram_parser_state {
  * @entry: start position of key,value entry.
  * @multi_dev_v1: detect pcie multi device v1 (compressed).
  * @multi_dev_v2: detect pcie multi device v2.
+ * @boardrev_found: nvram contains boardrev information.
  */
 struct nvram_parser {
        enum nvram_parser_state state;
@@ -63,6 +65,7 @@ struct nvram_parser {
        u32 entry;
        bool multi_dev_v1;
        bool multi_dev_v2;
+       bool boardrev_found;
 };
 
 /**
@@ -125,6 +128,8 @@ static enum nvram_parser_state brcmf_nvram_handle_key(struct nvram_parser *nvp)
                        nvp->multi_dev_v1 = true;
                if (strncmp(&nvp->data[nvp->entry], "pcie/", 5) == 0)
                        nvp->multi_dev_v2 = true;
+               if (strncmp(&nvp->data[nvp->entry], "boardrev", 8) == 0)
+                       nvp->boardrev_found = true;
        } else if (!is_nvram_char(c) || c == ' ') {
                brcmf_dbg(INFO, "warning: ln=%d:col=%d: '=' expected, skip invalid key entry\n",
                          nvp->line, nvp->column);
@@ -284,6 +289,8 @@ static void brcmf_fw_strip_multi_v1(struct nvram_parser *nvp, u16 domain_nr,
        while (i < nvp->nvram_len) {
                if ((nvp->nvram[i] - '0' == id) && (nvp->nvram[i + 1] == ':')) {
                        i += 2;
+                       if (strncmp(&nvp->nvram[i], "boardrev", 8) == 0)
+                               nvp->boardrev_found = true;
                        while (nvp->nvram[i] != 0) {
                                nvram[j] = nvp->nvram[i];
                                i++;
@@ -335,6 +342,8 @@ static void brcmf_fw_strip_multi_v2(struct nvram_parser *nvp, u16 domain_nr,
        while (i < nvp->nvram_len - len) {
                if (strncmp(&nvp->nvram[i], prefix, len) == 0) {
                        i += len;
+                       if (strncmp(&nvp->nvram[i], "boardrev", 8) == 0)
+                               nvp->boardrev_found = true;
                        while (nvp->nvram[i] != 0) {
                                nvram[j] = nvp->nvram[i];
                                i++;
@@ -356,6 +365,18 @@ fail:
        nvp->nvram_len = 0;
 }
 
+static void brcmf_fw_add_defaults(struct nvram_parser *nvp)
+{
+       if (nvp->boardrev_found)
+               return;
+
+       memcpy(&nvp->nvram[nvp->nvram_len], &BRCMF_FW_DEFAULT_BOARDREV,
+              strlen(BRCMF_FW_DEFAULT_BOARDREV));
+       nvp->nvram_len += strlen(BRCMF_FW_DEFAULT_BOARDREV);
+       nvp->nvram[nvp->nvram_len] = '\0';
+       nvp->nvram_len++;
+}
+
 /* brcmf_nvram_strip: Takes a buffer of "<var>=<value>\n" lines read from a file
  * and ending in a NUL. Removes carriage returns, empty lines, comment lines,
  * and converts newlines to NULs. Shortens buffer as needed and pads with NULs.
@@ -377,16 +398,21 @@ static void *brcmf_fw_nvram_strip(const u8 *data, size_t data_len,
                if (nvp.state == END)
                        break;
        }
-       if (nvp.multi_dev_v1)
+       if (nvp.multi_dev_v1) {
+               nvp.boardrev_found = false;
                brcmf_fw_strip_multi_v1(&nvp, domain_nr, bus_nr);
-       else if (nvp.multi_dev_v2)
+       } else if (nvp.multi_dev_v2) {
+               nvp.boardrev_found = false;
                brcmf_fw_strip_multi_v2(&nvp, domain_nr, bus_nr);
+       }
 
        if (nvp.nvram_len == 0) {
                kfree(nvp.nvram);
                return NULL;
        }
 
+       brcmf_fw_add_defaults(&nvp);
+
        pad = nvp.nvram_len;
        *new_length = roundup(nvp.nvram_len + 1, 4);
        while (pad != *new_length) {
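
brcmf_fw_add_defaults() above covers firmwares that will not start without a
boardrev entry. The stripped nvram image is a sequence of NUL-terminated
"key=value" entries, so appending the default is a copy plus terminator; a
sketch, assuming the caller reserved room in the buffer for the extra entry:

    #include <string.h>

    #define DEFAULT_BOARDREV "boardrev=0xff"

    static size_t add_boardrev_default(char *nvram, size_t len,
                                       int boardrev_found)
    {
            if (boardrev_found)
                    return len;

            memcpy(&nvram[len], DEFAULT_BOARDREV,
                   strlen(DEFAULT_BOARDREV));
            len += strlen(DEFAULT_BOARDREV);
            nvram[len++] = '\0';    /* entries are NUL-terminated */
            return len;
    }
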
index d414fbb..b390561 100644 (file)
@@ -371,6 +371,7 @@ int brcmf_fweh_activate_events(struct brcmf_if *ifp)
        int i, err;
        s8 eventmask[BRCMF_EVENTING_MASK_LEN];
 
+       memset(eventmask, 0, sizeof(eventmask));
        for (i = 0; i < BRCMF_E_LAST; i++) {
                if (ifp->drvr->fweh.evt_handler[i]) {
                        brcmf_dbg(EVENT, "enable event %s\n",
index 6b72df1..3a9a76d 100644 (file)
@@ -78,6 +78,7 @@
 #define BRCMF_C_SET_SCAN_CHANNEL_TIME          185
 #define BRCMF_C_SET_SCAN_UNASSOC_TIME          187
 #define BRCMF_C_SCB_DEAUTHENTICATE_FOR_REASON  201
+#define BRCMF_C_SET_ASSOC_PREFER               205
 #define BRCMF_C_GET_VALID_CHANNELS             217
 #define BRCMF_C_GET_KEY_PRIMARY                        235
 #define BRCMF_C_SET_KEY_PRIMARY                        236
index f82c9ab..5b30922 100644 (file)
@@ -92,6 +92,19 @@ enum brcmf_fws_tlv_len {
 };
 #undef BRCMF_FWS_TLV_DEF
 
+/* AMPDU rx reordering definitions */
+#define BRCMF_RXREORDER_FLOWID_OFFSET          0
+#define BRCMF_RXREORDER_MAXIDX_OFFSET          2
+#define BRCMF_RXREORDER_FLAGS_OFFSET           4
+#define BRCMF_RXREORDER_CURIDX_OFFSET          6
+#define BRCMF_RXREORDER_EXPIDX_OFFSET          8
+
+#define BRCMF_RXREORDER_DEL_FLOW               0x01
+#define BRCMF_RXREORDER_FLUSH_ALL              0x02
+#define BRCMF_RXREORDER_CURIDX_VALID           0x04
+#define BRCMF_RXREORDER_EXPIDX_VALID           0x08
+#define BRCMF_RXREORDER_NEW_HOLE               0x10
+
 #ifdef DEBUG
 /*
  * brcmf_fws_tlv_names - array of tlv names.
@@ -1614,6 +1627,202 @@ static int brcmf_fws_notify_bcmc_credit_support(struct brcmf_if *ifp,
        return 0;
 }
 
+static void brcmf_rxreorder_get_skb_list(struct brcmf_ampdu_rx_reorder *rfi,
+                                        u8 start, u8 end,
+                                        struct sk_buff_head *skb_list)
+{
+       /* initialize return list */
+       __skb_queue_head_init(skb_list);
+
+       if (rfi->pend_pkts == 0) {
+               brcmf_dbg(INFO, "no packets in reorder queue\n");
+               return;
+       }
+
+       do {
+               if (rfi->pktslots[start]) {
+                       __skb_queue_tail(skb_list, rfi->pktslots[start]);
+                       rfi->pktslots[start] = NULL;
+               }
+               start++;
+               if (start > rfi->max_idx)
+                       start = 0;
+       } while (start != end);
+       rfi->pend_pkts -= skb_queue_len(skb_list);
+}
+
+void brcmf_fws_rxreorder(struct brcmf_if *ifp, struct sk_buff *pkt)
+{
+       u8 *reorder_data;
+       u8 flow_id, max_idx, cur_idx, exp_idx, end_idx;
+       struct brcmf_ampdu_rx_reorder *rfi;
+       struct sk_buff_head reorder_list;
+       struct sk_buff *pnext;
+       u8 flags;
+       u32 buf_size;
+
+       reorder_data = ((struct brcmf_skb_reorder_data *)pkt->cb)->reorder;
+       flow_id = reorder_data[BRCMF_RXREORDER_FLOWID_OFFSET];
+       flags = reorder_data[BRCMF_RXREORDER_FLAGS_OFFSET];
+
+       /* validate flags and flow id */
+       if (flags == 0xFF) {
+               brcmf_err("invalid flags...so ignore this packet\n");
+               brcmf_netif_rx(ifp, pkt);
+               return;
+       }
+
+       rfi = ifp->drvr->reorder_flows[flow_id];
+       if (flags & BRCMF_RXREORDER_DEL_FLOW) {
+               brcmf_dbg(INFO, "flow-%d: delete\n",
+                         flow_id);
+
+               if (rfi == NULL) {
+                       brcmf_dbg(INFO, "received flags to cleanup, but no flow (%d) yet\n",
+                                 flow_id);
+                       brcmf_netif_rx(ifp, pkt);
+                       return;
+               }
+
+               brcmf_rxreorder_get_skb_list(rfi, rfi->exp_idx, rfi->exp_idx,
+                                            &reorder_list);
+               /* add the last packet */
+               __skb_queue_tail(&reorder_list, pkt);
+               kfree(rfi);
+               ifp->drvr->reorder_flows[flow_id] = NULL;
+               goto netif_rx;
+       }
+       /* from here on we need a flow reorder instance */
+       if (rfi == NULL) {
+               buf_size = sizeof(*rfi);
+               max_idx = reorder_data[BRCMF_RXREORDER_MAXIDX_OFFSET];
+
+               buf_size += (max_idx + 1) * sizeof(pkt);
+
+               /* allocate space for flow reorder info */
+               brcmf_dbg(INFO, "flow-%d: start, maxidx %d\n",
+                         flow_id, max_idx);
+               rfi = kzalloc(buf_size, GFP_ATOMIC);
+               if (rfi == NULL) {
+                       brcmf_err("failed to alloc buffer\n");
+                       brcmf_netif_rx(ifp, pkt);
+                       return;
+               }
+
+               ifp->drvr->reorder_flows[flow_id] = rfi;
+               rfi->pktslots = (struct sk_buff **)(rfi + 1);
+               rfi->max_idx = max_idx;
+       }
+       if (flags & BRCMF_RXREORDER_NEW_HOLE)  {
+               if (rfi->pend_pkts) {
+                       brcmf_rxreorder_get_skb_list(rfi, rfi->exp_idx,
+                                                    rfi->exp_idx,
+                                                    &reorder_list);
+                       WARN_ON(rfi->pend_pkts);
+               } else {
+                       __skb_queue_head_init(&reorder_list);
+               }
+               rfi->cur_idx = reorder_data[BRCMF_RXREORDER_CURIDX_OFFSET];
+               rfi->exp_idx = reorder_data[BRCMF_RXREORDER_EXPIDX_OFFSET];
+               rfi->max_idx = reorder_data[BRCMF_RXREORDER_MAXIDX_OFFSET];
+               rfi->pktslots[rfi->cur_idx] = pkt;
+               rfi->pend_pkts++;
+               brcmf_dbg(DATA, "flow-%d: new hole %d (%d), pending %d\n",
+                         flow_id, rfi->cur_idx, rfi->exp_idx, rfi->pend_pkts);
+       } else if (flags & BRCMF_RXREORDER_CURIDX_VALID) {
+               cur_idx = reorder_data[BRCMF_RXREORDER_CURIDX_OFFSET];
+               exp_idx = reorder_data[BRCMF_RXREORDER_EXPIDX_OFFSET];
+
+               if ((exp_idx == rfi->exp_idx) && (cur_idx != rfi->exp_idx)) {
+                       /* still in the current hole */
+                       /* enqueue the current on the buffer chain */
+                       if (rfi->pktslots[cur_idx] != NULL) {
+                               brcmf_dbg(INFO, "HOLE: ERROR buffer pending..free it\n");
+                               brcmu_pkt_buf_free_skb(rfi->pktslots[cur_idx]);
+                               rfi->pktslots[cur_idx] = NULL;
+                       }
+                       rfi->pktslots[cur_idx] = pkt;
+                       rfi->pend_pkts++;
+                       rfi->cur_idx = cur_idx;
+                       brcmf_dbg(DATA, "flow-%d: store pkt %d (%d), pending %d\n",
+                                 flow_id, cur_idx, exp_idx, rfi->pend_pkts);
+
+                       /* can return now as there is no reorder
+                        * list to process.
+                        */
+                       return;
+               }
+               if (rfi->exp_idx == cur_idx) {
+                       if (rfi->pktslots[cur_idx] != NULL) {
+                               brcmf_dbg(INFO, "error buffer pending..free it\n");
+                               brcmu_pkt_buf_free_skb(rfi->pktslots[cur_idx]);
+                               rfi->pktslots[cur_idx] = NULL;
+                       }
+                       rfi->pktslots[cur_idx] = pkt;
+                       rfi->pend_pkts++;
+
+                       /* got the expected one. flush from current to expected
+                        * and update expected
+                        */
+                       brcmf_dbg(DATA, "flow-%d: expected %d (%d), pending %d\n",
+                                 flow_id, cur_idx, exp_idx, rfi->pend_pkts);
+
+                       rfi->cur_idx = cur_idx;
+                       rfi->exp_idx = exp_idx;
+
+                       brcmf_rxreorder_get_skb_list(rfi, cur_idx, exp_idx,
+                                                    &reorder_list);
+                       brcmf_dbg(DATA, "flow-%d: freeing buffers %d, pending %d\n",
+                                 flow_id, skb_queue_len(&reorder_list),
+                                 rfi->pend_pkts);
+               } else {
+                       u8 end_idx;
+
+                       brcmf_dbg(DATA, "flow-%d (0x%x): both moved, old %d/%d, new %d/%d\n",
+                                 flow_id, flags, rfi->cur_idx, rfi->exp_idx,
+                                 cur_idx, exp_idx);
+                       if (flags & BRCMF_RXREORDER_FLUSH_ALL)
+                               end_idx = rfi->exp_idx;
+                       else
+                               end_idx = exp_idx;
+
+                       /* flush pkts first */
+                       brcmf_rxreorder_get_skb_list(rfi, rfi->exp_idx, end_idx,
+                                                    &reorder_list);
+
+                       if (exp_idx == ((cur_idx + 1) % (rfi->max_idx + 1))) {
+                               __skb_queue_tail(&reorder_list, pkt);
+                       } else {
+                               rfi->pktslots[cur_idx] = pkt;
+                               rfi->pend_pkts++;
+                       }
+                       rfi->exp_idx = exp_idx;
+                       rfi->cur_idx = cur_idx;
+               }
+       } else {
+               /* explicit window move updating the expected index */
+               exp_idx = reorder_data[BRCMF_RXREORDER_EXPIDX_OFFSET];
+
+               brcmf_dbg(DATA, "flow-%d (0x%x): change expected: %d -> %d\n",
+                         flow_id, flags, rfi->exp_idx, exp_idx);
+               if (flags & BRCMF_RXREORDER_FLUSH_ALL)
+                       end_idx =  rfi->exp_idx;
+               else
+                       end_idx =  exp_idx;
+
+               brcmf_rxreorder_get_skb_list(rfi, rfi->exp_idx, end_idx,
+                                            &reorder_list);
+               __skb_queue_tail(&reorder_list, pkt);
+               /* set the new expected idx */
+               rfi->exp_idx = exp_idx;
+       }
+netif_rx:
+       skb_queue_walk_safe(&reorder_list, pkt, pnext) {
+               __skb_unlink(pkt, &reorder_list);
+               brcmf_netif_rx(ifp, pkt);
+       }
+}
+
 void brcmf_fws_hdrpull(struct brcmf_if *ifp, s16 siglen, struct sk_buff *skb)
 {
        struct brcmf_skb_reorder_data *rd;
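
The reordering machinery moved above buffers out-of-order AMPDU packets in a
circular array of slots indexed by sequence number. Its core primitive is the
flush in brcmf_rxreorder_get_skb_list(): walk from start up to (but not
including) end, emitting and clearing every occupied slot, wrapping at
max_idx; because the loop is a do-while, calling it with start == end drains
the whole ring. A toy version with plain pointers instead of sk_buff lists:

    struct reorder_ring {
            void *slots[64];
            unsigned int max_idx;   /* highest valid index */
            unsigned int pend;      /* buffered packet count */
    };

    /* flush [start, end) into out[], wrapping at max_idx; with
     * start == end the do-while drains the entire ring */
    static unsigned int ring_flush(struct reorder_ring *r,
                                   unsigned int start, unsigned int end,
                                   void **out, unsigned int out_cap)
    {
            unsigned int n = 0;

            if (r->pend == 0)
                    return 0;

            do {
                    if (r->slots[start] && n < out_cap) {
                            out[n++] = r->slots[start];
                            r->slots[start] = NULL;
                    }
                    if (++start > r->max_idx)
                            start = 0;
            } while (start != end);

            r->pend -= n;
            return n;
    }
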
index a36bac1..ef0ad85 100644 (file)
@@ -29,5 +29,6 @@ void brcmf_fws_add_interface(struct brcmf_if *ifp);
 void brcmf_fws_del_interface(struct brcmf_if *ifp);
 void brcmf_fws_bustxfail(struct brcmf_fws_info *fws, struct sk_buff *skb);
 void brcmf_fws_bus_blocked(struct brcmf_pub *drvr, bool flow_blocked);
+void brcmf_fws_rxreorder(struct brcmf_if *ifp, struct sk_buff *skb);
 
 #endif /* FWSIGNAL_H_ */
index 9229667..68f1ce0 100644 (file)
@@ -20,6 +20,7 @@
 
 #include <linux/types.h>
 #include <linux/netdevice.h>
+#include <linux/etherdevice.h>
 
 #include <brcmu_utils.h>
 #include <brcmu_wifi.h>
@@ -526,6 +527,9 @@ static int brcmf_msgbuf_hdrpull(struct brcmf_pub *drvr, bool do_fws,
        return -ENODEV;
 }
 
+static void brcmf_msgbuf_rxreorder(struct brcmf_if *ifp, struct sk_buff *skb)
+{
+}
 
 static void
 brcmf_msgbuf_remove_flowring(struct brcmf_msgbuf *msgbuf, u16 flowid)
@@ -1075,28 +1079,13 @@ static void brcmf_msgbuf_rxbuf_event_post(struct brcmf_msgbuf *msgbuf)
 }
 
 
-static void
-brcmf_msgbuf_rx_skb(struct brcmf_msgbuf *msgbuf, struct sk_buff *skb,
-                   u8 ifidx)
-{
-       struct brcmf_if *ifp;
-
-       ifp = brcmf_get_ifp(msgbuf->drvr, ifidx);
-       if (!ifp || !ifp->ndev) {
-               brcmf_err("Received pkt for invalid ifidx %d\n", ifidx);
-               brcmu_pkt_buf_free_skb(skb);
-               return;
-       }
-       brcmf_netif_rx(ifp, skb);
-}
-
-
 static void brcmf_msgbuf_process_event(struct brcmf_msgbuf *msgbuf, void *buf)
 {
        struct msgbuf_rx_event *event;
        u32 idx;
        u16 buflen;
        struct sk_buff *skb;
+       struct brcmf_if *ifp;
 
        event = (struct msgbuf_rx_event *)buf;
        idx = le32_to_cpu(event->msg.request_id);
@@ -1116,7 +1105,19 @@ static void brcmf_msgbuf_process_event(struct brcmf_msgbuf *msgbuf, void *buf)
 
        skb_trim(skb, buflen);
 
-       brcmf_msgbuf_rx_skb(msgbuf, skb, event->msg.ifidx);
+       ifp = brcmf_get_ifp(msgbuf->drvr, event->msg.ifidx);
+       if (!ifp || !ifp->ndev) {
+               brcmf_err("Received pkt for invalid ifidx %d\n",
+                         event->msg.ifidx);
+               goto exit;
+       }
+
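+       /* event data is copied out by fweh; the skb is always freed at exit */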
+       skb->protocol = eth_type_trans(skb, ifp->ndev);
+
+       brcmf_fweh_process_skb(ifp->drvr, skb);
+
+exit:
+       brcmu_pkt_buf_free_skb(skb);
 }
 
 
@@ -1128,6 +1129,7 @@ brcmf_msgbuf_process_rx_complete(struct brcmf_msgbuf *msgbuf, void *buf)
        u16 data_offset;
        u16 buflen;
        u32 idx;
+       struct brcmf_if *ifp;
 
        brcmf_msgbuf_update_rxbufpost_count(msgbuf, 1);
 
@@ -1148,7 +1150,14 @@ brcmf_msgbuf_process_rx_complete(struct brcmf_msgbuf *msgbuf, void *buf)
 
        skb_trim(skb, buflen);
 
-       brcmf_msgbuf_rx_skb(msgbuf, skb, rx_complete->msg.ifidx);
+       ifp = brcmf_get_ifp(msgbuf->drvr, rx_complete->msg.ifidx);
+       if (!ifp || !ifp->ndev) {
+               brcmf_err("Received pkt for invalid ifidx %d\n",
+                         rx_complete->msg.ifidx);
+               brcmu_pkt_buf_free_skb(skb);
+               return;
+       }
+       brcmf_netif_rx(ifp, skb);
 }
 
 
@@ -1460,6 +1469,7 @@ int brcmf_proto_msgbuf_attach(struct brcmf_pub *drvr)
        drvr->proto->configure_addr_mode = brcmf_msgbuf_configure_addr_mode;
        drvr->proto->delete_peer = brcmf_msgbuf_delete_peer;
        drvr->proto->add_tdls_peer = brcmf_msgbuf_add_tdls_peer;
+       drvr->proto->rxreorder = brcmf_msgbuf_rxreorder;
        drvr->proto->pd = msgbuf;
 
        init_waitqueue_head(&msgbuf->ioctl_resp_wait);
index c2ac91d..a70cda6 100644 (file)
@@ -1266,7 +1266,7 @@ static void
 brcmf_p2p_stop_wait_next_action_frame(struct brcmf_cfg80211_info *cfg)
 {
        struct brcmf_p2p_info *p2p = &cfg->p2p;
-       struct brcmf_if *ifp = cfg->escan_info.ifp;
+       struct brcmf_if *ifp = p2p->bss_idx[P2PAPI_BSSCFG_PRIMARY].vif->ifp;
 
        if (test_bit(BRCMF_P2P_STATUS_SENDING_ACT_FRAME, &p2p->status) &&
            (test_bit(BRCMF_P2P_STATUS_ACTION_TX_COMPLETED, &p2p->status) ||
index d55119d..57531f4 100644 (file)
@@ -22,6 +22,9 @@ enum proto_addr_mode {
        ADDR_DIRECT
 };
 
+struct brcmf_skb_reorder_data {
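+       /* firmware-supplied rx reorder metadata for this packet, if any */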
+       u8 *reorder;
+};
 
 struct brcmf_proto {
        int (*hdrpull)(struct brcmf_pub *drvr, bool do_fws,
@@ -38,6 +41,7 @@ struct brcmf_proto {
                            u8 peer[ETH_ALEN]);
        void (*add_tdls_peer)(struct brcmf_pub *drvr, int ifidx,
                              u8 peer[ETH_ALEN]);
+       void (*rxreorder)(struct brcmf_if *ifp, struct sk_buff *skb);
        void *pd;
 };
 
@@ -91,6 +95,18 @@ brcmf_proto_add_tdls_peer(struct brcmf_pub *drvr, int ifidx, u8 peer[ETH_ALEN])
 {
        drvr->proto->add_tdls_peer(drvr, ifidx, peer);
 }
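+
+/* Check whether firmware attached rx-reorder metadata to this skb's cb. */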
+static inline bool brcmf_proto_is_reorder_skb(struct sk_buff *skb)
+{
+       struct brcmf_skb_reorder_data *rd;
+
+       rd = (struct brcmf_skb_reorder_data *)skb->cb;
+       return !!rd->reorder;
+}
 
+static inline void
+brcmf_proto_rxreorder(struct brcmf_if *ifp, struct sk_buff *skb)
+{
+       ifp->drvr->proto->rxreorder(ifp, skb);
+}
 
 #endif /* BRCMFMAC_PROTO_H */
index 48d7467..4252fa8 100644 (file)
@@ -1294,6 +1294,17 @@ static inline u8 brcmf_sdio_getdatoffset(u8 *swheader)
        return (u8)((hdrvalue & SDPCM_DOFFSET_MASK) >> SDPCM_DOFFSET_SHIFT);
 }
 
+static inline bool brcmf_sdio_fromevntchan(u8 *swheader)
+{
+       u32 hdrvalue;
+       u8 ret;
+
+       hdrvalue = *(u32 *)swheader;
+       ret = (u8)((hdrvalue & SDPCM_CHANNEL_MASK) >> SDPCM_CHANNEL_SHIFT);
+
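+       /* firmware events arrive on the SDPCM event channel */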
+       return (ret == SDPCM_EVENT_CHANNEL);
+}
+
 static int brcmf_sdio_hdparse(struct brcmf_sdio *bus, u8 *header,
                              struct brcmf_sdio_hdrinfo *rd,
                              enum brcmf_sdio_frmtype type)
@@ -1641,7 +1652,11 @@ static u8 brcmf_sdio_rxglom(struct brcmf_sdio *bus, u8 rxseq)
                                           pfirst->len, pfirst->next,
                                           pfirst->prev);
                        skb_unlink(pfirst, &bus->glom);
-                       brcmf_rx_frame(bus->sdiodev->dev, pfirst);
+                       if (brcmf_sdio_fromevntchan(pfirst->data))
+                               brcmf_rx_event(bus->sdiodev->dev, pfirst);
+                       else
+                               brcmf_rx_frame(bus->sdiodev->dev, pfirst,
+                                              false);
                        bus->sdcnt.rxglompkts++;
                }
 
@@ -1967,18 +1982,19 @@ static uint brcmf_sdio_readframes(struct brcmf_sdio *bus, uint maxframes)
                __skb_trim(pkt, rd->len);
                skb_pull(pkt, rd->dat_offset);
 
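+               /* free empty frames, route events, pass data frames up the stack */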
+               if (pkt->len == 0)
+                       brcmu_pkt_buf_free_skb(pkt);
+               else if (rd->channel == SDPCM_EVENT_CHANNEL)
+                       brcmf_rx_event(bus->sdiodev->dev, pkt);
+               else
+                       brcmf_rx_frame(bus->sdiodev->dev, pkt,
+                                      false);
+
                /* prepare the descriptor for the next read */
                rd->len = rd->len_nxtfrm << 4;
                rd->len_nxtfrm = 0;
                /* treat all packet as event if we don't know */
                rd->channel = SDPCM_EVENT_CHANNEL;
-
-               if (pkt->len == 0) {
-                       brcmu_pkt_buf_free_skb(pkt);
-                       continue;
-               }
-
-               brcmf_rx_frame(bus->sdiodev->dev, pkt);
        }
 
        rxcount = maxframes - rxleft;
index 869eb82..98b15a9 100644 (file)
@@ -514,7 +514,7 @@ static void brcmf_usb_rx_complete(struct urb *urb)
 
        if (devinfo->bus_pub.state == BRCMFMAC_USB_STATE_UP) {
                skb_put(skb, urb->actual_length);
-               brcmf_rx_frame(devinfo->dev, skb);
+               brcmf_rx_frame(devinfo->dev, skb, true);
                brcmf_usb_rx_refill(devinfo, req);
        } else {
                brcmu_pkt_buf_free_skb(skb);
@@ -1368,7 +1368,9 @@ brcmf_usb_probe(struct usb_interface *intf, const struct usb_device_id *id)
 
        devinfo->ifnum = desc->bInterfaceNumber;
 
-       if (usb->speed == USB_SPEED_SUPER)
+       if (usb->speed == USB_SPEED_SUPER_PLUS)
+               brcmf_dbg(USB, "Broadcom super speed plus USB WLAN interface detected\n");
+       else if (usb->speed == USB_SPEED_SUPER)
                brcmf_dbg(USB, "Broadcom super speed USB WLAN interface detected\n");
        else if (usb->speed == USB_SPEED_HIGH)
                brcmf_dbg(USB, "Broadcom high speed USB WLAN interface detected\n");
index 4bd9e2b..55456f7 100644 (file)
@@ -2026,7 +2026,7 @@ static int mpi_send_packet (struct net_device *dev)
        } else {
                *payloadLen = cpu_to_le16(len - sizeof(etherHead));
 
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
 
                /* copy data into airo dma buffer */
                memcpy(sendbuf, buffer, len);
@@ -2107,7 +2107,7 @@ static void airo_end_xmit(struct net_device *dev) {
 
        i = 0;
        if ( status == SUCCESS ) {
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
                for (; i < MAX_FIDS / 2 && (priv->fids[i] & 0xffff0000); i++);
        } else {
                priv->fids[fid] &= 0xffff;
@@ -2174,7 +2174,7 @@ static void airo_end_xmit11(struct net_device *dev) {
 
        i = MAX_FIDS / 2;
        if ( status == SUCCESS ) {
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
                for (; i < MAX_FIDS && (priv->fids[i] & 0xffff0000); i++);
        } else {
                priv->fids[fid] &= 0xffff;
index e1e42ed..bfa542c 100644 (file)
@@ -2954,7 +2954,7 @@ static int __ipw2100_tx_process(struct ipw2100_priv *priv)
 
                /* A packet was processed by the hardware, so update the
                 * watchdog */
-               priv->net_dev->trans_start = jiffies;
+               netif_trans_update(priv->net_dev);
 
                break;
 
index dac13cf..5adb7ce 100644 (file)
@@ -7707,7 +7707,7 @@ static void ipw_handle_data_packet(struct ipw_priv *priv,
        struct ipw_rx_packet *pkt = (struct ipw_rx_packet *)rxb->skb->data;
 
        /* We received data from the HW, so stop the watchdog */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        /* We only process data packets if the
         * interface is open */
@@ -7770,7 +7770,7 @@ static void ipw_handle_data_packet_monitor(struct ipw_priv *priv,
        unsigned short len = le16_to_cpu(pkt->u.frame.length);
 
        /* We received data from the HW, so stop the watchdog */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        /* We only process data packets if the
         * interface is open */
@@ -7952,7 +7952,7 @@ static void ipw_handle_promiscuous_rx(struct ipw_priv *priv,
                return;
 
        /* We received data from the HW, so stop the watchdog */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        if (unlikely((len + IPW_RX_FRAME_SIZE) > skb_tailroom(rxb->skb))) {
                dev->stats.rx_errors++;
index a9212a1..2d20556 100644 (file)
@@ -89,7 +89,7 @@
 #define IWL8260_SMEM_OFFSET            0x400000
 #define IWL8260_SMEM_LEN               0x68000
 
-#define IWL8000_FW_PRE "iwlwifi-8000"
+#define IWL8000_FW_PRE "iwlwifi-8000C-"
 #define IWL8000_MODULE_FIRMWARE(api) \
-       IWL8000_FW_PRE "-" __stringify(api) ".ucode"
+       IWL8000_FW_PRE __stringify(api) ".ucode"
 
index 48e8737..ff18b06 100644 (file)
@@ -240,19 +240,6 @@ static int iwl_request_firmware(struct iwl_drv *drv, bool first)
        snprintf(drv->firmware_name, sizeof(drv->firmware_name), "%s%s.ucode",
                 name_pre, tag);
 
-       /*
-        * Starting 8000B - FW name format has changed. This overwrites the
-        * previous name and uses the new format.
-        */
-       if (drv->trans->cfg->device_family == IWL_DEVICE_FAMILY_8000) {
-               char rev_step = 'A' + CSR_HW_REV_STEP(drv->trans->hw_rev);
-
-               if (rev_step != 'A')
-                       snprintf(drv->firmware_name,
-                                sizeof(drv->firmware_name), "%s%c-%s.ucode",
-                                name_pre, rev_step, tag);
-       }
-
        IWL_DEBUG_INFO(drv, "attempting to load firmware %s'%s'\n",
                       (drv->fw_index == UCODE_EXPERIMENTAL_INDEX)
                                ? "EXPERIMENTAL " : "",
@@ -1280,7 +1267,10 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
        if (err)
                goto try_again;
 
-       api_ver = drv->fw.ucode_ver;
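+       /* without the new-version TLV, only the major API field of ucode_ver is used */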
+       if (fw_has_api(&drv->fw.ucode_capa, IWL_UCODE_TLV_API_NEW_VERSION))
+               api_ver = drv->fw.ucode_ver;
+       else
+               api_ver = IWL_UCODE_API(drv->fw.ucode_ver);
 
        /*
         * api_ver should match the api version forming part of the
index 843232b..37dc09e 100644 (file)
@@ -251,6 +251,7 @@ typedef unsigned int __bitwise__ iwl_ucode_tlv_api_t;
  * @IWL_UCODE_TLV_API_WIFI_MCC_UPDATE: ucode supports MCC updates with source.
  * @IWL_UCODE_TLV_API_WIDE_CMD_HDR: ucode supports wide command header
  * @IWL_UCODE_TLV_API_LQ_SS_PARAMS: Configure STBC/BFER via LQ CMD ss_params
+ * @IWL_UCODE_TLV_API_NEW_VERSION: new versioning format
  * @IWL_UCODE_TLV_API_EXT_SCAN_PRIORITY: scan APIs use 8-level priority
  *     instead of 3.
  * @IWL_UCODE_TLV_API_TX_POWER_CHAIN: TX power API has larger command size
@@ -263,6 +264,7 @@ enum iwl_ucode_tlv_api {
        IWL_UCODE_TLV_API_WIFI_MCC_UPDATE       = (__force iwl_ucode_tlv_api_t)9,
        IWL_UCODE_TLV_API_WIDE_CMD_HDR          = (__force iwl_ucode_tlv_api_t)14,
        IWL_UCODE_TLV_API_LQ_SS_PARAMS          = (__force iwl_ucode_tlv_api_t)18,
+       IWL_UCODE_TLV_API_NEW_VERSION           = (__force iwl_ucode_tlv_api_t)20,
        IWL_UCODE_TLV_API_EXT_SCAN_PRIORITY     = (__force iwl_ucode_tlv_api_t)24,
        IWL_UCODE_TLV_API_TX_POWER_CHAIN        = (__force iwl_ucode_tlv_api_t)27,
 
index cbb5947..e25171f 100644 (file)
@@ -609,7 +609,8 @@ void iwl_mvm_fw_error_dump(struct iwl_mvm *mvm)
        }
 
        /* Make room for fw's virtual image pages, if it exists */
-       if (mvm->fw->img[mvm->cur_ucode].paging_mem_size)
+       if (mvm->fw->img[mvm->cur_ucode].paging_mem_size &&
+           mvm->fw_paging_db[0].fw_paging_block)
                file_len += mvm->num_of_paging_blk *
                        (sizeof(*dump_data) +
                         sizeof(struct iwl_fw_error_dump_paging) +
@@ -750,7 +751,8 @@ void iwl_mvm_fw_error_dump(struct iwl_mvm *mvm)
        }
 
        /* Dump fw's virtual image */
-       if (mvm->fw->img[mvm->cur_ucode].paging_mem_size) {
+       if (mvm->fw->img[mvm->cur_ucode].paging_mem_size &&
+           mvm->fw_paging_db[0].fw_paging_block) {
                for (i = 1; i < mvm->num_of_paging_blk + 1; i++) {
                        struct iwl_fw_error_dump_paging *paging;
                        struct page *pages =
index 9e97cf4..b70f453 100644 (file)
@@ -149,9 +149,11 @@ void iwl_free_fw_paging(struct iwl_mvm *mvm)
 
                __free_pages(mvm->fw_paging_db[i].fw_paging_block,
                             get_order(mvm->fw_paging_db[i].fw_paging_size));
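+               /* clear the pointer so a later fw error dump skips freed pages */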
+               mvm->fw_paging_db[i].fw_paging_block = NULL;
        }
        kfree(mvm->trans->paging_download_buf);
        mvm->trans->paging_download_buf = NULL;
+       mvm->trans->paging_db = NULL;
 
        memset(mvm->fw_paging_db, 0, sizeof(mvm->fw_paging_db));
 }
index 41c6dd5..de42066 100644 (file)
@@ -479,8 +479,18 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
        {IWL_PCI_DEVICE(0x24F3, 0x0930, iwl8260_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24F3, 0x0000, iwl8265_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24FD, 0x0010, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x0110, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x1110, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x1010, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x0050, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x0150, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x9010, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x8110, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x8050, iwl8265_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24FD, 0x8010, iwl8265_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24FD, 0x0810, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x9110, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x8130, iwl8265_2ac_cfg)},
 
 /* 9000 Series */
        {IWL_PCI_DEVICE(0x9DF0, 0x0A10, iwl9560_2ac_cfg)},
index 515aa3f..a8a9bd8 100644 (file)
@@ -1794,7 +1794,7 @@ static int prism2_transmit(struct net_device *dev, int idx)
                netif_wake_queue(dev);
                return -1;
        }
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        /* Since we did not wait for command completion, the card continues
         * to process on the background and we will finish handling when
index 7b5c554..7afe200 100644 (file)
@@ -1794,7 +1794,7 @@ void orinoco_reset(struct work_struct *work)
                        printk(KERN_ERR "%s: orinoco_reset: Error %d reenabling card\n",
                               dev->name, err);
                } else
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
        }
 
        orinoco_unlock_irq(priv);
index f2cd513..56f109b 100644 (file)
@@ -1275,7 +1275,7 @@ static netdev_tx_t ezusb_xmit(struct sk_buff *skb, struct net_device *dev)
                goto busy;
        }
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        stats->tx_bytes += skb->len;
        goto ok;
 
index 333c1a2..6700387 100644 (file)
@@ -19,6 +19,7 @@
 #include <linux/module.h>
 #include <linux/types.h>
 #include <linux/delay.h>
+#include <linux/ktime.h>
 
 #include <asm/uaccess.h>
 #include <asm/io.h>
@@ -113,7 +114,7 @@ isl38xx_trigger_device(int asleep, void __iomem *device_base)
 
 #if VERBOSE > SHOW_ERROR_MESSAGES
        u32 counter = 0;
-       struct timeval current_time;
+       struct timespec64 current_ts64;
        DEBUG(SHOW_FUNCTION_CALLS, "isl38xx trigger device\n");
 #endif
 
@@ -121,22 +122,22 @@ isl38xx_trigger_device(int asleep, void __iomem *device_base)
        if (asleep) {
                /* device is in powersave, trigger the device for wakeup */
 #if VERBOSE > SHOW_ERROR_MESSAGES
-               do_gettimeofday(&current_time);
-               DEBUG(SHOW_TRACING, "%08li.%08li Device wakeup triggered\n",
-                     current_time.tv_sec, (long)current_time.tv_usec);
+               ktime_get_real_ts64(&current_ts64);
+               DEBUG(SHOW_TRACING, "%lld.%09ld Device wakeup triggered\n",
+                     (s64)current_ts64.tv_sec, current_ts64.tv_nsec);
 
-               DEBUG(SHOW_TRACING, "%08li.%08li Device register read %08x\n",
-                     current_time.tv_sec, (long)current_time.tv_usec,
+               DEBUG(SHOW_TRACING, "%lld.%09ld Device register read %08x\n",
+                     (s64)current_ts64.tv_sec, current_ts64.tv_nsec,
                      readl(device_base + ISL38XX_CTRL_STAT_REG));
 #endif
 
                reg = readl(device_base + ISL38XX_INT_IDENT_REG);
                if (reg == 0xabadface) {
 #if VERBOSE > SHOW_ERROR_MESSAGES
-                       do_gettimeofday(&current_time);
+                       ktime_get_real_ts64(&current_ts64);
                        DEBUG(SHOW_TRACING,
-                             "%08li.%08li Device register abadface\n",
-                             current_time.tv_sec, (long)current_time.tv_usec);
+                             "%lld.%09ld Device register abadface\n",
+                             (s64)current_ts64.tv_sec, current_ts64.tv_nsec);
 #endif
                        /* read the Device Status Register until Sleepmode bit is set */
                        while (reg = readl(device_base + ISL38XX_CTRL_STAT_REG),
@@ -149,13 +150,13 @@ isl38xx_trigger_device(int asleep, void __iomem *device_base)
 
 #if VERBOSE > SHOW_ERROR_MESSAGES
                        DEBUG(SHOW_TRACING,
-                             "%08li.%08li Device register read %08x\n",
-                             current_time.tv_sec, (long)current_time.tv_usec,
+                             "%lld.%09ld Device register read %08x\n",
+                             (s64)current_ts64.tv_sec, current_ts64.tv_nsec,
                              readl(device_base + ISL38XX_CTRL_STAT_REG));
-                       do_gettimeofday(&current_time);
+                       ktime_get_real_ts64(&current_ts64);
                        DEBUG(SHOW_TRACING,
-                             "%08li.%08li Device asleep counter %i\n",
-                             current_time.tv_sec, (long)current_time.tv_usec,
+                             "%lld.%09ld Device asleep counter %i\n",
+                             (s64)current_ts64.tv_sec, current_ts64.tv_nsec,
                              counter);
 #endif
                }
@@ -168,9 +169,9 @@ isl38xx_trigger_device(int asleep, void __iomem *device_base)
 
                /* perform another read on the Device Status Register */
                reg = readl(device_base + ISL38XX_CTRL_STAT_REG);
-               do_gettimeofday(&current_time);
-               DEBUG(SHOW_TRACING, "%08li.%08li Device register read %08x\n",
-                     current_time.tv_sec, (long)current_time.tv_usec, reg);
+               ktime_get_real_ts64(&current_ts64);
+               DEBUG(SHOW_TRACING, "%lld.%09ld Device register read %08x\n",
+                     (s64)current_ts64.tv_sec, current_ts64.tv_nsec, reg);
 #endif
        } else {
                /* device is (still) awake  */
index c757f14..9ed0ed1 100644 (file)
@@ -1030,7 +1030,7 @@ static void mac80211_hwsim_tx_frame_nl(struct ieee80211_hw *hw,
        data->pending_cookie++;
        cookie = data->pending_cookie;
        info->rate_driver_data[0] = (void *)cookie;
-       if (nla_put_u64(skb, HWSIM_ATTR_COOKIE, cookie))
+       if (nla_put_u64_64bit(skb, HWSIM_ATTR_COOKIE, cookie, HWSIM_ATTR_PAD))
                goto nla_put_failure;
 
        genlmsg_end(skb, msg_head);
index 66e1c73..39f2246 100644 (file)
@@ -148,6 +148,7 @@ enum {
        HWSIM_ATTR_RADIO_NAME,
        HWSIM_ATTR_NO_VIF,
        HWSIM_ATTR_FREQ,
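+       /* pad attribute for 64-bit netlink attribute alignment */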
+       HWSIM_ATTR_PAD,
        __HWSIM_ATTR_MAX,
 };
 #define HWSIM_ATTR_MAX (__HWSIM_ATTR_MAX - 1)
index 6db202f..ff948a9 100644 (file)
@@ -3344,6 +3344,7 @@ static int mwifiex_cfg80211_resume(struct wiphy *wiphy)
        struct mwifiex_ds_wakeup_reason wakeup_reason;
        struct cfg80211_wowlan_wakeup wakeup_report;
        int i;
+       bool report_wakeup_reason = true;
 
        for (i = 0; i < adapter->priv_num; i++) {
                priv = adapter->priv[i];
@@ -3354,6 +3355,9 @@ static int mwifiex_cfg80211_resume(struct wiphy *wiphy)
                }
        }
 
+       if (!wiphy->wowlan_config)
+               goto done;
+
        priv = mwifiex_get_priv(adapter, MWIFIEX_BSS_ROLE_STA);
        mwifiex_get_wakeup_reason(priv, HostCmd_ACT_GEN_GET, MWIFIEX_SYNC_CMD,
                                  &wakeup_reason);
@@ -3386,23 +3390,20 @@ static int mwifiex_cfg80211_resume(struct wiphy *wiphy)
                if (wiphy->wowlan_config->n_patterns)
                        wakeup_report.pattern_idx = 1;
                break;
-       case CONTROL_FRAME_MATCHED:
-               break;
-       case    MANAGEMENT_FRAME_MATCHED:
-               break;
        case GTK_REKEY_FAILURE:
                if (wiphy->wowlan_config->gtk_rekey_failure)
                        wakeup_report.gtk_rekey_failure = true;
                break;
        default:
+               report_wakeup_reason = false;
                break;
        }
 
-       if ((wakeup_reason.hs_wakeup_reason > 0) &&
-           (wakeup_reason.hs_wakeup_reason <= 7))
+       if (report_wakeup_reason)
                cfg80211_report_wowlan_wakeup(&priv->wdev, &wakeup_report,
                                              GFP_KERNEL);
 
+done:
        if (adapter->nd_info) {
                for (i = 0 ; i < adapter->nd_info->n_matches ; i++)
                        kfree(adapter->nd_info->matches[i]);
index a12adee..6bc2011 100644 (file)
@@ -104,6 +104,47 @@ mwifiex_clean_cmd_node(struct mwifiex_adapter *adapter,
        }
 }
 
+/*
+ * This function returns a command to the command free queue.
+ *
+ * The function also calls the completion callback if required, before
+ * cleaning the command node and re-inserting it into the free queue.
+ */
+static void
+mwifiex_insert_cmd_to_free_q(struct mwifiex_adapter *adapter,
+                            struct cmd_ctrl_node *cmd_node)
+{
+       unsigned long flags;
+
+       if (!cmd_node)
+               return;
+
+       if (cmd_node->wait_q_enabled)
+               mwifiex_complete_cmd(adapter, cmd_node);
+       /* Clean the node */
+       mwifiex_clean_cmd_node(adapter, cmd_node);
+
+       /* Insert node into cmd_free_q */
+       spin_lock_irqsave(&adapter->cmd_free_q_lock, flags);
+       list_add_tail(&cmd_node->list, &adapter->cmd_free_q);
+       spin_unlock_irqrestore(&adapter->cmd_free_q_lock, flags);
+}
+
+/* This function reuses a command node. */
+void mwifiex_recycle_cmd_node(struct mwifiex_adapter *adapter,
+                             struct cmd_ctrl_node *cmd_node)
+{
+       struct host_cmd_ds_command *host_cmd = (void *)cmd_node->cmd_skb->data;
+
+       mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
+
+       atomic_dec(&adapter->cmd_pending);
+       mwifiex_dbg(adapter, CMD,
+                   "cmd: FREE_CMD: cmd=%#x, cmd_pending=%d\n",
+               le16_to_cpu(host_cmd->command),
+               atomic_read(&adapter->cmd_pending));
+}
+
 /*
  * This function sends a host command to the firmware.
  *
@@ -613,47 +654,6 @@ int mwifiex_send_cmd(struct mwifiex_private *priv, u16 cmd_no,
        return ret;
 }
 
-/*
- * This function returns a command to the command free queue.
- *
- * The function also calls the completion callback if required, before
- * cleaning the command node and re-inserting it into the free queue.
- */
-void
-mwifiex_insert_cmd_to_free_q(struct mwifiex_adapter *adapter,
-                            struct cmd_ctrl_node *cmd_node)
-{
-       unsigned long flags;
-
-       if (!cmd_node)
-               return;
-
-       if (cmd_node->wait_q_enabled)
-               mwifiex_complete_cmd(adapter, cmd_node);
-       /* Clean the node */
-       mwifiex_clean_cmd_node(adapter, cmd_node);
-
-       /* Insert node into cmd_free_q */
-       spin_lock_irqsave(&adapter->cmd_free_q_lock, flags);
-       list_add_tail(&cmd_node->list, &adapter->cmd_free_q);
-       spin_unlock_irqrestore(&adapter->cmd_free_q_lock, flags);
-}
-
-/* This function reuses a command node. */
-void mwifiex_recycle_cmd_node(struct mwifiex_adapter *adapter,
-                             struct cmd_ctrl_node *cmd_node)
-{
-       struct host_cmd_ds_command *host_cmd = (void *)cmd_node->cmd_skb->data;
-
-       mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
-
-       atomic_dec(&adapter->cmd_pending);
-       mwifiex_dbg(adapter, CMD,
-                   "cmd: FREE_CMD: cmd=%#x, cmd_pending=%d\n",
-               le16_to_cpu(host_cmd->command),
-               atomic_read(&adapter->cmd_pending));
-}
-
 /*
  * This function queues a command to the command pending queue.
  *
@@ -991,6 +991,23 @@ mwifiex_cmd_timeout_func(unsigned long function_context)
                adapter->if_ops.card_reset(adapter);
 }
 
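+/*
+ * This function cancels all commands waiting in the scan pending queue
+ * and returns them to the command free queue.
+ */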
+void
+mwifiex_cancel_pending_scan_cmd(struct mwifiex_adapter *adapter)
+{
+       struct cmd_ctrl_node *cmd_node = NULL, *tmp_node;
+       unsigned long flags;
+
+       /* Cancel all pending scan commands */
+       spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
+       list_for_each_entry_safe(cmd_node, tmp_node,
+                                &adapter->scan_pending_q, list) {
+               list_del(&cmd_node->list);
+               cmd_node->wait_q_enabled = false;
+               mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
+       }
+       spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
+}
+
 /*
  * This function cancels all the pending commands.
  *
@@ -1009,9 +1026,9 @@ mwifiex_cancel_all_pending_cmd(struct mwifiex_adapter *adapter)
        spin_lock_irqsave(&adapter->mwifiex_cmd_lock, cmd_flags);
        /* Cancel current cmd */
        if ((adapter->curr_cmd) && (adapter->curr_cmd->wait_q_enabled)) {
-               adapter->curr_cmd->wait_q_enabled = false;
                adapter->cmd_wait_q.status = -1;
                mwifiex_complete_cmd(adapter, adapter->curr_cmd);
+               adapter->curr_cmd->wait_q_enabled = false;
                /* no recycle probably wait for response */
        }
        /* Cancel all pending command */
@@ -1029,16 +1046,7 @@ mwifiex_cancel_all_pending_cmd(struct mwifiex_adapter *adapter)
        spin_unlock_irqrestore(&adapter->cmd_pending_q_lock, flags);
        spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, cmd_flags);
 
-       /* Cancel all pending scan command */
-       spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
-       list_for_each_entry_safe(cmd_node, tmp_node,
-                                &adapter->scan_pending_q, list) {
-               list_del(&cmd_node->list);
-
-               cmd_node->wait_q_enabled = false;
-               mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
-       }
-       spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
+       mwifiex_cancel_pending_scan_cmd(adapter);
 
        if (adapter->scan_processing) {
                spin_lock_irqsave(&adapter->mwifiex_cmd_lock, cmd_flags);
@@ -1070,9 +1078,8 @@ mwifiex_cancel_all_pending_cmd(struct mwifiex_adapter *adapter)
 void
 mwifiex_cancel_pending_ioctl(struct mwifiex_adapter *adapter)
 {
-       struct cmd_ctrl_node *cmd_node = NULL, *tmp_node = NULL;
+       struct cmd_ctrl_node *cmd_node = NULL;
        unsigned long cmd_flags;
-       unsigned long scan_pending_q_flags;
        struct mwifiex_private *priv;
        int i;
 
@@ -1094,17 +1101,7 @@ mwifiex_cancel_pending_ioctl(struct mwifiex_adapter *adapter)
                mwifiex_recycle_cmd_node(adapter, cmd_node);
        }
 
-       /* Cancel all pending scan command */
-       spin_lock_irqsave(&adapter->scan_pending_q_lock,
-                         scan_pending_q_flags);
-       list_for_each_entry_safe(cmd_node, tmp_node,
-                                &adapter->scan_pending_q, list) {
-               list_del(&cmd_node->list);
-               cmd_node->wait_q_enabled = false;
-               mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
-       }
-       spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
-                              scan_pending_q_flags);
+       mwifiex_cancel_pending_scan_cmd(adapter);
 
        if (adapter->scan_processing) {
                spin_lock_irqsave(&adapter->mwifiex_cmd_lock, cmd_flags);
index 517653b..78c532f 100644 (file)
@@ -317,7 +317,7 @@ void mwifiex_set_trans_start(struct net_device *dev)
        for (i = 0; i < dev->num_tx_queues; i++)
                netdev_get_tx_queue(dev, i)->trans_start = jiffies;
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 }
 
 /*
index 04b975c..8b67a55 100644 (file)
@@ -702,6 +702,13 @@ mwifiex_close(struct net_device *dev)
                priv->scan_aborting = true;
        }
 
+       if (priv->sched_scanning) {
+               mwifiex_dbg(priv->adapter, INFO,
+                           "aborting bgscan on ndo_stop\n");
+               mwifiex_stop_bg_scan(priv);
+               cfg80211_sched_scan_stopped(priv->wdev.wiphy);
+       }
+
        return 0;
 }
 
@@ -753,13 +760,6 @@ int mwifiex_queue_tx_pkt(struct mwifiex_private *priv, struct sk_buff *skb)
 
        mwifiex_queue_main_work(priv->adapter);
 
-       if (priv->sched_scanning) {
-               mwifiex_dbg(priv->adapter, INFO,
-                           "aborting bgscan on ndo_stop\n");
-               mwifiex_stop_bg_scan(priv);
-               cfg80211_sched_scan_stopped(priv->wdev.wiphy);
-       }
-
        return 0;
 }
 
@@ -1434,7 +1434,7 @@ int mwifiex_remove_card(struct mwifiex_adapter *adapter, struct semaphore *sem)
        struct mwifiex_private *priv = NULL;
        int i;
 
-       if (down_interruptible(sem))
+       if (down_trylock(sem))
                goto exit_sem_err;
 
        if (!adapter)
index a159fbe..0207af0 100644 (file)
 #include <linux/idr.h>
 #include <linux/inetdevice.h>
 #include <linux/devcoredump.h>
+#include <linux/err.h>
+#include <linux/gpio.h>
+#include <linux/gfp.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/of_gpio.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+#include <linux/of_irq.h>
 
 #include "decl.h"
 #include "ioctl.h"
@@ -100,8 +111,8 @@ enum {
 #define SCAN_BEACON_ENTRY_PAD                  6
 
 #define MWIFIEX_PASSIVE_SCAN_CHAN_TIME 110
-#define MWIFIEX_ACTIVE_SCAN_CHAN_TIME  30
-#define MWIFIEX_SPECIFIC_SCAN_CHAN_TIME        30
+#define MWIFIEX_ACTIVE_SCAN_CHAN_TIME  40
+#define MWIFIEX_SPECIFIC_SCAN_CHAN_TIME        40
 #define MWIFIEX_DEF_SCAN_CHAN_GAP_TIME  50
 
 #define SCAN_RSSI(RSSI)                                        (0x100 - ((u8)(RSSI)))
@@ -1042,9 +1053,8 @@ int mwifiex_alloc_cmd_buffer(struct mwifiex_adapter *adapter);
 int mwifiex_free_cmd_buffer(struct mwifiex_adapter *adapter);
 void mwifiex_cancel_all_pending_cmd(struct mwifiex_adapter *adapter);
 void mwifiex_cancel_pending_ioctl(struct mwifiex_adapter *adapter);
+void mwifiex_cancel_pending_scan_cmd(struct mwifiex_adapter *adapter);
 
-void mwifiex_insert_cmd_to_free_q(struct mwifiex_adapter *adapter,
-                                 struct cmd_ctrl_node *cmd_node);
 void mwifiex_recycle_cmd_node(struct mwifiex_adapter *adapter,
                              struct cmd_ctrl_node *cmd_node);
 
index edf8b07..0c7937e 100644 (file)
@@ -2811,6 +2811,7 @@ static int mwifiex_pcie_request_irq(struct mwifiex_adapter *adapter)
 static void mwifiex_pcie_get_fw_name(struct mwifiex_adapter *adapter)
 {
        int revision_id = 0;
+       int version;
        struct pcie_service_card *card = adapter->card;
 
        switch (card->dev->device) {
@@ -2829,18 +2830,34 @@ static void mwifiex_pcie_get_fw_name(struct mwifiex_adapter *adapter)
                        strcpy(adapter->fw_name, PCIE8897_B0_FW_NAME);
                        break;
                default:
+                       strcpy(adapter->fw_name, PCIE8897_DEFAULT_FW_NAME);
+                       break;
                }
+               break;
        case PCIE_DEVICE_ID_MARVELL_88W8997:
                mwifiex_read_reg(adapter, 0x0c48, &revision_id);
+               mwifiex_read_reg(adapter, 0x0cd0, &version);
+               version &= 0x7;
                switch (revision_id) {
                case PCIE8997_V2:
-                       strcpy(adapter->fw_name, PCIE8997_FW_NAME_V2);
+                       if (version == CHIP_VER_PCIEUSB)
+                               strcpy(adapter->fw_name,
+                                      PCIEUSB8997_FW_NAME_V2);
+                       else
+                               strcpy(adapter->fw_name,
+                                      PCIEUART8997_FW_NAME_V2);
                        break;
                case PCIE8997_Z:
-                       strcpy(adapter->fw_name, PCIE8997_FW_NAME_Z);
+                       if (version == CHIP_VER_PCIEUSB)
+                               strcpy(adapter->fw_name,
+                                      PCIEUSB8997_FW_NAME_Z);
+                       else
+                               strcpy(adapter->fw_name,
+                                      PCIEUART8997_FW_NAME_Z);
                        break;
                default:
+                       strcpy(adapter->fw_name, PCIE8997_DEFAULT_FW_NAME);
                        break;
                }
        default:
index cc7a5df..5770b43 100644 (file)
 #include    "main.h"
 
 #define PCIE8766_DEFAULT_FW_NAME "mrvl/pcie8766_uapsta.bin"
+#define PCIE8897_DEFAULT_FW_NAME "mrvl/pcie8897_uapsta.bin"
 #define PCIE8897_A0_FW_NAME "mrvl/pcie8897_uapsta_a0.bin"
 #define PCIE8897_B0_FW_NAME "mrvl/pcie8897_uapsta.bin"
-#define PCIE8997_FW_NAME_Z "mrvl/pcieusb8997_combo.bin"
-#define PCIE8997_FW_NAME_V2 "mrvl/pcieusb8997_combo_v2.bin"
+#define PCIE8997_DEFAULT_FW_NAME "mrvl/pcieuart8997_combo_v2.bin"
+#define PCIEUART8997_FW_NAME_Z "mrvl/pcieuart8997_combo.bin"
+#define PCIEUART8997_FW_NAME_V2 "mrvl/pcieuart8997_combo_v2.bin"
+#define PCIEUSB8997_FW_NAME_Z "mrvl/pcieusb8997_combo.bin"
+#define PCIEUSB8997_FW_NAME_V2 "mrvl/pcieusb8997_combo_v2.bin"
 
 #define PCIE_VENDOR_ID_MARVELL              (0x11ab)
 #define PCIE_VENDOR_ID_V2_MARVELL           (0x1b4b)
@@ -45,6 +49,7 @@
 #define PCIE8897_B0    0x1200
 #define PCIE8997_Z     0x0
 #define PCIE8997_V2    0x471
+#define CHIP_VER_PCIEUSB       0x2
 
 /* Constants for Buffer Descriptor (BD) rings */
 #define MWIFIEX_MAX_TXRX_BD                    0x20
index 624b0a9..bc5e52c 100644 (file)
@@ -76,6 +76,39 @@ static u8 mwifiex_rsn_oui[CIPHER_SUITE_MAX][4] = {
        { 0x00, 0x0f, 0xac, 0x04 },     /* AES  */
 };
 
+static void
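+/* Log the security flags of a BSS descriptor on a single debug line. */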
+_dbg_security_flags(int log_level, const char *func, const char *desc,
+                   struct mwifiex_private *priv,
+                   struct mwifiex_bssdescriptor *bss_desc)
+{
+       _mwifiex_dbg(priv->adapter, log_level,
+                    "info: %s: %s:\twpa_ie=%#x wpa2_ie=%#x WEP=%s WPA=%s WPA2=%s\tEncMode=%#x privacy=%#x\n",
+                    func, desc,
+                    bss_desc->bcn_wpa_ie ?
+                    bss_desc->bcn_wpa_ie->vend_hdr.element_id : 0,
+                    bss_desc->bcn_rsn_ie ?
+                    bss_desc->bcn_rsn_ie->ieee_hdr.element_id : 0,
+                    priv->sec_info.wep_enabled ? "e" : "d",
+                    priv->sec_info.wpa_enabled ? "e" : "d",
+                    priv->sec_info.wpa2_enabled ? "e" : "d",
+                    priv->sec_info.encryption_mode,
+                    bss_desc->privacy);
+}
+#define dbg_security_flags(mask, desc, priv, bss_desc) \
+       _dbg_security_flags(MWIFIEX_DBG_##mask, __func__, desc, priv, bss_desc)
+
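+/* Helpers to test an optional IE for a given element id. */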
+static bool
+has_ieee_hdr(struct ieee_types_generic *ie, u8 key)
+{
+       return (ie && ie->ieee_hdr.element_id == key);
+}
+
+static bool
+has_vendor_hdr(struct ieee_types_vendor_specific *ie, u8 key)
+{
+       return (ie && ie->vend_hdr.element_id == key);
+}
+
 /*
  * This function parses a given IE for a given OUI.
  *
@@ -121,8 +154,7 @@ mwifiex_is_rsn_oui_present(struct mwifiex_bssdescriptor *bss_desc, u32 cipher)
        struct ie_body *iebody;
        u8 ret = MWIFIEX_OUI_NOT_PRESENT;
 
-       if (((bss_desc->bcn_rsn_ie) && ((*(bss_desc->bcn_rsn_ie)).
-                                       ieee_hdr.element_id == WLAN_EID_RSN))) {
+       if (has_ieee_hdr(bss_desc->bcn_rsn_ie, WLAN_EID_RSN)) {
                iebody = (struct ie_body *)
                         (((u8 *) bss_desc->bcn_rsn_ie->data) +
                          RSN_GTK_OUI_OFFSET);
@@ -148,9 +180,7 @@ mwifiex_is_wpa_oui_present(struct mwifiex_bssdescriptor *bss_desc, u32 cipher)
        struct ie_body *iebody;
        u8 ret = MWIFIEX_OUI_NOT_PRESENT;
 
-       if (((bss_desc->bcn_wpa_ie) &&
-            ((*(bss_desc->bcn_wpa_ie)).vend_hdr.element_id ==
-             WLAN_EID_VENDOR_SPECIFIC))) {
+       if (has_vendor_hdr(bss_desc->bcn_wpa_ie, WLAN_EID_VENDOR_SPECIFIC)) {
                iebody = (struct ie_body *) bss_desc->bcn_wpa_ie->data;
                oui = &mwifiex_wpa_oui[cipher][0];
                ret = mwifiex_search_oui_in_ie(iebody, oui);
@@ -180,11 +210,8 @@ mwifiex_is_bss_wapi(struct mwifiex_private *priv,
                    struct mwifiex_bssdescriptor *bss_desc)
 {
        if (priv->sec_info.wapi_enabled &&
-           (bss_desc->bcn_wapi_ie &&
-            ((*(bss_desc->bcn_wapi_ie)).ieee_hdr.element_id ==
-                       WLAN_EID_BSS_AC_ACCESS_DELAY))) {
+           has_ieee_hdr(bss_desc->bcn_wapi_ie, WLAN_EID_BSS_AC_ACCESS_DELAY))
                return true;
-       }
        return false;
 }
 
@@ -197,12 +224,9 @@ mwifiex_is_bss_no_sec(struct mwifiex_private *priv,
                      struct mwifiex_bssdescriptor *bss_desc)
 {
        if (!priv->sec_info.wep_enabled && !priv->sec_info.wpa_enabled &&
-           !priv->sec_info.wpa2_enabled && ((!bss_desc->bcn_wpa_ie) ||
-               ((*(bss_desc->bcn_wpa_ie)).vend_hdr.element_id !=
-                WLAN_EID_VENDOR_SPECIFIC)) &&
-           ((!bss_desc->bcn_rsn_ie) ||
-               ((*(bss_desc->bcn_rsn_ie)).ieee_hdr.element_id !=
-                WLAN_EID_RSN)) &&
+           !priv->sec_info.wpa2_enabled &&
+           !has_vendor_hdr(bss_desc->bcn_wpa_ie, WLAN_EID_VENDOR_SPECIFIC) &&
+           !has_ieee_hdr(bss_desc->bcn_rsn_ie, WLAN_EID_RSN) &&
            !priv->sec_info.encryption_mode && !bss_desc->privacy) {
                return true;
        }
@@ -233,29 +257,14 @@ mwifiex_is_bss_wpa(struct mwifiex_private *priv,
                   struct mwifiex_bssdescriptor *bss_desc)
 {
        if (!priv->sec_info.wep_enabled && priv->sec_info.wpa_enabled &&
-           !priv->sec_info.wpa2_enabled && ((bss_desc->bcn_wpa_ie) &&
-           ((*(bss_desc->bcn_wpa_ie)).
-            vend_hdr.element_id == WLAN_EID_VENDOR_SPECIFIC))
+           !priv->sec_info.wpa2_enabled &&
+           has_vendor_hdr(bss_desc->bcn_wpa_ie, WLAN_EID_VENDOR_SPECIFIC)
           /*
            * Privacy bit may NOT be set in some APs like
            * LinkSys WRT54G && bss_desc->privacy
            */
         ) {
-               mwifiex_dbg(priv->adapter, INFO,
-                           "info: %s: WPA:\t"
-                           "wpa_ie=%#x wpa2_ie=%#x WEP=%s WPA=%s WPA2=%s\t"
-                           "EncMode=%#x privacy=%#x\n", __func__,
-                           (bss_desc->bcn_wpa_ie) ?
-                           (*bss_desc->bcn_wpa_ie).
-                           vend_hdr.element_id : 0,
-                           (bss_desc->bcn_rsn_ie) ?
-                           (*bss_desc->bcn_rsn_ie).
-                           ieee_hdr.element_id : 0,
-                           (priv->sec_info.wep_enabled) ? "e" : "d",
-                           (priv->sec_info.wpa_enabled) ? "e" : "d",
-                           (priv->sec_info.wpa2_enabled) ? "e" : "d",
-                           priv->sec_info.encryption_mode,
-                           bss_desc->privacy);
+               dbg_security_flags(INFO, "WPA", priv, bss_desc);
                return true;
        }
        return false;
@@ -269,30 +278,14 @@ static bool
 mwifiex_is_bss_wpa2(struct mwifiex_private *priv,
                    struct mwifiex_bssdescriptor *bss_desc)
 {
-       if (!priv->sec_info.wep_enabled &&
-           !priv->sec_info.wpa_enabled &&
+       if (!priv->sec_info.wep_enabled && !priv->sec_info.wpa_enabled &&
            priv->sec_info.wpa2_enabled &&
-           ((bss_desc->bcn_rsn_ie) &&
-            ((*(bss_desc->bcn_rsn_ie)).ieee_hdr.element_id == WLAN_EID_RSN))) {
+           has_ieee_hdr(bss_desc->bcn_rsn_ie, WLAN_EID_RSN)) {
                /*
                 * Privacy bit may NOT be set in some APs like
                 * LinkSys WRT54G && bss_desc->privacy
                 */
-               mwifiex_dbg(priv->adapter, INFO,
-                           "info: %s: WPA2:\t"
-                           "wpa_ie=%#x wpa2_ie=%#x WEP=%s WPA=%s WPA2=%s\t"
-                           "EncMode=%#x privacy=%#x\n", __func__,
-                           (bss_desc->bcn_wpa_ie) ?
-                           (*bss_desc->bcn_wpa_ie).
-                           vend_hdr.element_id : 0,
-                           (bss_desc->bcn_rsn_ie) ?
-                           (*bss_desc->bcn_rsn_ie).
-                           ieee_hdr.element_id : 0,
-                           (priv->sec_info.wep_enabled) ? "e" : "d",
-                           (priv->sec_info.wpa_enabled) ? "e" : "d",
-                           (priv->sec_info.wpa2_enabled) ? "e" : "d",
-                           priv->sec_info.encryption_mode,
-                           bss_desc->privacy);
+               dbg_security_flags(INFO, "WPA2", priv, bss_desc);
                return true;
        }
        return false;
@@ -308,11 +301,8 @@ mwifiex_is_bss_adhoc_aes(struct mwifiex_private *priv,
 {
        if (!priv->sec_info.wep_enabled && !priv->sec_info.wpa_enabled &&
            !priv->sec_info.wpa2_enabled &&
-           ((!bss_desc->bcn_wpa_ie) ||
-            ((*(bss_desc->bcn_wpa_ie)).
-             vend_hdr.element_id != WLAN_EID_VENDOR_SPECIFIC)) &&
-           ((!bss_desc->bcn_rsn_ie) ||
-            ((*(bss_desc->bcn_rsn_ie)).ieee_hdr.element_id != WLAN_EID_RSN)) &&
+           !has_vendor_hdr(bss_desc->bcn_wpa_ie, WLAN_EID_VENDOR_SPECIFIC) &&
+           !has_ieee_hdr(bss_desc->bcn_rsn_ie, WLAN_EID_RSN) &&
            !priv->sec_info.encryption_mode && bss_desc->privacy) {
                return true;
        }
@@ -329,25 +319,10 @@ mwifiex_is_bss_dynamic_wep(struct mwifiex_private *priv,
 {
        if (!priv->sec_info.wep_enabled && !priv->sec_info.wpa_enabled &&
            !priv->sec_info.wpa2_enabled &&
-           ((!bss_desc->bcn_wpa_ie) ||
-            ((*(bss_desc->bcn_wpa_ie)).
-             vend_hdr.element_id != WLAN_EID_VENDOR_SPECIFIC)) &&
-           ((!bss_desc->bcn_rsn_ie) ||
-            ((*(bss_desc->bcn_rsn_ie)).ieee_hdr.element_id != WLAN_EID_RSN)) &&
+           !has_vendor_hdr(bss_desc->bcn_wpa_ie, WLAN_EID_VENDOR_SPECIFIC) &&
+           !has_ieee_hdr(bss_desc->bcn_rsn_ie, WLAN_EID_RSN) &&
            priv->sec_info.encryption_mode && bss_desc->privacy) {
-               mwifiex_dbg(priv->adapter, INFO,
-                           "info: %s: dynamic\t"
-                           "WEP: wpa_ie=%#x wpa2_ie=%#x\t"
-                           "EncMode=%#x privacy=%#x\n",
-                           __func__,
-                           (bss_desc->bcn_wpa_ie) ?
-                           (*bss_desc->bcn_wpa_ie).
-                           vend_hdr.element_id : 0,
-                           (bss_desc->bcn_rsn_ie) ?
-                           (*bss_desc->bcn_rsn_ie).
-                           ieee_hdr.element_id : 0,
-                           priv->sec_info.encryption_mode,
-                           bss_desc->privacy);
+               dbg_security_flags(INFO, "dynamic", priv, bss_desc);
                return true;
        }
        return false;
@@ -460,18 +435,7 @@ mwifiex_is_network_compatible(struct mwifiex_private *priv,
                }
 
                /* Security doesn't match */
-               mwifiex_dbg(adapter, ERROR,
-                           "info: %s: failed: wpa_ie=%#x wpa2_ie=%#x WEP=%s\t"
-                           "WPA=%s WPA2=%s EncMode=%#x privacy=%#x\n",
-                           __func__,
-                           (bss_desc->bcn_wpa_ie) ?
-                           (*bss_desc->bcn_wpa_ie).vend_hdr.element_id : 0,
-                           (bss_desc->bcn_rsn_ie) ?
-                           (*bss_desc->bcn_rsn_ie).ieee_hdr.element_id : 0,
-                           (priv->sec_info.wep_enabled) ? "e" : "d",
-                           (priv->sec_info.wpa_enabled) ? "e" : "d",
-                           (priv->sec_info.wpa2_enabled) ? "e" : "d",
-                           priv->sec_info.encryption_mode, bss_desc->privacy);
+               dbg_security_flags(ERROR, "failed", priv, bss_desc);
                return -1;
        }
 
@@ -534,11 +498,13 @@ mwifiex_scan_create_channel_list(struct mwifiex_private *priv,
                                        &= ~MWIFIEX_PASSIVE_SCAN;
                        scan_chan_list[chan_idx].chan_number =
                                                        (u32) ch->hw_value;
+
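+                       /* always disable firmware channel filtering */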
+                       scan_chan_list[chan_idx].chan_scan_mode_bitmap
+                                       |= MWIFIEX_DISABLE_CHAN_FILT;
+
                        if (filtered_scan) {
                                scan_chan_list[chan_idx].max_scan_time =
                                cpu_to_le16(adapter->specific_scan_time);
-                               scan_chan_list[chan_idx].chan_scan_mode_bitmap
-                                       |= MWIFIEX_DISABLE_CHAN_FILT;
                        }
                        chan_idx++;
                }
@@ -655,8 +621,6 @@ mwifiex_scan_channel_list(struct mwifiex_private *priv,
        int ret = 0;
        struct mwifiex_chan_scan_param_set *tmp_chan_list;
        struct mwifiex_chan_scan_param_set *start_chan;
-       struct cmd_ctrl_node *cmd_node, *tmp_node;
-       unsigned long flags;
        u32 tlv_idx, rates_size, cmd_no;
        u32 total_scan_time;
        u32 done_early;
@@ -813,16 +777,7 @@ mwifiex_scan_channel_list(struct mwifiex_private *priv,
                            sizeof(struct mwifiex_ie_types_header) + rates_size;
 
                if (ret) {
-                       spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
-                       list_for_each_entry_safe(cmd_node, tmp_node,
-                                                &adapter->scan_pending_q,
-                                                list) {
-                               list_del(&cmd_node->list);
-                               cmd_node->wait_q_enabled = false;
-                               mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
-                       }
-                       spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
-                                              flags);
+                       mwifiex_cancel_pending_scan_cmd(adapter);
                        break;
                }
        }
@@ -912,14 +867,11 @@ mwifiex_config_scan(struct mwifiex_private *priv,
                /* Set the BSS type scan filter, use Adapter setting if
                   unset */
                scan_cfg_out->bss_mode =
-                       (user_scan_in->bss_mode ? (u8) user_scan_in->
-                        bss_mode : (u8) adapter->scan_mode);
+                       (u8)(user_scan_in->bss_mode ?: adapter->scan_mode);
 
                /* Set the number of probes to send, use Adapter setting
                   if unset */
-               num_probes =
-                       (user_scan_in->num_probes ? user_scan_in->
-                        num_probes : adapter->scan_probes);
+               num_probes = user_scan_in->num_probes ?: adapter->scan_probes;
 
                /*
                 * Set the BSSID filter to the incoming configuration,
@@ -1094,28 +1046,24 @@ mwifiex_config_scan(struct mwifiex_private *priv,
                     chan_idx++) {
 
                        channel = user_scan_in->chan_list[chan_idx].chan_number;
-                       (scan_chan_list + chan_idx)->chan_number = channel;
+                       scan_chan_list[chan_idx].chan_number = channel;
 
                        radio_type =
                                user_scan_in->chan_list[chan_idx].radio_type;
-                       (scan_chan_list + chan_idx)->radio_type = radio_type;
+                       scan_chan_list[chan_idx].radio_type = radio_type;
 
                        scan_type = user_scan_in->chan_list[chan_idx].scan_type;
 
                        if (scan_type == MWIFIEX_SCAN_TYPE_PASSIVE)
-                               (scan_chan_list +
-                                chan_idx)->chan_scan_mode_bitmap
+                               scan_chan_list[chan_idx].chan_scan_mode_bitmap
                                        |= (MWIFIEX_PASSIVE_SCAN |
                                            MWIFIEX_HIDDEN_SSID_REPORT);
                        else
-                               (scan_chan_list +
-                                chan_idx)->chan_scan_mode_bitmap
+                               scan_chan_list[chan_idx].chan_scan_mode_bitmap
                                        &= ~MWIFIEX_PASSIVE_SCAN;
 
-                       if (*filtered_scan)
-                               (scan_chan_list +
-                                chan_idx)->chan_scan_mode_bitmap
-                                       |= MWIFIEX_DISABLE_CHAN_FILT;
+                       scan_chan_list[chan_idx].chan_scan_mode_bitmap
+                               |= MWIFIEX_DISABLE_CHAN_FILT;
 
                        if (user_scan_in->chan_list[chan_idx].scan_time) {
                                scan_dur = (u16) user_scan_in->
@@ -1129,9 +1077,9 @@ mwifiex_config_scan(struct mwifiex_private *priv,
                                        scan_dur = adapter->active_scan_time;
                        }
 
-                       (scan_chan_list + chan_idx)->min_scan_time =
+                       scan_chan_list[chan_idx].min_scan_time =
                                cpu_to_le16(scan_dur);
-                       (scan_chan_list + chan_idx)->max_scan_time =
+                       scan_chan_list[chan_idx].max_scan_time =
                                cpu_to_le16(scan_dur);
                }
 
@@ -1991,12 +1939,13 @@ mwifiex_active_scan_req_for_passive_chan(struct mwifiex_private *priv)
 static void mwifiex_check_next_scan_command(struct mwifiex_private *priv)
 {
        struct mwifiex_adapter *adapter = priv->adapter;
-       struct cmd_ctrl_node *cmd_node, *tmp_node;
+       struct cmd_ctrl_node *cmd_node;
        unsigned long flags;
 
        spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
        if (list_empty(&adapter->scan_pending_q)) {
                spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
+
                spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
                adapter->scan_processing = false;
                spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
@@ -2018,13 +1967,10 @@ static void mwifiex_check_next_scan_command(struct mwifiex_private *priv)
                }
        } else if ((priv->scan_aborting && !priv->scan_request) ||
                   priv->scan_block) {
-               list_for_each_entry_safe(cmd_node, tmp_node,
-                                        &adapter->scan_pending_q, list) {
-                       list_del(&cmd_node->list);
-                       mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
-               }
                spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
 
+               mwifiex_cancel_pending_scan_cmd(adapter);
+
                spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
                adapter->scan_processing = false;
                spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
index a0aec3e..099722e 100644 (file)
@@ -73,6 +73,66 @@ static struct memory_type_mapping mem_type_mapping_tbl[] = {
        {"EXTLAST", NULL, 0, 0xFE},
 };
 
+static const struct of_device_id mwifiex_sdio_of_match_table[] = {
+       { .compatible = "marvell,sd8897" },
+       { .compatible = "marvell,sd8997" },
+       { }
+};
+
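+/* IRQ handler for the platform wakeup pin: note the wake and mask the irq. */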
+static irqreturn_t mwifiex_wake_irq_wifi(int irq, void *priv)
+{
+       struct mwifiex_plt_wake_cfg *cfg = priv;
+
+       if (cfg->irq_wifi >= 0) {
+               pr_info("%s: wake by wifi\n", __func__);
+               cfg->wake_by_wifi = true;
+               disable_irq_nosync(irq);
+       }
+
+       return IRQ_HANDLED;
+}
+
+/* This function parses the device tree node using the mmc subnode
+ * devicetree API. The device node is saved in card->plt_of_node.
+ * If the device tree node exists and includes an interrupts attribute,
+ * this function will also request the platform specific wakeup interrupt.
+ */
+static int mwifiex_sdio_probe_of(struct device *dev, struct sdio_mmc_card *card)
+{
+       struct mwifiex_plt_wake_cfg *cfg;
+       int ret;
+
+       if (!dev->of_node ||
+           !of_match_node(mwifiex_sdio_of_match_table, dev->of_node)) {
+               pr_err("sdio platform data not available\n");
+               return -1;
+       }
+
+       card->plt_of_node = dev->of_node;
+       card->plt_wake_cfg = devm_kzalloc(dev, sizeof(*card->plt_wake_cfg),
+                                         GFP_KERNEL);
+       cfg = card->plt_wake_cfg;
+       if (cfg && card->plt_of_node) {
+               cfg->irq_wifi = irq_of_parse_and_map(card->plt_of_node, 0);
+               if (!cfg->irq_wifi) {
+                       dev_err(dev, "failed to parse irq_wifi from device tree\n");
+               } else {
+                       ret = devm_request_irq(dev, cfg->irq_wifi,
+                                              mwifiex_wake_irq_wifi,
+                                              IRQF_TRIGGER_LOW,
+                                              "wifi_wake", cfg);
+                       if (ret) {
+                               dev_err(dev,
+                                       "Failed to request irq_wifi %d (%d)\n",
+                                       cfg->irq_wifi, ret);
+                       }
+                       disable_irq(cfg->irq_wifi);
+               }
+       }
+
+       return 0;
+}
+
 /*
  * SDIO probe.
  *
@@ -127,6 +187,9 @@ mwifiex_sdio_probe(struct sdio_func *func, const struct sdio_device_id *id)
                return -EIO;
        }
 
+       /* device tree node parsing and platform specific configuration*/
+       mwifiex_sdio_probe_of(&func->dev, card);
+
        if (mwifiex_add_card(card, &add_remove_card_sem, &sdio_ops,
                             MWIFIEX_SDIO)) {
                pr_err("%s: add card failed\n", __func__);
@@ -183,6 +246,13 @@ static int mwifiex_sdio_resume(struct device *dev)
        mwifiex_cancel_hs(mwifiex_get_priv(adapter, MWIFIEX_BSS_ROLE_STA),
                          MWIFIEX_SYNC_CMD);
 
+       /* Disable platform specific wakeup interrupt */
+       if (card->plt_wake_cfg && card->plt_wake_cfg->irq_wifi >= 0) {
+               disable_irq_wake(card->plt_wake_cfg->irq_wifi);
+               if (!card->plt_wake_cfg->wake_by_wifi)
+                       disable_irq(card->plt_wake_cfg->irq_wifi);
+       }
+
        return 0;
 }
 
@@ -262,6 +332,13 @@ static int mwifiex_sdio_suspend(struct device *dev)
 
        adapter = card->adapter;
 
+       /* Enable platform specific wakeup interrupt */
+       if (card->plt_wake_cfg && card->plt_wake_cfg->irq_wifi >= 0) {
+               card->plt_wake_cfg->wake_by_wifi = false;
+               enable_irq(card->plt_wake_cfg->irq_wifi);
+               enable_irq_wake(card->plt_wake_cfg->irq_wifi);
+       }
+
        /* Enable the Host Sleep */
        if (!mwifiex_enable_hs(adapter)) {
                mwifiex_dbg(adapter, ERROR,
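Taken together with the probe hunk, the suspend/resume changes implement a simple arm/disarm protocol around the wake IRQ. A condensed view of the flow, assembled only from the hunks in this diff:

    /*
     * probe:   devm_request_irq(); disable_irq();     IRQ claimed, masked
     * suspend: wake_by_wifi = false;
     *          enable_irq(); enable_irq_wake();       armed, may wake the host
     * wake:    handler sets wake_by_wifi = true;
     *          disable_irq_nosync();                  one-shot self-mask
     * resume:  disable_irq_wake();
     *          if (!wake_by_wifi) disable_irq();      re-mask if it never fired
     */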
@@ -1026,13 +1103,12 @@ static int mwifiex_prog_fw_w_helper(struct mwifiex_adapter *adapter,
                offset += txlen;
        } while (true);
 
-       sdio_release_host(card->func);
-
        mwifiex_dbg(adapter, MSG,
                    "info: FW download over, size %d bytes\n", offset);
 
        ret = 0;
 done:
+       sdio_release_host(card->func);
        kfree(fwbuf);
        return ret;
 }
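Moving sdio_release_host() under the done: label drops the host claim on every exit path, not just on success; previously an error "goto done" inside the download loop returned with the host still claimed. A self-contained userspace analogue of the idiom, where claim()/release() stand in for sdio_claim_host()/sdio_release_host():

    #include <stdio.h>
    #include <stdlib.h>

    static void claim(void)   { puts("claim host"); }
    static void release(void) { puts("release host"); }

    /* One release at the shared exit label covers the early error
     * paths that previously skipped it. */
    static int download(int fail)
    {
            char *fwbuf = NULL;
            int ret = 0;

            claim();
            if (fail) {
                    ret = -1;
                    goto done;  /* before the fix: returned with host still claimed */
            }
            fwbuf = malloc(64);
    done:
            release();
            free(fwbuf);
            return ret;
    }

    int main(void)
    {
            printf("ok path: %d\n", download(0));
            printf("err path: %d\n", download(1));
            return 0;
    }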
index b9fbc5c..db837f1 100644
        a->mpa_rx.start_port = 0;                                       \
 } while (0)
 
+struct mwifiex_plt_wake_cfg {
+       int irq_wifi;
+       bool wake_by_wifi;
+};
+
 /* data structure for SDIO MPA TX */
 struct mwifiex_sdio_mpa_tx {
        /* multiport tx aggregation buffer pointer */
@@ -237,6 +242,8 @@ struct mwifiex_sdio_card_reg {
 struct sdio_mmc_card {
        struct sdio_func *func;
        struct mwifiex_adapter *adapter;
+       struct device_node *plt_of_node;
+       struct mwifiex_plt_wake_cfg *plt_wake_cfg;
 
        const char *firmware;
        const struct mwifiex_sdio_card_reg *reg;
index 8cb895b..e436574 100644
@@ -2162,6 +2162,7 @@ int mwifiex_sta_init_cmd(struct mwifiex_private *priv, u8 first_sta, bool init)
        enum state_11d_t state_11d;
        struct mwifiex_ds_11n_tx_cfg tx_cfg;
        u8 sdio_sp_rx_aggr_enable;
+       int data;
 
        if (first_sta) {
                if (priv->adapter->iface_type == MWIFIEX_PCIE) {
@@ -2182,9 +2183,16 @@ int mwifiex_sta_init_cmd(struct mwifiex_private *priv, u8 first_sta, bool init)
                 * The cal-data can be read from device tree and/or
                 * a configuration file and downloaded to firmware.
                 */
-               adapter->dt_node =
-                               of_find_node_by_name(NULL, "marvell_cfgdata");
-               if (adapter->dt_node) {
+               if (priv->adapter->iface_type == MWIFIEX_SDIO &&
+                   adapter->dev->of_node) {
+                       adapter->dt_node = adapter->dev->of_node;
+                       if (of_property_read_u32(adapter->dt_node,
+                                                "marvell,wakeup-pin",
+                                                &data) == 0) {
+                               pr_debug("Wakeup pin = 0x%x\n", data);
+                               adapter->hs_cfg.gpio = data;
+                       }
+
                        ret = mwifiex_dnld_dt_cfgdata(priv, adapter->dt_node,
                                                      "marvell,caldata");
                        if (ret)
index 434b977..d18c797 100644
@@ -44,7 +44,6 @@ static void
 mwifiex_process_cmdresp_error(struct mwifiex_private *priv,
                              struct host_cmd_ds_command *resp)
 {
-       struct cmd_ctrl_node *cmd_node = NULL, *tmp_node;
        struct mwifiex_adapter *adapter = priv->adapter;
        struct host_cmd_ds_802_11_ps_mode_enh *pm;
        unsigned long flags;
@@ -71,17 +70,7 @@ mwifiex_process_cmdresp_error(struct mwifiex_private *priv,
                break;
        case HostCmd_CMD_802_11_SCAN:
        case HostCmd_CMD_802_11_SCAN_EXT:
-               /* Cancel all pending scan command */
-               spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
-               list_for_each_entry_safe(cmd_node, tmp_node,
-                                        &adapter->scan_pending_q, list) {
-                       list_del(&cmd_node->list);
-                       spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
-                                              flags);
-                       mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
-                       spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
-               }
-               spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
+               mwifiex_cancel_pending_scan_cmd(adapter);
 
                spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
                adapter->scan_processing = false;
index d8de432..8e08626 100644
@@ -146,6 +146,7 @@ int mwifiex_fill_new_bss_desc(struct mwifiex_private *priv,
        size_t beacon_ie_len;
        struct mwifiex_bss_priv *bss_priv = (void *)bss->priv;
        const struct cfg80211_bss_ies *ies;
+       int ret;
 
        rcu_read_lock();
        ies = rcu_dereference(bss->ies);
@@ -189,7 +190,48 @@ int mwifiex_fill_new_bss_desc(struct mwifiex_private *priv,
        if (bss_desc->cap_info_bitmap & WLAN_CAPABILITY_SPECTRUM_MGMT)
                bss_desc->sensed_11h = true;
 
-       return mwifiex_update_bss_desc_with_ie(priv->adapter, bss_desc);
+       ret = mwifiex_update_bss_desc_with_ie(priv->adapter, bss_desc);
+       if (ret)
+               return ret;
+
+       /* Update HT40 capability based on current channel information */
+       if (bss_desc->bcn_ht_oper && bss_desc->bcn_ht_cap) {
+               u8 ht_param = bss_desc->bcn_ht_oper->ht_param;
+               u8 radio = mwifiex_band_to_radio_type(bss_desc->bss_band);
+               struct ieee80211_supported_band *sband =
+                                               priv->wdev.wiphy->bands[radio];
+               int freq = ieee80211_channel_to_frequency(bss_desc->channel,
+                                                         radio);
+               struct ieee80211_channel *chan =
+                       ieee80211_get_channel(priv->adapter->wiphy, freq);
+
+               switch (ht_param & IEEE80211_HT_PARAM_CHA_SEC_OFFSET) {
+               case IEEE80211_HT_PARAM_CHA_SEC_ABOVE:
+                       if (chan->flags & IEEE80211_CHAN_NO_HT40PLUS) {
+                               sband->ht_cap.cap &=
+                                       ~IEEE80211_HT_CAP_SUP_WIDTH_20_40;
+                               sband->ht_cap.cap &= ~IEEE80211_HT_CAP_SGI_40;
+                       } else {
+                               sband->ht_cap.cap |=
+                                       IEEE80211_HT_CAP_SUP_WIDTH_20_40 |
+                                       IEEE80211_HT_CAP_SGI_40;
+                       }
+                       break;
+               case IEEE80211_HT_PARAM_CHA_SEC_BELOW:
+                       if (chan->flags & IEEE80211_CHAN_NO_HT40MINUS) {
+                               sband->ht_cap.cap &=
+                                       ~IEEE80211_HT_CAP_SUP_WIDTH_20_40;
+                               sband->ht_cap.cap &= ~IEEE80211_HT_CAP_SGI_40;
+                       } else {
+                               sband->ht_cap.cap |=
+                                       IEEE80211_HT_CAP_SUP_WIDTH_20_40 |
+                                       IEEE80211_HT_CAP_SGI_40;
+                       }
+                       break;
+               }
+       }
+
+       return 0;
 }
 
 void mwifiex_dnld_txpwr_table(struct mwifiex_private *priv)
index bf6182b..abdd0cf 100644
@@ -297,6 +297,13 @@ int mwifiex_write_data_complete(struct mwifiex_adapter *adapter,
                goto done;
 
        mwifiex_set_trans_start(priv->netdev);
+
+       if (tx_info->flags & MWIFIEX_BUF_FLAG_BRIDGED_PKT)
+               atomic_dec_return(&adapter->pending_bridged_pkts);
+
+       if (tx_info->flags & MWIFIEX_BUF_FLAG_AGGR_PKT)
+               goto done;
+
        if (!status) {
                priv->stats.tx_packets++;
                priv->stats.tx_bytes += tx_info->pkt_len;
@@ -306,12 +313,6 @@ int mwifiex_write_data_complete(struct mwifiex_adapter *adapter,
                priv->stats.tx_errors++;
        }
 
-       if (tx_info->flags & MWIFIEX_BUF_FLAG_BRIDGED_PKT)
-               atomic_dec_return(&adapter->pending_bridged_pkts);
-
-       if (tx_info->flags & MWIFIEX_BUF_FLAG_AGGR_PKT)
-               goto done;
-
        if (aggr)
                /* For skb_aggr, do not wake up tx queue */
                goto done;
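The reordering changes behaviour for two flag cases: bridged-packet accounting now runs before the status bookkeeping, and an MWIFIEX_BUF_FLAG_AGGR_PKT completion bails out before touching the per-interface counters. Schematically:

    /*
     * old order: stats update -> BRIDGED decrement -> AGGR early-out
     * new order: BRIDGED decrement -> AGGR early-out -> stats update
     *
     * An aggregated buffer therefore still releases its
     * pending_bridged_pkts reference, but no longer bumps
     * tx_packets/tx_errors for the aggregate buffer itself.
     */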
index c95b61d..666e91a 100644
@@ -102,6 +102,7 @@ static void mwifiex_uap_queue_bridged_pkt(struct mwifiex_private *priv,
        int hdr_chop;
        struct ethhdr *p_ethhdr;
        struct mwifiex_sta_node *src_node;
+       int index;
 
        uap_rx_pd = (struct uap_rxpd *)(skb->data);
        rx_pkt_hdr = (void *)uap_rx_pd + le16_to_cpu(uap_rx_pd->rx_pkt_offset);
@@ -208,6 +209,9 @@ static void mwifiex_uap_queue_bridged_pkt(struct mwifiex_private *priv,
        }
 
        __net_timestamp(skb);
+
+       index = mwifiex_1d_to_wmm_queue[skb->priority];
+       atomic_inc(&priv->wmm_tx_pending[index]);
        mwifiex_wmm_add_buf_txqueue(priv, skb);
        atomic_inc(&adapter->tx_pending);
        atomic_inc(&adapter->pending_bridged_pkts);
index 0510861..0857575 100644
@@ -995,7 +995,8 @@ static int mwifiex_prog_fw_w_helper(struct mwifiex_adapter *adapter,
 {
        int ret = 0;
        u8 *firmware = fw->fw_buf, *recv_buff;
-       u32 retries = USB8XXX_FW_MAX_RETRY, dlen;
+       u32 retries = USB8XXX_FW_MAX_RETRY + 1;
+       u32 dlen;
        u32 fw_seqnum = 0, tlen = 0, dnld_cmd = 0;
        struct fw_data *fwdata;
        struct fw_sync_header sync_fw;
@@ -1017,8 +1018,10 @@ static int mwifiex_prog_fw_w_helper(struct mwifiex_adapter *adapter,
 
        /* Allocate memory for receive */
        recv_buff = kzalloc(FW_DNLD_RX_BUF_SIZE, GFP_KERNEL);
-       if (!recv_buff)
+       if (!recv_buff) {
+               ret = -ENOMEM;
                goto cleanup;
+       }
 
        do {
                /* Send pseudo data to check winner status first */
@@ -1041,7 +1044,7 @@ static int mwifiex_prog_fw_w_helper(struct mwifiex_adapter *adapter,
                }
 
                /* If the send/receive fails or CRC occurs then retry */
-               while (retries--) {
+               while (--retries) {
                        u8 *buf = (u8 *)fwdata;
                        u32 len = FW_DATA_XMIT_SIZE;
 
@@ -1101,7 +1104,7 @@ static int mwifiex_prog_fw_w_helper(struct mwifiex_adapter *adapter,
                                continue;
                        }
 
-                       retries = USB8XXX_FW_MAX_RETRY;
+                       retries = USB8XXX_FW_MAX_RETRY + 1;
                        break;
                }
                fw_seqnum++;
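The retry arithmetic here is subtle: with the old form, "retries = USB8XXX_FW_MAX_RETRY; while (retries--)" runs the body MAX_RETRY times but leaves retries wrapped to UINT_MAX when the loop exhausts, so a later test for "retries == 0" never fires. Initializing to MAX_RETRY + 1 and pre-decrementing gives the same number of attempts and ends cleanly at zero. A standalone demonstration (the MAX_RETRY value of 3 is illustrative; the driver's actual USB8XXX_FW_MAX_RETRY is not shown in this diff):

    #include <stdio.h>

    #define MAX_RETRY 3     /* illustrative stand-in for USB8XXX_FW_MAX_RETRY */

    int main(void)
    {
            unsigned int retries;
            int attempts;

            /* old form: post-decrement wraps to UINT_MAX on exhaustion */
            retries = MAX_RETRY;
            attempts = 0;
            while (retries--)
                    attempts++;
            printf("old: %d attempts, retries ends at %u\n", attempts, retries);

            /* new form: pre-decrement from MAX_RETRY + 1 ends at 0 */
            retries = MAX_RETRY + 1;
            attempts = 0;
            while (--retries)
                    attempts++;
            printf("new: %d attempts, retries ends at %u\n", attempts, retries);
            return 0;
    }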
index c36fa4e..bf3f0a3 100644
@@ -7492,6 +7492,10 @@ static int rt2800_probe_hw_mode(struct rt2x00_dev *rt2x00dev)
        if (!rt2x00_is_usb(rt2x00dev))
                ieee80211_hw_set(rt2x00dev->hw, HOST_BROADCAST_PS_BUFFERING);
 
+       /* Set MFP if HW crypto is disabled. */
+       if (rt2800_hwcrypt_disabled(rt2x00dev))
+               ieee80211_hw_set(rt2x00dev->hw, MFP_CAPABLE);
+
        SET_IEEE80211_DEV(rt2x00dev->hw, rt2x00dev->dev);
        SET_IEEE80211_PERM_ADDR(rt2x00dev->hw,
                                rt2800_eeprom_addr(rt2x00dev,
index ba242d0..e895a84 100644
@@ -1018,6 +1018,8 @@ static int rtl8180_init_rx_ring(struct ieee80211_hw *dev)
                dma_addr_t *mapping;
                entry = priv->rx_ring + priv->rx_ring_sz*i;
                if (!skb) {
+                       pci_free_consistent(priv->pdev, priv->rx_ring_sz * 32,
+                                       priv->rx_ring, priv->rx_ring_dma);
                        wiphy_err(dev->wiphy, "Cannot allocate RX skb\n");
                        return -ENOMEM;
                }
@@ -1028,6 +1030,8 @@ static int rtl8180_init_rx_ring(struct ieee80211_hw *dev)
 
                if (pci_dma_mapping_error(priv->pdev, *mapping)) {
                        kfree_skb(skb);
+                       pci_free_consistent(priv->pdev, priv->rx_ring_sz * 32,
+                                       priv->rx_ring, priv->rx_ring_dma);
                        wiphy_err(dev->wiphy, "Cannot map DMA for RX skb\n");
                        return -ENOMEM;
                }
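Both failure branches in rtl8180_init_rx_ring() now free the coherent RX descriptor ring before returning -ENOMEM, instead of leaking it. The size expression mirrors the allocation; schematically (the alloc-side call shape is inferred from the matching free, with 32 assumed to be the RX descriptor count used at allocation time):

    /*
     * priv->rx_ring = pci_alloc_consistent(priv->pdev,
     *                                      priv->rx_ring_sz * 32,
     *                                      &priv->rx_ring_dma);
     * ...
     * pci_free_consistent(priv->pdev, priv->rx_ring_sz * 32,
     *                     priv->rx_ring, priv->rx_ring_dma);
     */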
index db8433a..f2ce8c9 100644
@@ -1,7 +1,7 @@
 /*
  * RTL8XXXU mac80211 USB driver
  *
- * Copyright (c) 2014 - 2015 Jes Sorensen <Jes.Sorensen@redhat.com>
+ * Copyright (c) 2014 - 2016 Jes Sorensen <Jes.Sorensen@redhat.com>
  *
  * Portions, notably calibration code:
  * Copyright(c) 2007 - 2011 Realtek Corporation. All rights reserved.
@@ -128,7 +128,7 @@ static struct ieee80211_supported_band rtl8xxxu_supported_band = {
        .n_bitrates = ARRAY_SIZE(rtl8xxxu_rates),
 };
 
-static struct rtl8xxxu_reg8val rtl8723a_mac_init_table[] = {
+static struct rtl8xxxu_reg8val rtl8xxxu_gen1_mac_init_table[] = {
        {0x420, 0x80}, {0x423, 0x00}, {0x430, 0x00}, {0x431, 0x00},
        {0x432, 0x00}, {0x433, 0x01}, {0x434, 0x04}, {0x435, 0x05},
        {0x436, 0x06}, {0x437, 0x07}, {0x438, 0x00}, {0x439, 0x00},
@@ -184,6 +184,104 @@ static struct rtl8xxxu_reg8val rtl8723b_mac_init_table[] = {
        {0xffff, 0xff},
 };
 
+static struct rtl8xxxu_reg8val rtl8192e_mac_init_table[] = {
+       {0x011, 0xeb}, {0x012, 0x07}, {0x014, 0x75}, {0x303, 0xa7},
+       {0x428, 0x0a}, {0x429, 0x10}, {0x430, 0x00}, {0x431, 0x00},
+       {0x432, 0x00}, {0x433, 0x01}, {0x434, 0x04}, {0x435, 0x05},
+       {0x436, 0x07}, {0x437, 0x08}, {0x43c, 0x04}, {0x43d, 0x05},
+       {0x43e, 0x07}, {0x43f, 0x08}, {0x440, 0x5d}, {0x441, 0x01},
+       {0x442, 0x00}, {0x444, 0x10}, {0x445, 0x00}, {0x446, 0x00},
+       {0x447, 0x00}, {0x448, 0x00}, {0x449, 0xf0}, {0x44a, 0x0f},
+       {0x44b, 0x3e}, {0x44c, 0x10}, {0x44d, 0x00}, {0x44e, 0x00},
+       {0x44f, 0x00}, {0x450, 0x00}, {0x451, 0xf0}, {0x452, 0x0f},
+       {0x453, 0x00}, {0x456, 0x5e}, {0x460, 0x66}, {0x461, 0x66},
+       {0x4c8, 0xff}, {0x4c9, 0x08}, {0x4cc, 0xff}, {0x4cd, 0xff},
+       {0x4ce, 0x01}, {0x500, 0x26}, {0x501, 0xa2}, {0x502, 0x2f},
+       {0x503, 0x00}, {0x504, 0x28}, {0x505, 0xa3}, {0x506, 0x5e},
+       {0x507, 0x00}, {0x508, 0x2b}, {0x509, 0xa4}, {0x50a, 0x5e},
+       {0x50b, 0x00}, {0x50c, 0x4f}, {0x50d, 0xa4}, {0x50e, 0x00},
+       {0x50f, 0x00}, {0x512, 0x1c}, {0x514, 0x0a}, {0x516, 0x0a},
+       {0x525, 0x4f}, {0x540, 0x12}, {0x541, 0x64}, {0x550, 0x10},
+       {0x551, 0x10}, {0x559, 0x02}, {0x55c, 0x50}, {0x55d, 0xff},
+       {0x605, 0x30}, {0x608, 0x0e}, {0x609, 0x2a}, {0x620, 0xff},
+       {0x621, 0xff}, {0x622, 0xff}, {0x623, 0xff}, {0x624, 0xff},
+       {0x625, 0xff}, {0x626, 0xff}, {0x627, 0xff}, {0x638, 0x50},
+       {0x63c, 0x0a}, {0x63d, 0x0a}, {0x63e, 0x0e}, {0x63f, 0x0e},
+       {0x640, 0x40}, {0x642, 0x40}, {0x643, 0x00}, {0x652, 0xc8},
+       {0x66e, 0x05}, {0x700, 0x21}, {0x701, 0x43}, {0x702, 0x65},
+       {0x703, 0x87}, {0x708, 0x21}, {0x709, 0x43}, {0x70a, 0x65},
+       {0x70b, 0x87},
+       {0xffff, 0xff},
+};
+
+#ifdef CONFIG_RTL8XXXU_UNTESTED
+static struct rtl8xxxu_power_base rtl8188r_power_base = {
+       .reg_0e00 = 0x06080808,
+       .reg_0e04 = 0x00040406,
+       .reg_0e08 = 0x00000000,
+       .reg_086c = 0x00000000,
+
+       .reg_0e10 = 0x04060608,
+       .reg_0e14 = 0x00020204,
+       .reg_0e18 = 0x04060608,
+       .reg_0e1c = 0x00020204,
+
+       .reg_0830 = 0x06080808,
+       .reg_0834 = 0x00040406,
+       .reg_0838 = 0x00000000,
+       .reg_086c_2 = 0x00000000,
+
+       .reg_083c = 0x04060608,
+       .reg_0848 = 0x00020204,
+       .reg_084c = 0x04060608,
+       .reg_0868 = 0x00020204,
+};
+
+static struct rtl8xxxu_power_base rtl8192c_power_base = {
+       .reg_0e00 = 0x07090c0c,
+       .reg_0e04 = 0x01020405,
+       .reg_0e08 = 0x00000000,
+       .reg_086c = 0x00000000,
+
+       .reg_0e10 = 0x0b0c0c0e,
+       .reg_0e14 = 0x01030506,
+       .reg_0e18 = 0x0b0c0d0e,
+       .reg_0e1c = 0x01030509,
+
+       .reg_0830 = 0x07090c0c,
+       .reg_0834 = 0x01020405,
+       .reg_0838 = 0x00000000,
+       .reg_086c_2 = 0x00000000,
+
+       .reg_083c = 0x0b0c0d0e,
+       .reg_0848 = 0x01030509,
+       .reg_084c = 0x0b0c0d0e,
+       .reg_0868 = 0x01030509,
+};
+#endif
+
+static struct rtl8xxxu_power_base rtl8723a_power_base = {
+       .reg_0e00 = 0x0a0c0c0c,
+       .reg_0e04 = 0x02040608,
+       .reg_0e08 = 0x00000000,
+       .reg_086c = 0x00000000,
+
+       .reg_0e10 = 0x0a0c0d0e,
+       .reg_0e14 = 0x02040608,
+       .reg_0e18 = 0x0a0c0d0e,
+       .reg_0e1c = 0x02040608,
+
+       .reg_0830 = 0x0a0c0c0c,
+       .reg_0834 = 0x02040608,
+       .reg_0838 = 0x00000000,
+       .reg_086c_2 = 0x00000000,
+
+       .reg_083c = 0x0a0c0d0e,
+       .reg_0848 = 0x02040608,
+       .reg_084c = 0x0a0c0d0e,
+       .reg_0868 = 0x02040608,
+};
+
 static struct rtl8xxxu_reg32val rtl8723a_phy_1t_init_table[] = {
        {0x800, 0x80040000}, {0x804, 0x00000003},
        {0x808, 0x0000fc00}, {0x80c, 0x0000000a},
@@ -580,6 +678,138 @@ static struct rtl8xxxu_reg32val rtl8188ru_phy_1t_highpa_table[] = {
        {0xffff, 0xffffffff},
 };
 
+static struct rtl8xxxu_reg32val rtl8192eu_phy_init_table[] = {
+       {0x800, 0x80040000}, {0x804, 0x00000003},
+       {0x808, 0x0000fc00}, {0x80c, 0x0000000a},
+       {0x810, 0x10001331}, {0x814, 0x020c3d10},
+       {0x818, 0x02220385}, {0x81c, 0x00000000},
+       {0x820, 0x01000100}, {0x824, 0x00390204},
+       {0x828, 0x01000100}, {0x82c, 0x00390204},
+       {0x830, 0x32323232}, {0x834, 0x30303030},
+       {0x838, 0x30303030}, {0x83c, 0x30303030},
+       {0x840, 0x00010000}, {0x844, 0x00010000},
+       {0x848, 0x28282828}, {0x84c, 0x28282828},
+       {0x850, 0x00000000}, {0x854, 0x00000000},
+       {0x858, 0x009a009a}, {0x85c, 0x01000014},
+       {0x860, 0x66f60000}, {0x864, 0x061f0000},
+       {0x868, 0x30303030}, {0x86c, 0x30303030},
+       {0x870, 0x00000000}, {0x874, 0x55004200},
+       {0x878, 0x08080808}, {0x87c, 0x00000000},
+       {0x880, 0xb0000c1c}, {0x884, 0x00000001},
+       {0x888, 0x00000000}, {0x88c, 0xcc0000c0},
+       {0x890, 0x00000800}, {0x894, 0xfffffffe},
+       {0x898, 0x40302010}, {0x900, 0x00000000},
+       {0x904, 0x00000023}, {0x908, 0x00000000},
+       {0x90c, 0x81121313}, {0x910, 0x806c0001},
+       {0x914, 0x00000001}, {0x918, 0x00000000},
+       {0x91c, 0x00010000}, {0x924, 0x00000001},
+       {0x928, 0x00000000}, {0x92c, 0x00000000},
+       {0x930, 0x00000000}, {0x934, 0x00000000},
+       {0x938, 0x00000000}, {0x93c, 0x00000000},
+       {0x940, 0x00000000}, {0x944, 0x00000000},
+       {0x94c, 0x00000008}, {0xa00, 0x00d0c7c8},
+       {0xa04, 0x81ff000c}, {0xa08, 0x8c838300},
+       {0xa0c, 0x2e68120f}, {0xa10, 0x95009b78},
+       {0xa14, 0x1114d028}, {0xa18, 0x00881117},
+       {0xa1c, 0x89140f00}, {0xa20, 0x1a1b0000},
+       {0xa24, 0x090e1317}, {0xa28, 0x00000204},
+       {0xa2c, 0x00d30000}, {0xa70, 0x101fff00},
+       {0xa74, 0x00000007}, {0xa78, 0x00000900},
+       {0xa7c, 0x225b0606}, {0xa80, 0x218075b1},
+       {0xb38, 0x00000000}, {0xc00, 0x48071d40},
+       {0xc04, 0x03a05633}, {0xc08, 0x000000e4},
+       {0xc0c, 0x6c6c6c6c}, {0xc10, 0x08800000},
+       {0xc14, 0x40000100}, {0xc18, 0x08800000},
+       {0xc1c, 0x40000100}, {0xc20, 0x00000000},
+       {0xc24, 0x00000000}, {0xc28, 0x00000000},
+       {0xc2c, 0x00000000}, {0xc30, 0x69e9ac47},
+       {0xc34, 0x469652af}, {0xc38, 0x49795994},
+       {0xc3c, 0x0a97971c}, {0xc40, 0x1f7c403f},
+       {0xc44, 0x000100b7}, {0xc48, 0xec020107},
+       {0xc4c, 0x007f037f},
+#ifdef EXT_PA_8192EU
+       /* External PA or external LNA */
+       {0xc50, 0x00340220},
+#else
+       {0xc50, 0x00340020},
+#endif
+       {0xc54, 0x0080801f},
+#ifdef EXT_PA_8192EU
+       /* External PA or external LNA */
+       {0xc58, 0x00000220},
+#else
+       {0xc58, 0x00000020},
+#endif
+       {0xc5c, 0x00248492}, {0xc60, 0x00000000},
+       {0xc64, 0x7112848b}, {0xc68, 0x47c00bff},
+       {0xc6c, 0x00000036}, {0xc70, 0x00000600},
+       {0xc74, 0x02013169}, {0xc78, 0x0000001f},
+       {0xc7c, 0x00b91612},
+#ifdef EXT_PA_8192EU
+       /* External PA or external LNA */
+       {0xc80, 0x2d4000b5},
+#else
+       {0xc80, 0x40000100},
+#endif
+       {0xc84, 0x21f60000},
+#ifdef EXT_PA_8192EU
+       /* External PA or external LNA */
+       {0xc88, 0x2d4000b5},
+#else
+       {0xc88, 0x40000100},
+#endif
+       {0xc8c, 0xa0e40000}, {0xc90, 0x00121820},
+       {0xc94, 0x00000000}, {0xc98, 0x00121820},
+       {0xc9c, 0x00007f7f}, {0xca0, 0x00000000},
+       {0xca4, 0x000300a0}, {0xca8, 0x00000000},
+       {0xcac, 0x00000000}, {0xcb0, 0x00000000},
+       {0xcb4, 0x00000000}, {0xcb8, 0x00000000},
+       {0xcbc, 0x28000000}, {0xcc0, 0x00000000},
+       {0xcc4, 0x00000000}, {0xcc8, 0x00000000},
+       {0xccc, 0x00000000}, {0xcd0, 0x00000000},
+       {0xcd4, 0x00000000}, {0xcd8, 0x64b22427},
+       {0xcdc, 0x00766932}, {0xce0, 0x00222222},
+       {0xce4, 0x00040000}, {0xce8, 0x77644302},
+       {0xcec, 0x2f97d40c}, {0xd00, 0x00080740},
+       {0xd04, 0x00020403}, {0xd08, 0x0000907f},
+       {0xd0c, 0x20010201}, {0xd10, 0xa0633333},
+       {0xd14, 0x3333bc43}, {0xd18, 0x7a8f5b6b},
+       {0xd1c, 0x0000007f}, {0xd2c, 0xcc979975},
+       {0xd30, 0x00000000}, {0xd34, 0x80608000},
+       {0xd38, 0x00000000}, {0xd3c, 0x00127353},
+       {0xd40, 0x00000000}, {0xd44, 0x00000000},
+       {0xd48, 0x00000000}, {0xd4c, 0x00000000},
+       {0xd50, 0x6437140a}, {0xd54, 0x00000000},
+       {0xd58, 0x00000282}, {0xd5c, 0x30032064},
+       {0xd60, 0x4653de68}, {0xd64, 0x04518a3c},
+       {0xd68, 0x00002101}, {0xd6c, 0x2a201c16},
+       {0xd70, 0x1812362e}, {0xd74, 0x322c2220},
+       {0xd78, 0x000e3c24}, {0xd80, 0x01081008},
+       {0xd84, 0x00000800}, {0xd88, 0xf0b50000},
+       {0xe00, 0x30303030}, {0xe04, 0x30303030},
+       {0xe08, 0x03903030}, {0xe10, 0x30303030},
+       {0xe14, 0x30303030}, {0xe18, 0x30303030},
+       {0xe1c, 0x30303030}, {0xe28, 0x00000000},
+       {0xe30, 0x1000dc1f}, {0xe34, 0x10008c1f},
+       {0xe38, 0x02140102}, {0xe3c, 0x681604c2},
+       {0xe40, 0x01007c00}, {0xe44, 0x01004800},
+       {0xe48, 0xfb000000}, {0xe4c, 0x000028d1},
+       {0xe50, 0x1000dc1f}, {0xe54, 0x10008c1f},
+       {0xe58, 0x02140102}, {0xe5c, 0x28160d05},
+       {0xe60, 0x00000008}, {0xe68, 0x0fc05656},
+       {0xe6c, 0x03c09696}, {0xe70, 0x03c09696},
+       {0xe74, 0x0c005656}, {0xe78, 0x0c005656},
+       {0xe7c, 0x0c005656}, {0xe80, 0x0c005656},
+       {0xe84, 0x03c09696}, {0xe88, 0x0c005656},
+       {0xe8c, 0x03c09696}, {0xed0, 0x03c09696},
+       {0xed4, 0x03c09696}, {0xed8, 0x03c09696},
+       {0xedc, 0x0000d6d6}, {0xee0, 0x0000d6d6},
+       {0xeec, 0x0fc01616}, {0xee4, 0xb0000c1c},
+       {0xee8, 0x00000001}, {0xf14, 0x00000003},
+       {0xf4c, 0x00000000}, {0xf00, 0x00000300},
+       {0xffff, 0xffffffff},
+};
+
 static struct rtl8xxxu_reg32val rtl8xxx_agc_standard_table[] = {
        {0xc78, 0x7b000001}, {0xc78, 0x7b010001},
        {0xc78, 0x7b020001}, {0xc78, 0x7b030001},
@@ -819,6 +1049,144 @@ static struct rtl8xxxu_reg32val rtl8xxx_agc_8723bu_table[] = {
        {0xffff, 0xffffffff}
 };
 
+static struct rtl8xxxu_reg32val rtl8xxx_agc_8192eu_std_table[] = {
+       {0xc78, 0xfb000001}, {0xc78, 0xfb010001},
+       {0xc78, 0xfb020001}, {0xc78, 0xfb030001},
+       {0xc78, 0xfb040001}, {0xc78, 0xfb050001},
+       {0xc78, 0xfa060001}, {0xc78, 0xf9070001},
+       {0xc78, 0xf8080001}, {0xc78, 0xf7090001},
+       {0xc78, 0xf60a0001}, {0xc78, 0xf50b0001},
+       {0xc78, 0xf40c0001}, {0xc78, 0xf30d0001},
+       {0xc78, 0xf20e0001}, {0xc78, 0xf10f0001},
+       {0xc78, 0xf0100001}, {0xc78, 0xef110001},
+       {0xc78, 0xee120001}, {0xc78, 0xed130001},
+       {0xc78, 0xec140001}, {0xc78, 0xeb150001},
+       {0xc78, 0xea160001}, {0xc78, 0xe9170001},
+       {0xc78, 0xe8180001}, {0xc78, 0xe7190001},
+       {0xc78, 0xc81a0001}, {0xc78, 0xc71b0001},
+       {0xc78, 0xc61c0001}, {0xc78, 0x071d0001},
+       {0xc78, 0x061e0001}, {0xc78, 0x051f0001},
+       {0xc78, 0x04200001}, {0xc78, 0x03210001},
+       {0xc78, 0xaa220001}, {0xc78, 0xa9230001},
+       {0xc78, 0xa8240001}, {0xc78, 0xa7250001},
+       {0xc78, 0xa6260001}, {0xc78, 0x85270001},
+       {0xc78, 0x84280001}, {0xc78, 0x83290001},
+       {0xc78, 0x252a0001}, {0xc78, 0x242b0001},
+       {0xc78, 0x232c0001}, {0xc78, 0x222d0001},
+       {0xc78, 0x672e0001}, {0xc78, 0x662f0001},
+       {0xc78, 0x65300001}, {0xc78, 0x64310001},
+       {0xc78, 0x63320001}, {0xc78, 0x62330001},
+       {0xc78, 0x61340001}, {0xc78, 0x45350001},
+       {0xc78, 0x44360001}, {0xc78, 0x43370001},
+       {0xc78, 0x42380001}, {0xc78, 0x41390001},
+       {0xc78, 0x403a0001}, {0xc78, 0x403b0001},
+       {0xc78, 0x403c0001}, {0xc78, 0x403d0001},
+       {0xc78, 0x403e0001}, {0xc78, 0x403f0001},
+       {0xc78, 0xfb400001}, {0xc78, 0xfb410001},
+       {0xc78, 0xfb420001}, {0xc78, 0xfb430001},
+       {0xc78, 0xfb440001}, {0xc78, 0xfb450001},
+       {0xc78, 0xfa460001}, {0xc78, 0xf9470001},
+       {0xc78, 0xf8480001}, {0xc78, 0xf7490001},
+       {0xc78, 0xf64a0001}, {0xc78, 0xf54b0001},
+       {0xc78, 0xf44c0001}, {0xc78, 0xf34d0001},
+       {0xc78, 0xf24e0001}, {0xc78, 0xf14f0001},
+       {0xc78, 0xf0500001}, {0xc78, 0xef510001},
+       {0xc78, 0xee520001}, {0xc78, 0xed530001},
+       {0xc78, 0xec540001}, {0xc78, 0xeb550001},
+       {0xc78, 0xea560001}, {0xc78, 0xe9570001},
+       {0xc78, 0xe8580001}, {0xc78, 0xe7590001},
+       {0xc78, 0xe65a0001}, {0xc78, 0xe55b0001},
+       {0xc78, 0xe45c0001}, {0xc78, 0xe35d0001},
+       {0xc78, 0xe25e0001}, {0xc78, 0xe15f0001},
+       {0xc78, 0x8a600001}, {0xc78, 0x89610001},
+       {0xc78, 0x88620001}, {0xc78, 0x87630001},
+       {0xc78, 0x86640001}, {0xc78, 0x85650001},
+       {0xc78, 0x84660001}, {0xc78, 0x83670001},
+       {0xc78, 0x82680001}, {0xc78, 0x6b690001},
+       {0xc78, 0x6a6a0001}, {0xc78, 0x696b0001},
+       {0xc78, 0x686c0001}, {0xc78, 0x676d0001},
+       {0xc78, 0x666e0001}, {0xc78, 0x656f0001},
+       {0xc78, 0x64700001}, {0xc78, 0x63710001},
+       {0xc78, 0x62720001}, {0xc78, 0x61730001},
+       {0xc78, 0x49740001}, {0xc78, 0x48750001},
+       {0xc78, 0x47760001}, {0xc78, 0x46770001},
+       {0xc78, 0x45780001}, {0xc78, 0x44790001},
+       {0xc78, 0x437a0001}, {0xc78, 0x427b0001},
+       {0xc78, 0x417c0001}, {0xc78, 0x407d0001},
+       {0xc78, 0x407e0001}, {0xc78, 0x407f0001},
+       {0xc50, 0x00040022}, {0xc50, 0x00040020},
+       {0xffff, 0xffffffff}
+};
+
+static struct rtl8xxxu_reg32val rtl8xxx_agc_8192eu_highpa_table[] = {
+       {0xc78, 0xfa000001}, {0xc78, 0xf9010001},
+       {0xc78, 0xf8020001}, {0xc78, 0xf7030001},
+       {0xc78, 0xf6040001}, {0xc78, 0xf5050001},
+       {0xc78, 0xf4060001}, {0xc78, 0xf3070001},
+       {0xc78, 0xf2080001}, {0xc78, 0xf1090001},
+       {0xc78, 0xf00a0001}, {0xc78, 0xef0b0001},
+       {0xc78, 0xee0c0001}, {0xc78, 0xed0d0001},
+       {0xc78, 0xec0e0001}, {0xc78, 0xeb0f0001},
+       {0xc78, 0xea100001}, {0xc78, 0xe9110001},
+       {0xc78, 0xe8120001}, {0xc78, 0xe7130001},
+       {0xc78, 0xe6140001}, {0xc78, 0xe5150001},
+       {0xc78, 0xe4160001}, {0xc78, 0xe3170001},
+       {0xc78, 0xe2180001}, {0xc78, 0xe1190001},
+       {0xc78, 0x8a1a0001}, {0xc78, 0x891b0001},
+       {0xc78, 0x881c0001}, {0xc78, 0x871d0001},
+       {0xc78, 0x861e0001}, {0xc78, 0x851f0001},
+       {0xc78, 0x84200001}, {0xc78, 0x83210001},
+       {0xc78, 0x82220001}, {0xc78, 0x6a230001},
+       {0xc78, 0x69240001}, {0xc78, 0x68250001},
+       {0xc78, 0x67260001}, {0xc78, 0x66270001},
+       {0xc78, 0x65280001}, {0xc78, 0x64290001},
+       {0xc78, 0x632a0001}, {0xc78, 0x622b0001},
+       {0xc78, 0x612c0001}, {0xc78, 0x602d0001},
+       {0xc78, 0x472e0001}, {0xc78, 0x462f0001},
+       {0xc78, 0x45300001}, {0xc78, 0x44310001},
+       {0xc78, 0x43320001}, {0xc78, 0x42330001},
+       {0xc78, 0x41340001}, {0xc78, 0x40350001},
+       {0xc78, 0x40360001}, {0xc78, 0x40370001},
+       {0xc78, 0x40380001}, {0xc78, 0x40390001},
+       {0xc78, 0x403a0001}, {0xc78, 0x403b0001},
+       {0xc78, 0x403c0001}, {0xc78, 0x403d0001},
+       {0xc78, 0x403e0001}, {0xc78, 0x403f0001},
+       {0xc78, 0xfa400001}, {0xc78, 0xf9410001},
+       {0xc78, 0xf8420001}, {0xc78, 0xf7430001},
+       {0xc78, 0xf6440001}, {0xc78, 0xf5450001},
+       {0xc78, 0xf4460001}, {0xc78, 0xf3470001},
+       {0xc78, 0xf2480001}, {0xc78, 0xf1490001},
+       {0xc78, 0xf04a0001}, {0xc78, 0xef4b0001},
+       {0xc78, 0xee4c0001}, {0xc78, 0xed4d0001},
+       {0xc78, 0xec4e0001}, {0xc78, 0xeb4f0001},
+       {0xc78, 0xea500001}, {0xc78, 0xe9510001},
+       {0xc78, 0xe8520001}, {0xc78, 0xe7530001},
+       {0xc78, 0xe6540001}, {0xc78, 0xe5550001},
+       {0xc78, 0xe4560001}, {0xc78, 0xe3570001},
+       {0xc78, 0xe2580001}, {0xc78, 0xe1590001},
+       {0xc78, 0x8a5a0001}, {0xc78, 0x895b0001},
+       {0xc78, 0x885c0001}, {0xc78, 0x875d0001},
+       {0xc78, 0x865e0001}, {0xc78, 0x855f0001},
+       {0xc78, 0x84600001}, {0xc78, 0x83610001},
+       {0xc78, 0x82620001}, {0xc78, 0x6a630001},
+       {0xc78, 0x69640001}, {0xc78, 0x68650001},
+       {0xc78, 0x67660001}, {0xc78, 0x66670001},
+       {0xc78, 0x65680001}, {0xc78, 0x64690001},
+       {0xc78, 0x636a0001}, {0xc78, 0x626b0001},
+       {0xc78, 0x616c0001}, {0xc78, 0x606d0001},
+       {0xc78, 0x476e0001}, {0xc78, 0x466f0001},
+       {0xc78, 0x45700001}, {0xc78, 0x44710001},
+       {0xc78, 0x43720001}, {0xc78, 0x42730001},
+       {0xc78, 0x41740001}, {0xc78, 0x40750001},
+       {0xc78, 0x40760001}, {0xc78, 0x40770001},
+       {0xc78, 0x40780001}, {0xc78, 0x40790001},
+       {0xc78, 0x407a0001}, {0xc78, 0x407b0001},
+       {0xc78, 0x407c0001}, {0xc78, 0x407d0001},
+       {0xc78, 0x407e0001}, {0xc78, 0x407f0001},
+       {0xc50, 0x00040222}, {0xc50, 0x00040220},
+       {0xffff, 0xffffffff}
+};
+
 static struct rtl8xxxu_rfregval rtl8723au_radioa_1t_init_table[] = {
        {0x00, 0x00030159}, {0x01, 0x00031284},
        {0x02, 0x00098000}, {0x03, 0x00039c63},
@@ -963,6 +1331,7 @@ static struct rtl8xxxu_rfregval rtl8723bu_radioa_1t_init_table[] = {
        {0xff, 0xffffffff}
 };
 
+#ifdef CONFIG_RTL8XXXU_UNTESTED
 static struct rtl8xxxu_rfregval rtl8192cu_radioa_2t_init_table[] = {
        {0x00, 0x00030159}, {0x01, 0x00031284},
        {0x02, 0x00098000}, {0x03, 0x00018c63},
@@ -1211,6 +1580,153 @@ static struct rtl8xxxu_rfregval rtl8188ru_radioa_1t_highpa_table[] = {
        {0x00, 0x00030159},
        {0xff, 0xffffffff}
 };
+#endif
+
+static struct rtl8xxxu_rfregval rtl8192eu_radioa_init_table[] = {
+       {0x7f, 0x00000082}, {0x81, 0x0003fc00},
+       {0x00, 0x00030000}, {0x08, 0x00008400},
+       {0x18, 0x00000407}, {0x19, 0x00000012},
+       {0x1b, 0x00000064}, {0x1e, 0x00080009},
+       {0x1f, 0x00000880}, {0x2f, 0x0001a060},
+       {0x3f, 0x00000000}, {0x42, 0x000060c0},
+       {0x57, 0x000d0000}, {0x58, 0x000be180},
+       {0x67, 0x00001552}, {0x83, 0x00000000},
+       {0xb0, 0x000ff9f1}, {0xb1, 0x00055418},
+       {0xb2, 0x0008cc00}, {0xb4, 0x00043083},
+       {0xb5, 0x00008166}, {0xb6, 0x0000803e},
+       {0xb7, 0x0001c69f}, {0xb8, 0x0000407f},
+       {0xb9, 0x00080001}, {0xba, 0x00040001},
+       {0xbb, 0x00000400}, {0xbf, 0x000c0000},
+       {0xc2, 0x00002400}, {0xc3, 0x00000009},
+       {0xc4, 0x00040c91}, {0xc5, 0x00099999},
+       {0xc6, 0x000000a3}, {0xc7, 0x00088820},
+       {0xc8, 0x00076c06}, {0xc9, 0x00000000},
+       {0xca, 0x00080000}, {0xdf, 0x00000180},
+       {0xef, 0x000001a0}, {0x51, 0x00069545},
+       {0x52, 0x0007e45e}, {0x53, 0x00000071},
+       {0x56, 0x00051ff3}, {0x35, 0x000000a8},
+       {0x35, 0x000001e2}, {0x35, 0x000002a8},
+       {0x36, 0x00001c24}, {0x36, 0x00009c24},
+       {0x36, 0x00011c24}, {0x36, 0x00019c24},
+       {0x18, 0x00000c07}, {0x5a, 0x00048000},
+       {0x19, 0x000739d0},
+#ifdef EXT_PA_8192EU
+       /* External PA or external LNA */
+       {0x34, 0x0000a093}, {0x34, 0x0000908f},
+       {0x34, 0x0000808c}, {0x34, 0x0000704d},
+       {0x34, 0x0000604a}, {0x34, 0x00005047},
+       {0x34, 0x0000400a}, {0x34, 0x00003007},
+       {0x34, 0x00002004}, {0x34, 0x00001001},
+       {0x34, 0x00000000},
+#else
+       /* Regular */
+       {0x34, 0x0000add7}, {0x34, 0x00009dd4},
+       {0x34, 0x00008dd1}, {0x34, 0x00007dce},
+       {0x34, 0x00006dcb}, {0x34, 0x00005dc8},
+       {0x34, 0x00004dc5}, {0x34, 0x000034cc},
+       {0x34, 0x0000244f}, {0x34, 0x0000144c},
+       {0x34, 0x00000014},
+#endif
+       {0x00, 0x00030159},
+       {0x84, 0x00068180},
+       {0x86, 0x0000014e},
+       {0x87, 0x00048e00},
+       {0x8e, 0x00065540},
+       {0x8f, 0x00088000},
+       {0xef, 0x000020a0},
+#ifdef EXT_PA_8192EU
+       /* External PA or external LNA */
+       {0x3b, 0x000f07b0},
+#else
+       {0x3b, 0x000f02b0},
+#endif
+       {0x3b, 0x000ef7b0}, {0x3b, 0x000d4fb0},
+       {0x3b, 0x000cf060}, {0x3b, 0x000b0090},
+       {0x3b, 0x000a0080}, {0x3b, 0x00090080},
+       {0x3b, 0x0008f780},
+#ifdef EXT_PA_8192EU
+       /* External PA or external LNA */
+       {0x3b, 0x000787b0},
+#else
+       {0x3b, 0x00078730},
+#endif
+       {0x3b, 0x00060fb0}, {0x3b, 0x0005ffa0},
+       {0x3b, 0x00040620}, {0x3b, 0x00037090},
+       {0x3b, 0x00020080}, {0x3b, 0x0001f060},
+       {0x3b, 0x0000ffb0}, {0xef, 0x000000a0},
+       {0xfe, 0x00000000}, {0x18, 0x0000fc07},
+       {0xfe, 0x00000000}, {0xfe, 0x00000000},
+       {0xfe, 0x00000000}, {0xfe, 0x00000000},
+       {0x1e, 0x00000001}, {0x1f, 0x00080000},
+       {0x00, 0x00033e70},
+       {0xff, 0xffffffff}
+};
+
+static struct rtl8xxxu_rfregval rtl8192eu_radiob_init_table[] = {
+       {0x7f, 0x00000082}, {0x81, 0x0003fc00},
+       {0x00, 0x00030000}, {0x08, 0x00008400},
+       {0x18, 0x00000407}, {0x19, 0x00000012},
+       {0x1b, 0x00000064}, {0x1e, 0x00080009},
+       {0x1f, 0x00000880}, {0x2f, 0x0001a060},
+       {0x3f, 0x00000000}, {0x42, 0x000060c0},
+       {0x57, 0x000d0000}, {0x58, 0x000be180},
+       {0x67, 0x00001552}, {0x7f, 0x00000082},
+       {0x81, 0x0003f000}, {0x83, 0x00000000},
+       {0xdf, 0x00000180}, {0xef, 0x000001a0},
+       {0x51, 0x00069545}, {0x52, 0x0007e42e},
+       {0x53, 0x00000071}, {0x56, 0x00051ff3},
+       {0x35, 0x000000a8}, {0x35, 0x000001e0},
+       {0x35, 0x000002a8}, {0x36, 0x00001ca8},
+       {0x36, 0x00009c24}, {0x36, 0x00011c24},
+       {0x36, 0x00019c24}, {0x18, 0x00000c07},
+       {0x5a, 0x00048000}, {0x19, 0x000739d0},
+#ifdef EXT_PA_8192EU
+       /* External PA or external LNA */
+       {0x34, 0x0000a093}, {0x34, 0x0000908f},
+       {0x34, 0x0000808c}, {0x34, 0x0000704d},
+       {0x34, 0x0000604a}, {0x34, 0x00005047},
+       {0x34, 0x0000400a}, {0x34, 0x00003007},
+       {0x34, 0x00002004}, {0x34, 0x00001001},
+       {0x34, 0x00000000},
+#else
+       {0x34, 0x0000add7}, {0x34, 0x00009dd4},
+       {0x34, 0x00008dd1}, {0x34, 0x00007dce},
+       {0x34, 0x00006dcb}, {0x34, 0x00005dc8},
+       {0x34, 0x00004dc5}, {0x34, 0x000034cc},
+       {0x34, 0x0000244f}, {0x34, 0x0000144c},
+       {0x34, 0x00000014},
+#endif
+       {0x00, 0x00030159}, {0x84, 0x00068180},
+       {0x86, 0x000000ce}, {0x87, 0x00048a00},
+       {0x8e, 0x00065540}, {0x8f, 0x00088000},
+       {0xef, 0x000020a0},
+#ifdef EXT_PA_8192EU
+       /* External PA or external LNA */
+       {0x3b, 0x000f07b0},
+#else
+       {0x3b, 0x000f02b0},
+#endif
+
+       {0x3b, 0x000ef7b0}, {0x3b, 0x000d4fb0},
+       {0x3b, 0x000cf060}, {0x3b, 0x000b0090},
+       {0x3b, 0x000a0080}, {0x3b, 0x00090080},
+       {0x3b, 0x0008f780},
+#ifdef EXT_PA_8192EU
+       /* External PA or external LNA */
+       {0x3b, 0x000787b0},
+#else
+       {0x3b, 0x00078730},
+#endif
+       {0x3b, 0x00060fb0}, {0x3b, 0x0005ffa0},
+       {0x3b, 0x00040620}, {0x3b, 0x00037090},
+       {0x3b, 0x00020080}, {0x3b, 0x0001f060},
+       {0x3b, 0x0000ffb0}, {0xef, 0x000000a0},
+       {0x00, 0x00010159}, {0xfe, 0x00000000},
+       {0xfe, 0x00000000}, {0xfe, 0x00000000},
+       {0xfe, 0x00000000}, {0x1e, 0x00000001},
+       {0x1f, 0x00080000}, {0x00, 0x00033e70},
+       {0xff, 0xffffffff}
+};
 
 static struct rtl8xxxu_rfregs rtl8xxxu_rfregs[] = {
        {       /* RF_A */
@@ -1231,7 +1747,7 @@ static struct rtl8xxxu_rfregs rtl8xxxu_rfregs[] = {
        },
 };
 
-static const u32 rtl8723au_iqk_phy_iq_bb_reg[RTL8XXXU_BB_REGS] = {
+static const u32 rtl8xxxu_iqk_phy_iq_bb_reg[RTL8XXXU_BB_REGS] = {
        REG_OFDM0_XA_RX_IQ_IMBALANCE,
        REG_OFDM0_XB_RX_IQ_IMBALANCE,
        REG_OFDM0_ENERGY_CCA_THRES,
@@ -1450,7 +1966,7 @@ static int rtl8xxxu_write_rfreg(struct rtl8xxxu_priv *priv,
                                enum rtl8xxxu_rfpath path, u8 reg, u32 data)
 {
        int ret, retval;
-       u32 dataaddr;
+       u32 dataaddr, val32;
 
        if (rtl8xxxu_debug & RTL8XXXU_DEBUG_RFREG_WRITE)
                dev_info(&priv->udev->dev, "%s(%02x) = 0x%06x\n",
@@ -1459,6 +1975,12 @@ static int rtl8xxxu_write_rfreg(struct rtl8xxxu_priv *priv,
        data &= FPGA0_LSSI_PARM_DATA_MASK;
        dataaddr = (reg << FPGA0_LSSI_PARM_ADDR_SHIFT) | data;
 
+       if (priv->rtl_chip == RTL8192E) {
+               val32 = rtl8xxxu_read32(priv, REG_FPGA0_POWER_SAVE);
+               val32 &= ~0x20000;
+               rtl8xxxu_write32(priv, REG_FPGA0_POWER_SAVE, val32);
+       }
+
        /* Use XB for path B */
        ret = rtl8xxxu_write32(priv, rtl8xxxu_rfregs[path].lssiparm, dataaddr);
        if (ret != sizeof(dataaddr))
@@ -1468,6 +1990,12 @@ static int rtl8xxxu_write_rfreg(struct rtl8xxxu_priv *priv,
 
        udelay(1);
 
+       if (priv->rtl_chip == RTL8192E) {
+               val32 = rtl8xxxu_read32(priv, REG_FPGA0_POWER_SAVE);
+               val32 |= 0x20000;
+               rtl8xxxu_write32(priv, REG_FPGA0_POWER_SAVE, val32);
+       }
+
        return retval;
 }
 
@@ -1552,7 +2080,7 @@ static void rtl8723bu_write_btreg(struct rtl8xxxu_priv *priv, u8 reg, u8 data)
        rtl8723a_h2c_cmd(priv, &h2c, sizeof(h2c.bt_mp_oper));
 }
 
-static void rtl8723a_enable_rf(struct rtl8xxxu_priv *priv)
+static void rtl8xxxu_gen1_enable_rf(struct rtl8xxxu_priv *priv)
 {
        u8 val8;
        u32 val32;
@@ -1596,13 +2124,11 @@ static void rtl8723a_enable_rf(struct rtl8xxxu_priv *priv)
        rtl8xxxu_write8(priv, REG_TXPAUSE, 0x00);
 }
 
-static void rtl8723a_disable_rf(struct rtl8xxxu_priv *priv)
+static void rtl8xxxu_gen1_disable_rf(struct rtl8xxxu_priv *priv)
 {
        u8 sps0;
        u32 val32;
 
-       rtl8xxxu_write8(priv, REG_TXPAUSE, 0xff);
-
        sps0 = rtl8xxxu_read8(priv, REG_SPS0_CTRL);
 
        /* RF RX code for preamble power saving */
@@ -1676,7 +2202,10 @@ static int rtl8723a_channel_to_group(int channel)
        return group;
 }
 
-static int rtl8723b_channel_to_group(int channel)
+/*
+ * Valid for rtl8723bu and rtl8192eu
+ */
+static int rtl8xxxu_gen2_channel_to_group(int channel)
 {
        int group;
 
@@ -1694,7 +2223,7 @@ static int rtl8723b_channel_to_group(int channel)
        return group;
 }
 
-static void rtl8723au_config_channel(struct ieee80211_hw *hw)
+static void rtl8xxxu_gen1_config_channel(struct ieee80211_hw *hw)
 {
        struct rtl8xxxu_priv *priv = hw->priv;
        u32 val32, rsr;
@@ -1816,7 +2345,7 @@ static void rtl8723au_config_channel(struct ieee80211_hw *hw)
        }
 }
 
-static void rtl8723bu_config_channel(struct ieee80211_hw *hw)
+static void rtl8xxxu_gen2_config_channel(struct ieee80211_hw *hw)
 {
        struct rtl8xxxu_priv *priv = hw->priv;
        u32 val32, rsr;
@@ -1947,8 +2476,9 @@ static void rtl8723bu_config_channel(struct ieee80211_hw *hw)
 }
 
 static void
-rtl8723a_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40)
+rtl8xxxu_gen1_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40)
 {
+       struct rtl8xxxu_power_base *power_base = priv->power_base;
        u8 cck[RTL8723A_MAX_RF_PATHS], ofdm[RTL8723A_MAX_RF_PATHS];
        u8 ofdmbase[RTL8723A_MAX_RF_PATHS], mcsbase[RTL8723A_MAX_RF_PATHS];
        u32 val32, ofdm_a, ofdm_b, mcs_a, mcs_b;
@@ -1957,11 +2487,22 @@ rtl8723a_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40)
 
        group = rtl8723a_channel_to_group(channel);
 
-       cck[0] = priv->cck_tx_power_index_A[group];
-       cck[1] = priv->cck_tx_power_index_B[group];
+       cck[0] = priv->cck_tx_power_index_A[group] - 1;
+       cck[1] = priv->cck_tx_power_index_B[group] - 1;
+
+       if (priv->hi_pa) {
+               if (cck[0] > 0x20)
+                       cck[0] = 0x20;
+               if (cck[1] > 0x20)
+                       cck[1] = 0x20;
+       }
 
        ofdm[0] = priv->ht40_1s_tx_power_index_A[group];
        ofdm[1] = priv->ht40_1s_tx_power_index_B[group];
+       if (ofdm[0])
+               ofdm[0] -= 1;
+       if (ofdm[1])
+               ofdm[1] -= 1;
 
        ofdmbase[0] = ofdm[0] + priv->ofdm_tx_power_index_diff[group].a;
        ofdmbase[1] = ofdm[1] + priv->ofdm_tx_power_index_diff[group].b;
@@ -2017,27 +2558,39 @@ rtl8723a_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40)
                ofdmbase[0] << 16 | ofdmbase[0] << 24;
        ofdm_b = ofdmbase[1] | ofdmbase[1] << 8 |
                ofdmbase[1] << 16 | ofdmbase[1] << 24;
-       rtl8xxxu_write32(priv, REG_TX_AGC_A_RATE18_06, ofdm_a);
-       rtl8xxxu_write32(priv, REG_TX_AGC_B_RATE18_06, ofdm_b);
 
-       rtl8xxxu_write32(priv, REG_TX_AGC_A_RATE54_24, ofdm_a);
-       rtl8xxxu_write32(priv, REG_TX_AGC_B_RATE54_24, ofdm_b);
+       rtl8xxxu_write32(priv, REG_TX_AGC_A_RATE18_06,
+                        ofdm_a + power_base->reg_0e00);
+       rtl8xxxu_write32(priv, REG_TX_AGC_B_RATE18_06,
+                        ofdm_b + power_base->reg_0830);
+
+       rtl8xxxu_write32(priv, REG_TX_AGC_A_RATE54_24,
+                        ofdm_a + power_base->reg_0e04);
+       rtl8xxxu_write32(priv, REG_TX_AGC_B_RATE54_24,
+                        ofdm_b + power_base->reg_0834);
 
        mcs_a = mcsbase[0] | mcsbase[0] << 8 |
                mcsbase[0] << 16 | mcsbase[0] << 24;
        mcs_b = mcsbase[1] | mcsbase[1] << 8 |
                mcsbase[1] << 16 | mcsbase[1] << 24;
 
-       rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS03_MCS00, mcs_a);
-       rtl8xxxu_write32(priv, REG_TX_AGC_B_MCS03_MCS00, mcs_b);
+       rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS03_MCS00,
+                        mcs_a + power_base->reg_0e10);
+       rtl8xxxu_write32(priv, REG_TX_AGC_B_MCS03_MCS00,
+                        mcs_b + power_base->reg_083c);
 
-       rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS07_MCS04, mcs_a);
-       rtl8xxxu_write32(priv, REG_TX_AGC_B_MCS07_MCS04, mcs_b);
+       rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS07_MCS04,
+                        mcs_a + power_base->reg_0e14);
+       rtl8xxxu_write32(priv, REG_TX_AGC_B_MCS07_MCS04,
+                        mcs_b + power_base->reg_0848);
 
-       rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS11_MCS08, mcs_a);
-       rtl8xxxu_write32(priv, REG_TX_AGC_B_MCS11_MCS08, mcs_b);
+       rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS11_MCS08,
+                        mcs_a + power_base->reg_0e18);
+       rtl8xxxu_write32(priv, REG_TX_AGC_B_MCS11_MCS08,
+                        mcs_b + power_base->reg_084c);
 
-       rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS15_MCS12, mcs_a);
+       rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS15_MCS12,
+                        mcs_a + power_base->reg_0e1c);
        for (i = 0; i < 3; i++) {
                if (i != 2)
                        val8 = (mcsbase[0] > 8) ? (mcsbase[0] - 8) : 0;
@@ -2045,7 +2598,8 @@ rtl8723a_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40)
                        val8 = (mcsbase[0] > 6) ? (mcsbase[0] - 6) : 0;
                rtl8xxxu_write8(priv, REG_OFDM0_XC_TX_IQ_IMBALANCE + i, val8);
        }
-       rtl8xxxu_write32(priv, REG_TX_AGC_B_MCS15_MCS12, mcs_b);
+       rtl8xxxu_write32(priv, REG_TX_AGC_B_MCS15_MCS12,
+                        mcs_b + power_base->reg_0868);
        for (i = 0; i < 3; i++) {
                if (i != 2)
                        val8 = (mcsbase[1] > 8) ? (mcsbase[1] - 8) : 0;
@@ -2063,7 +2617,7 @@ rtl8723b_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40)
        int group, tx_idx;
 
        tx_idx = 0;
-       group = rtl8723b_channel_to_group(channel);
+       group = rtl8xxxu_gen2_channel_to_group(channel);
 
        cck = priv->cck_tx_power_index_B[group];
        val32 = rtl8xxxu_read32(priv, REG_TX_AGC_A_CCK1_MCS32);
@@ -2094,6 +2648,82 @@ rtl8723b_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40)
        rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS07_MCS04, mcs);
 }
 
+static void
+rtl8192e_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40)
+{
+       u32 val32, ofdm, mcs;
+       u8 cck, ofdmbase, mcsbase;
+       int group, tx_idx;
+
+       tx_idx = 0;
+       group = rtl8xxxu_gen2_channel_to_group(channel);
+
+       cck = priv->cck_tx_power_index_A[group];
+
+       val32 = rtl8xxxu_read32(priv, REG_TX_AGC_A_CCK1_MCS32);
+       val32 &= 0xffff00ff;
+       val32 |= (cck << 8);
+       rtl8xxxu_write32(priv, REG_TX_AGC_A_CCK1_MCS32, val32);
+
+       val32 = rtl8xxxu_read32(priv, REG_TX_AGC_B_CCK11_A_CCK2_11);
+       val32 &= 0xff;
+       val32 |= ((cck << 8) | (cck << 16) | (cck << 24));
+       rtl8xxxu_write32(priv, REG_TX_AGC_B_CCK11_A_CCK2_11, val32);
+
+       ofdmbase = priv->ht40_1s_tx_power_index_A[group];
+       ofdmbase += priv->ofdm_tx_power_diff[tx_idx].a;
+       ofdm = ofdmbase | ofdmbase << 8 | ofdmbase << 16 | ofdmbase << 24;
+
+       rtl8xxxu_write32(priv, REG_TX_AGC_A_RATE18_06, ofdm);
+       rtl8xxxu_write32(priv, REG_TX_AGC_A_RATE54_24, ofdm);
+
+       mcsbase = priv->ht40_1s_tx_power_index_A[group];
+       if (ht40)
+               mcsbase += priv->ht40_tx_power_diff[tx_idx++].a;
+       else
+               mcsbase += priv->ht20_tx_power_diff[tx_idx++].a;
+       mcs = mcsbase | mcsbase << 8 | mcsbase << 16 | mcsbase << 24;
+
+       rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS03_MCS00, mcs);
+       rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS07_MCS04, mcs);
+       rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS11_MCS08, mcs);
+       rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS15_MCS12, mcs);
+
+       if (priv->tx_paths > 1) {
+               cck = priv->cck_tx_power_index_B[group];
+
+               val32 = rtl8xxxu_read32(priv, REG_TX_AGC_B_CCK1_55_MCS32);
+               val32 &= 0xff;
+               val32 |= ((cck << 8) | (cck << 16) | (cck << 24));
+               rtl8xxxu_write32(priv, REG_TX_AGC_B_CCK1_55_MCS32, val32);
+
+               val32 = rtl8xxxu_read32(priv, REG_TX_AGC_B_CCK11_A_CCK2_11);
+               val32 &= 0xffffff00;
+               val32 |= cck;
+               rtl8xxxu_write32(priv, REG_TX_AGC_B_CCK11_A_CCK2_11, val32);
+
+               ofdmbase = priv->ht40_1s_tx_power_index_B[group];
+               ofdmbase += priv->ofdm_tx_power_diff[tx_idx].b;
+               ofdm = ofdmbase | ofdmbase << 8 |
+                       ofdmbase << 16 | ofdmbase << 24;
+
+               rtl8xxxu_write32(priv, REG_TX_AGC_B_RATE18_06, ofdm);
+               rtl8xxxu_write32(priv, REG_TX_AGC_B_RATE54_24, ofdm);
+
+               mcsbase = priv->ht40_1s_tx_power_index_B[group];
+               if (ht40)
+                       mcsbase += priv->ht40_tx_power_diff[tx_idx++].b;
+               else
+                       mcsbase += priv->ht20_tx_power_diff[tx_idx++].b;
+               mcs = mcsbase | mcsbase << 8 | mcsbase << 16 | mcsbase << 24;
+
+               rtl8xxxu_write32(priv, REG_TX_AGC_B_MCS03_MCS00, mcs);
+               rtl8xxxu_write32(priv, REG_TX_AGC_B_MCS07_MCS04, mcs);
+               rtl8xxxu_write32(priv, REG_TX_AGC_B_MCS11_MCS08, mcs);
+               rtl8xxxu_write32(priv, REG_TX_AGC_B_MCS15_MCS12, mcs);
+       }
+}
+
 static void rtl8xxxu_set_linktype(struct rtl8xxxu_priv *priv,
                                  enum nl80211_iftype linktype)
 {
@@ -2221,7 +2851,8 @@ static int rtl8xxxu_identify_chip(struct rtl8xxxu_priv *priv)
        } else if (val32 & SYS_CFG_TYPE_ID) {
                bonding = rtl8xxxu_read32(priv, REG_HPON_FSM);
                bonding &= HPON_FSM_BONDING_MASK;
-               if (priv->fops->has_s0s1) {
+               if (priv->fops->tx_desc_size ==
+                   sizeof(struct rtl8xxxu_txdesc40)) {
                        if (bonding == HPON_FSM_BONDING_1T2R) {
                                sprintf(priv->chip_name, "8191EU");
                                priv->rf_paths = 2;
@@ -2375,6 +3006,9 @@ static int rtl8723au_parse_efuse(struct rtl8xxxu_priv *priv)
                priv->has_xtalk = 1;
                priv->xtalk = priv->efuse_wifi.efuse8723.xtal_k & 0x3f;
        }
+
+       priv->power_base = &rtl8723a_power_base;
+
        dev_info(&priv->udev->dev, "Vendor: %.7s\n",
                 efuse->vendor_name);
        dev_info(&priv->udev->dev, "Product: %.41s\n",
@@ -2507,9 +3141,14 @@ static int rtl8192cu_parse_efuse(struct rtl8xxxu_priv *priv)
        dev_info(&priv->udev->dev, "Product: %.20s\n",
                 efuse->device_name);
 
+       priv->power_base = &rtl8192c_power_base;
+
        if (efuse->rf_regulatory & 0x20) {
                sprintf(priv->chip_name, "8188RU");
+               priv->rtl_chip = RTL8188R;
                priv->hi_pa = 1;
+               priv->no_pape = 1;
+               priv->power_base = &rtl8188r_power_base;
        }
 
        if (rtl8xxxu_debug & RTL8XXXU_DEBUG_EFUSE) {
@@ -2541,6 +3180,43 @@ static int rtl8192eu_parse_efuse(struct rtl8xxxu_priv *priv)
 
        ether_addr_copy(priv->mac_addr, efuse->mac_addr);
 
+       memcpy(priv->cck_tx_power_index_A, efuse->tx_power_index_A.cck_base,
+              sizeof(efuse->tx_power_index_A.cck_base));
+       memcpy(priv->cck_tx_power_index_B, efuse->tx_power_index_B.cck_base,
+              sizeof(efuse->tx_power_index_B.cck_base));
+
+       memcpy(priv->ht40_1s_tx_power_index_A,
+              efuse->tx_power_index_A.ht40_base,
+              sizeof(efuse->tx_power_index_A.ht40_base));
+       memcpy(priv->ht40_1s_tx_power_index_B,
+              efuse->tx_power_index_B.ht40_base,
+              sizeof(efuse->tx_power_index_B.ht40_base));
+
+       priv->ht20_tx_power_diff[0].a =
+               efuse->tx_power_index_A.ht20_ofdm_1s_diff.b;
+       priv->ht20_tx_power_diff[0].b =
+               efuse->tx_power_index_B.ht20_ofdm_1s_diff.b;
+
+       priv->ht40_tx_power_diff[0].a = 0;
+       priv->ht40_tx_power_diff[0].b = 0;
+
+       for (i = 1; i < RTL8723B_TX_COUNT; i++) {
+               priv->ofdm_tx_power_diff[i].a =
+                       efuse->tx_power_index_A.pwr_diff[i - 1].ofdm;
+               priv->ofdm_tx_power_diff[i].b =
+                       efuse->tx_power_index_B.pwr_diff[i - 1].ofdm;
+
+               priv->ht20_tx_power_diff[i].a =
+                       efuse->tx_power_index_A.pwr_diff[i - 1].ht20;
+               priv->ht20_tx_power_diff[i].b =
+                       efuse->tx_power_index_B.pwr_diff[i - 1].ht20;
+
+               priv->ht40_tx_power_diff[i].a =
+                       efuse->tx_power_index_A.pwr_diff[i - 1].ht40;
+               priv->ht40_tx_power_diff[i].b =
+                       efuse->tx_power_index_B.pwr_diff[i - 1].ht40;
+       }
+
        priv->has_xtalk = 1;
        priv->xtalk = priv->efuse_wifi.efuse8192eu.xtal_k & 0x3f;
 
@@ -2562,10 +3238,6 @@ static int rtl8192eu_parse_efuse(struct rtl8xxxu_priv *priv)
                                 raw[i + 6], raw[i + 7]);
                }
        }
-       /*
-        * Temporarily disable 8192eu support
-        */
-       return -EINVAL;
        return 0;
 }
 
@@ -3052,9 +3724,9 @@ static void rtl8723bu_phy_init_antenna_selection(struct rtl8xxxu_priv *priv)
 {
        u32 val32;
 
-       val32 = rtl8xxxu_read32(priv, 0x64);
+       val32 = rtl8xxxu_read32(priv, REG_PAD_CTRL1);
        val32 &= ~(BIT(20) | BIT(24));
-       rtl8xxxu_write32(priv, 0x64, val32);
+       rtl8xxxu_write32(priv, REG_PAD_CTRL1, val32);
 
        val32 = rtl8xxxu_read32(priv, REG_GPIO_MUXCFG);
        val32 &= ~BIT(4);
@@ -3087,8 +3759,9 @@ static void rtl8723bu_phy_init_antenna_selection(struct rtl8xxxu_priv *priv)
 }
 
 static int
-rtl8xxxu_init_mac(struct rtl8xxxu_priv *priv, struct rtl8xxxu_reg8val *array)
+rtl8xxxu_init_mac(struct rtl8xxxu_priv *priv)
 {
+       struct rtl8xxxu_reg8val *array = priv->fops->mactable;
        int i, ret;
        u16 reg;
        u8 val;
@@ -3103,12 +3776,13 @@ rtl8xxxu_init_mac(struct rtl8xxxu_priv *priv, struct rtl8xxxu_reg8val *array)
                ret = rtl8xxxu_write8(priv, reg, val);
                if (ret != 1) {
                        dev_warn(&priv->udev->dev,
-                                "Failed to initialize MAC\n");
+                                "Failed to initialize MAC "
+                                "(reg: %04x, val %02x)\n", reg, val);
                        return -EAGAIN;
                }
        }
 
-       if (priv->rtl_chip != RTL8723B)
+       if (priv->rtl_chip != RTL8723B && priv->rtl_chip != RTL8192E)
                rtl8xxxu_write8(priv, REG_MAX_AGGR_NUM, 0x0a);
 
        return 0;
@@ -3140,50 +3814,30 @@ static int rtl8xxxu_init_phy_regs(struct rtl8xxxu_priv *priv,
        return 0;
 }
 
-/*
- * Most of this is black magic retrieved from the old rtl8723au driver
- */
-static int rtl8xxxu_init_phy_bb(struct rtl8xxxu_priv *priv)
+static void rtl8xxxu_gen1_init_phy_bb(struct rtl8xxxu_priv *priv)
 {
        u8 val8, ldoa15, ldov12d, lpldo, ldohci12;
        u16 val16;
        u32 val32;
 
-       /*
-        * Todo: The vendor driver maintains a table of PHY register
-        *       addresses, which is initialized here. Do we need this?
-        */
-
-       if (priv->rtl_chip == RTL8723B) {
-               val16 = rtl8xxxu_read16(priv, REG_SYS_FUNC);
-               val16 |= SYS_FUNC_BB_GLB_RSTN | SYS_FUNC_BBRSTB |
-                       SYS_FUNC_DIO_RF;
-               rtl8xxxu_write16(priv, REG_SYS_FUNC, val16);
-
-               rtl8xxxu_write32(priv, REG_S0S1_PATH_SWITCH, 0x00);
-       } else {
-               val8 = rtl8xxxu_read8(priv, REG_AFE_PLL_CTRL);
-               udelay(2);
-               val8 |= AFE_PLL_320_ENABLE;
-               rtl8xxxu_write8(priv, REG_AFE_PLL_CTRL, val8);
-               udelay(2);
+       val8 = rtl8xxxu_read8(priv, REG_AFE_PLL_CTRL);
+       udelay(2);
+       val8 |= AFE_PLL_320_ENABLE;
+       rtl8xxxu_write8(priv, REG_AFE_PLL_CTRL, val8);
+       udelay(2);
 
-               rtl8xxxu_write8(priv, REG_AFE_PLL_CTRL + 1, 0xff);
-               udelay(2);
+       rtl8xxxu_write8(priv, REG_AFE_PLL_CTRL + 1, 0xff);
+       udelay(2);
 
-               val16 = rtl8xxxu_read16(priv, REG_SYS_FUNC);
-               val16 |= SYS_FUNC_BB_GLB_RSTN | SYS_FUNC_BBRSTB;
-               rtl8xxxu_write16(priv, REG_SYS_FUNC, val16);
-       }
+       val16 = rtl8xxxu_read16(priv, REG_SYS_FUNC);
+       val16 |= SYS_FUNC_BB_GLB_RSTN | SYS_FUNC_BBRSTB;
+       rtl8xxxu_write16(priv, REG_SYS_FUNC, val16);
 
-       if (priv->rtl_chip != RTL8723B) {
-               /* AFE_XTAL_RF_GATE (bit 14) if addressing as 32 bit register */
-               val32 = rtl8xxxu_read32(priv, REG_AFE_XTAL_CTRL);
-               val32 &= ~AFE_XTAL_RF_GATE;
-               if (priv->has_bluetooth)
-                       val32 &= ~AFE_XTAL_BT_GATE;
-               rtl8xxxu_write32(priv, REG_AFE_XTAL_CTRL, val32);
-       }
+       val32 = rtl8xxxu_read32(priv, REG_AFE_XTAL_CTRL);
+       val32 &= ~AFE_XTAL_RF_GATE;
+       if (priv->has_bluetooth)
+               val32 &= ~AFE_XTAL_BT_GATE;
+       rtl8xxxu_write32(priv, REG_AFE_XTAL_CTRL, val32);
 
        /* 6. 0x1f[7:0] = 0x07 */
        val8 = RF_ENABLE | RF_RSTB | RF_SDMRSTB;
@@ -3193,43 +3847,110 @@ static int rtl8xxxu_init_phy_bb(struct rtl8xxxu_priv *priv)
                rtl8xxxu_init_phy_regs(priv, rtl8188ru_phy_1t_highpa_table);
        else if (priv->tx_paths == 2)
                rtl8xxxu_init_phy_regs(priv, rtl8192cu_phy_2t_init_table);
-       else if (priv->rtl_chip == RTL8723B) {
-               /*
-                * Why?
-                */
-               rtl8xxxu_write8(priv, REG_SYS_FUNC, 0xe3);
-               rtl8xxxu_write8(priv, REG_AFE_XTAL_CTRL + 1, 0x80);
-               rtl8xxxu_init_phy_regs(priv, rtl8723b_phy_1t_init_table);
-       } else
+       else
                rtl8xxxu_init_phy_regs(priv, rtl8723a_phy_1t_init_table);
 
-
-       if (priv->rtl_chip == RTL8188C && priv->hi_pa &&
+       if (priv->rtl_chip == RTL8188R && priv->hi_pa &&
            priv->vendor_umc && priv->chip_cut == 1)
                rtl8xxxu_write8(priv, REG_OFDM0_AGC_PARM1 + 2, 0x50);
 
-       if (priv->tx_paths == 1 && priv->rx_paths == 2) {
-               /*
-                * For 1T2R boards, patch the registers.
-                *
-                * It looks like 8191/2 1T2R boards use path B for TX
-                */
-               val32 = rtl8xxxu_read32(priv, REG_FPGA0_TX_INFO);
-               val32 &= ~(BIT(0) | BIT(1));
-               val32 |= BIT(1);
-               rtl8xxxu_write32(priv, REG_FPGA0_TX_INFO, val32);
+       if (priv->hi_pa)
+               rtl8xxxu_init_phy_regs(priv, rtl8xxx_agc_highpa_table);
+       else
+               rtl8xxxu_init_phy_regs(priv, rtl8xxx_agc_standard_table);
 
-               val32 = rtl8xxxu_read32(priv, REG_FPGA1_TX_INFO);
-               val32 &= ~0x300033;
-               val32 |= 0x200022;
-               rtl8xxxu_write32(priv, REG_FPGA1_TX_INFO, val32);
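+       /*
+        * Pack the LDO controls into REG_LDOA15_CTRL: lpldo in bits
+        * 31:24, ldohci12 in 23:16, ldov12d in 15:8, ldoa15 in 7:0.
+        */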
+       ldoa15 = LDOA15_ENABLE | LDOA15_OBUF;
+       ldov12d = LDOV12D_ENABLE | BIT(2) | (2 << LDOV12D_VADJ_SHIFT);
+       ldohci12 = 0x57;
+       lpldo = 1;
+       val32 = (lpldo << 24) | (ldohci12 << 16) | (ldov12d << 8) | ldoa15;
+       rtl8xxxu_write32(priv, REG_LDOA15_CTRL, val32);
+}
 
-               val32 = rtl8xxxu_read32(priv, REG_CCK0_AFE_SETTING);
-               val32 &= 0xff000000;
-               val32 |= 0x45000000;
-               rtl8xxxu_write32(priv, REG_CCK0_AFE_SETTING, val32);
+static void rtl8723bu_init_phy_bb(struct rtl8xxxu_priv *priv)
+{
+       u8 val8;
+       u16 val16;
 
-               val32 = rtl8xxxu_read32(priv, REG_OFDM0_TRX_PATH_ENABLE);
+       val16 = rtl8xxxu_read16(priv, REG_SYS_FUNC);
+       val16 |= SYS_FUNC_BB_GLB_RSTN | SYS_FUNC_BBRSTB | SYS_FUNC_DIO_RF;
+       rtl8xxxu_write16(priv, REG_SYS_FUNC, val16);
+
+       rtl8xxxu_write32(priv, REG_S0S1_PATH_SWITCH, 0x00);
+
+       /* 6. 0x1f[7:0] = 0x07 */
+       val8 = RF_ENABLE | RF_RSTB | RF_SDMRSTB;
+       rtl8xxxu_write8(priv, REG_RF_CTRL, val8);
+
+       /* Why? */
+       rtl8xxxu_write8(priv, REG_SYS_FUNC, 0xe3);
+       rtl8xxxu_write8(priv, REG_AFE_XTAL_CTRL + 1, 0x80);
+       rtl8xxxu_init_phy_regs(priv, rtl8723b_phy_1t_init_table);
+
+       rtl8xxxu_init_phy_regs(priv, rtl8xxx_agc_8723bu_table);
+}
+
+static void rtl8192eu_init_phy_bb(struct rtl8xxxu_priv *priv)
+{
+       u8 val8;
+       u16 val16;
+
+       val16 = rtl8xxxu_read16(priv, REG_SYS_FUNC);
+       val16 |= SYS_FUNC_BB_GLB_RSTN | SYS_FUNC_BBRSTB | SYS_FUNC_DIO_RF;
+       rtl8xxxu_write16(priv, REG_SYS_FUNC, val16);
+
+       /* 6. 0x1f[7:0] = 0x07 */
+       val8 = RF_ENABLE | RF_RSTB | RF_SDMRSTB;
+       rtl8xxxu_write8(priv, REG_RF_CTRL, val8);
+
+       val16 = rtl8xxxu_read16(priv, REG_SYS_FUNC);
+       val16 |= (SYS_FUNC_USBA | SYS_FUNC_USBD | SYS_FUNC_DIO_RF |
+                 SYS_FUNC_BB_GLB_RSTN | SYS_FUNC_BBRSTB);
+       rtl8xxxu_write16(priv, REG_SYS_FUNC, val16);
+       val8 = RF_ENABLE | RF_RSTB | RF_SDMRSTB;
+       rtl8xxxu_write8(priv, REG_RF_CTRL, val8);
+       rtl8xxxu_init_phy_regs(priv, rtl8192eu_phy_init_table);
+
+       if (priv->hi_pa)
+               rtl8xxxu_init_phy_regs(priv, rtl8xxx_agc_8192eu_highpa_table);
+       else
+               rtl8xxxu_init_phy_regs(priv, rtl8xxx_agc_8192eu_std_table);
+}
+
+/*
+ * Most of this is black magic retrieved from the old rtl8723au driver
+ */
+static int rtl8xxxu_init_phy_bb(struct rtl8xxxu_priv *priv)
+{
+       u8 val8;
+       u32 val32;
+
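+       /* Chip specific BB init, dispatched via the per-device fops table */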
+       priv->fops->init_phy_bb(priv);
+
+       if (priv->tx_paths == 1 && priv->rx_paths == 2) {
+               /*
+                * For 1T2R boards, patch the registers.
+                *
+                * It looks like 8191/2 1T2R boards use path B for TX
+                */
+               val32 = rtl8xxxu_read32(priv, REG_FPGA0_TX_INFO);
+               val32 &= ~(BIT(0) | BIT(1));
+               val32 |= BIT(1);
+               rtl8xxxu_write32(priv, REG_FPGA0_TX_INFO, val32);
+
+               val32 = rtl8xxxu_read32(priv, REG_FPGA1_TX_INFO);
+               val32 &= ~0x300033;
+               val32 |= 0x200022;
+               rtl8xxxu_write32(priv, REG_FPGA1_TX_INFO, val32);
+
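+               /* Route CCK RX to antenna B as well */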
+               val32 = rtl8xxxu_read32(priv, REG_CCK0_AFE_SETTING);
+               val32 &= ~CCK0_AFE_RX_MASK;
+               val32 &= 0x00ffffff;
+               val32 |= 0x40000000;
+               val32 |= CCK0_AFE_RX_ANT_B;
+               rtl8xxxu_write32(priv, REG_CCK0_AFE_SETTING, val32);
+
+               val32 = rtl8xxxu_read32(priv, REG_OFDM0_TRX_PATH_ENABLE);
                val32 &= ~(OFDM_RF_PATH_RX_MASK | OFDM_RF_PATH_TX_MASK);
                val32 |= (OFDM_RF_PATH_RX_A | OFDM_RF_PATH_RX_B |
                          OFDM_RF_PATH_TX_B);
@@ -3266,13 +3987,6 @@ static int rtl8xxxu_init_phy_bb(struct rtl8xxxu_priv *priv)
                rtl8xxxu_write32(priv, REG_TX_TO_TX, val32);
        }
 
-       if (priv->rtl_chip == RTL8723B)
-               rtl8xxxu_init_phy_regs(priv, rtl8xxx_agc_8723bu_table);
-       else if (priv->hi_pa)
-               rtl8xxxu_init_phy_regs(priv, rtl8xxx_agc_highpa_table);
-       else
-               rtl8xxxu_init_phy_regs(priv, rtl8xxx_agc_standard_table);
-
        if (priv->has_xtalk) {
                val32 = rtl8xxxu_read32(priv, REG_MAC_PHY_CTRL);
 
@@ -3283,16 +3997,8 @@ static int rtl8xxxu_init_phy_bb(struct rtl8xxxu_priv *priv)
                rtl8xxxu_write32(priv, REG_MAC_PHY_CTRL, val32);
        }
 
-       if (priv->rtl_chip != RTL8723B && priv->rtl_chip != RTL8192E) {
-               ldoa15 = LDOA15_ENABLE | LDOA15_OBUF;
-               ldov12d = LDOV12D_ENABLE | BIT(2) | (2 << LDOV12D_VADJ_SHIFT);
-               ldohci12 = 0x57;
-               lpldo = 1;
-               val32 = (lpldo << 24) | (ldohci12 << 16) |
-                       (ldov12d << 8) | ldoa15;
-
-               rtl8xxxu_write32(priv, REG_LDOA15_CTRL, val32);
-       }
+       if (priv->rtl_chip == RTL8192E)
+               rtl8xxxu_write32(priv, REG_AFE_XTAL_CTRL, 0x000f81fb);
 
        return 0;
 }
@@ -3410,6 +4116,77 @@ static int rtl8xxxu_init_phy_rf(struct rtl8xxxu_priv *priv,
        return 0;
 }
 
+static int rtl8723au_init_phy_rf(struct rtl8xxxu_priv *priv)
+{
+       int ret;
+
+       ret = rtl8xxxu_init_phy_rf(priv, rtl8723au_radioa_1t_init_table, RF_A);
+
+       /* Reduce 80M spur */
+       rtl8xxxu_write32(priv, REG_AFE_XTAL_CTRL, 0x0381808d);
+       rtl8xxxu_write32(priv, REG_AFE_PLL_CTRL, 0xf0ffff83);
+       rtl8xxxu_write32(priv, REG_AFE_PLL_CTRL, 0xf0ffff82);
+       rtl8xxxu_write32(priv, REG_AFE_PLL_CTRL, 0xf0ffff83);
+
+       return ret;
+}
+
+static int rtl8723bu_init_phy_rf(struct rtl8xxxu_priv *priv)
+{
+       int ret;
+
+       ret = rtl8xxxu_init_phy_rf(priv, rtl8723bu_radioa_1t_init_table, RF_A);
+       /*
+        * PHY LC calibration (LCK)
+        */
+       rtl8xxxu_write_rfreg(priv, RF_A, 0xb0, 0xdfbe0);
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_MODE_AG, 0x8c01);
+       msleep(200);
+       rtl8xxxu_write_rfreg(priv, RF_A, 0xb0, 0xdffe0);
+
+       return ret;
+}
+
+#ifdef CONFIG_RTL8XXXU_UNTESTED
+static int rtl8192cu_init_phy_rf(struct rtl8xxxu_priv *priv)
+{
+       struct rtl8xxxu_rfregval *rftable;
+       int ret;
+
+       if (priv->rtl_chip == RTL8188R) {
+               rftable = rtl8188ru_radioa_1t_highpa_table;
+               ret = rtl8xxxu_init_phy_rf(priv, rftable, RF_A);
+       } else if (priv->rf_paths == 1) {
+               rftable = rtl8192cu_radioa_1t_init_table;
+               ret = rtl8xxxu_init_phy_rf(priv, rftable, RF_A);
+       } else {
+               rftable = rtl8192cu_radioa_2t_init_table;
+               ret = rtl8xxxu_init_phy_rf(priv, rftable, RF_A);
+               if (ret)
+                       goto exit;
+               rftable = rtl8192cu_radiob_2t_init_table;
+               ret = rtl8xxxu_init_phy_rf(priv, rftable, RF_B);
+       }
+
+exit:
+       return ret;
+}
+#endif
+
+static int rtl8192eu_init_phy_rf(struct rtl8xxxu_priv *priv)
+{
+       int ret;
+
+       ret = rtl8xxxu_init_phy_rf(priv, rtl8192eu_radioa_init_table, RF_A);
+       if (ret)
+               goto exit;
+
+       ret = rtl8xxxu_init_phy_rf(priv, rtl8192eu_radiob_init_table, RF_B);
+
+exit:
+       return ret;
+}
+
 static int rtl8xxxu_llt_write(struct rtl8xxxu_priv *priv, u8 address, u8 data)
 {
        int ret = -EBUSY;
@@ -3818,8 +4595,8 @@ static bool rtl8xxxu_simularity_compare(struct rtl8xxxu_priv *priv,
        return false;
 }
 
-static bool rtl8723bu_simularity_compare(struct rtl8xxxu_priv *priv,
-                                        int result[][8], int c1, int c2)
+static bool rtl8xxxu_gen2_simularity_compare(struct rtl8xxxu_priv *priv,
+                                            int result[][8], int c1, int c2)
 {
        u32 i, j, diff, simubitmap, bound = 0;
        int candidate[2] = {-1, -1};    /* for path A and path B */
@@ -4389,138 +5166,425 @@ out:
        return result;
 }
 
-#ifdef RTL8723BU_PATH_B
-static int rtl8723bu_iqk_path_b(struct rtl8xxxu_priv *priv)
+static int rtl8192eu_iqk_path_a(struct rtl8xxxu_priv *priv)
 {
-       u32 reg_eac, reg_eb4, reg_ebc, reg_ec4, reg_ecc, path_sel;
+       u32 reg_eac, reg_e94, reg_e9c;
        int result = 0;
 
-       path_sel = rtl8xxxu_read32(priv, REG_S0S1_PATH_SWITCH);
+       /*
+        * TX IQK
+        * PA/PAD controlled by 0x0
+        */
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x00000000);
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF, 0x00180);
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x80800000);
 
-       val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
-       val32 &= 0x000000ff;
-       rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+       /* Path A IQK setting */
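+       /*
+        * The tone under calibration is set to 0x18008c1c, the idle
+        * tones to 0x38008c1c.
+        */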
+       rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x18008c1c);
+       rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x38008c1c);
+       rtl8xxxu_write32(priv, REG_TX_IQK_TONE_B, 0x38008c1c);
+       rtl8xxxu_write32(priv, REG_RX_IQK_TONE_B, 0x38008c1c);
 
-       /* One shot, path B LOK & IQK */
-       rtl8xxxu_write32(priv, REG_IQK_AGC_CONT, 0x00000002);
-       rtl8xxxu_write32(priv, REG_IQK_AGC_CONT, 0x00000000);
+       rtl8xxxu_write32(priv, REG_TX_IQK_PI_A, 0x82140303);
+       rtl8xxxu_write32(priv, REG_RX_IQK_PI_A, 0x68160000);
 
-       mdelay(1);
+       /* LO calibration setting */
+       rtl8xxxu_write32(priv, REG_IQK_AGC_RSP, 0x00462911);
+
+       /* One shot, path A LOK & IQK */
+       rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf9000000);
+       rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf8000000);
+
+       mdelay(10);
 
        /* Check failed */
        reg_eac = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_A_2);
-       reg_eb4 = rtl8xxxu_read32(priv, REG_TX_POWER_BEFORE_IQK_B);
-       reg_ebc = rtl8xxxu_read32(priv, REG_TX_POWER_AFTER_IQK_B);
-       reg_ec4 = rtl8xxxu_read32(priv, REG_RX_POWER_BEFORE_IQK_B_2);
-       reg_ecc = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_B_2);
+       reg_e94 = rtl8xxxu_read32(priv, REG_TX_POWER_BEFORE_IQK_A);
+       reg_e9c = rtl8xxxu_read32(priv, REG_TX_POWER_AFTER_IQK_A);
 
-       if (!(reg_eac & BIT(31)) &&
-           ((reg_eb4 & 0x03ff0000) != 0x01420000) &&
-           ((reg_ebc & 0x03ff0000) != 0x00420000))
+       if (!(reg_eac & BIT(28)) &&
+           ((reg_e94 & 0x03ff0000) != 0x01420000) &&
+           ((reg_e9c & 0x03ff0000) != 0x00420000))
                result |= 0x01;
-       else
-               goto out;
 
-       if (!(reg_eac & BIT(30)) &&
-           (((reg_ec4 & 0x03ff0000) >> 16) != 0x132) &&
-           (((reg_ecc & 0x03ff0000) >> 16) != 0x36))
-               result |= 0x02;
-       else
-               dev_warn(&priv->udev->dev, "%s: Path B RX IQK failed!\n",
-                        __func__);
-out:
        return result;
 }
-#endif
 
-static void rtl8xxxu_phy_iqcalibrate(struct rtl8xxxu_priv *priv,
-                                    int result[][8], int t)
+static int rtl8192eu_rx_iqk_path_a(struct rtl8xxxu_priv *priv)
 {
-       struct device *dev = &priv->udev->dev;
-       u32 i, val32;
-       int path_a_ok, path_b_ok;
-       int retry = 2;
-       const u32 adda_regs[RTL8XXXU_ADDA_REGS] = {
-               REG_FPGA0_XCD_SWITCH_CTRL, REG_BLUETOOTH,
-               REG_RX_WAIT_CCA, REG_TX_CCK_RFON,
-               REG_TX_CCK_BBON, REG_TX_OFDM_RFON,
-               REG_TX_OFDM_BBON, REG_TX_TO_RX,
-               REG_TX_TO_TX, REG_RX_CCK,
-               REG_RX_OFDM, REG_RX_WAIT_RIFS,
-               REG_RX_TO_RX, REG_STANDBY,
-               REG_SLEEP, REG_PMPD_ANAEN
-       };
-       const u32 iqk_mac_regs[RTL8XXXU_MAC_REGS] = {
-               REG_TXPAUSE, REG_BEACON_CTRL,
-               REG_BEACON_CTRL_1, REG_GPIO_MUXCFG
-       };
-       const u32 iqk_bb_regs[RTL8XXXU_BB_REGS] = {
-               REG_OFDM0_TRX_PATH_ENABLE, REG_OFDM0_TR_MUX_PAR,
-               REG_FPGA0_XCD_RF_SW_CTRL, REG_CONFIG_ANT_A, REG_CONFIG_ANT_B,
-               REG_FPGA0_XAB_RF_SW_CTRL, REG_FPGA0_XA_RF_INT_OE,
-               REG_FPGA0_XB_RF_INT_OE, REG_FPGA0_RF_MODE
-       };
-
-       /*
-        * Note: IQ calibration must be performed after loading
-        *       PHY_REG.txt , and radio_a, radio_b.txt
-        */
+       u32 reg_ea4, reg_eac, reg_e94, reg_e9c, val32;
+       int result = 0;
 
-       if (t == 0) {
-               /* Save ADDA parameters, turn Path A ADDA on */
-               rtl8xxxu_save_regs(priv, adda_regs, priv->adda_backup,
-                                  RTL8XXXU_ADDA_REGS);
-               rtl8xxxu_save_mac_regs(priv, iqk_mac_regs, priv->mac_backup);
-               rtl8xxxu_save_regs(priv, iqk_bb_regs,
-                                  priv->bb_backup, RTL8XXXU_BB_REGS);
-       }
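+       /*
+        * TX IQK must pass (bit 0 set in the result) before the RX IQK
+        * below is attempted; RX success is flagged in bit 1.
+        */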
+       /* Leave IQK mode */
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x00);
 
-       rtl8xxxu_path_adda_on(priv, adda_regs, true);
+       /* Enable path A PA in TX IQK mode */
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_WE_LUT, 0x800a0);
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_RCK_OS, 0x30000);
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G1, 0x0000f);
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G2, 0xf117b);
 
-       if (t == 0) {
-               val32 = rtl8xxxu_read32(priv, REG_FPGA0_XA_HSSI_PARM1);
-               if (val32 & FPGA0_HSSI_PARM1_PI)
-                       priv->pi_enabled = 1;
-       }
+       /* PA/PAD control by 0x56, and set = 0x0 */
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF, 0x00980);
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_56, 0x51000);
 
-       if (!priv->pi_enabled) {
-               /* Switch BB to PI mode to do IQ Calibration. */
-               rtl8xxxu_write32(priv, REG_FPGA0_XA_HSSI_PARM1, 0x01000100);
-               rtl8xxxu_write32(priv, REG_FPGA0_XB_HSSI_PARM1, 0x01000100);
-       }
+       /* Enter IQK mode */
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x80800000);
 
-       val32 = rtl8xxxu_read32(priv, REG_FPGA0_RF_MODE);
-       val32 &= ~FPGA_RF_MODE_CCK;
-       rtl8xxxu_write32(priv, REG_FPGA0_RF_MODE, val32);
+       /* TX IQK setting */
+       rtl8xxxu_write32(priv, REG_TX_IQK, 0x01007c00);
+       rtl8xxxu_write32(priv, REG_RX_IQK, 0x01004800);
 
-       rtl8xxxu_write32(priv, REG_OFDM0_TRX_PATH_ENABLE, 0x03a05600);
-       rtl8xxxu_write32(priv, REG_OFDM0_TR_MUX_PAR, 0x000800e4);
-       rtl8xxxu_write32(priv, REG_FPGA0_XCD_RF_SW_CTRL, 0x22204000);
+       /* Path A IQK setting */
+       rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x18008c1c);
+       rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x38008c1c);
+       rtl8xxxu_write32(priv, REG_TX_IQK_TONE_B, 0x38008c1c);
+       rtl8xxxu_write32(priv, REG_RX_IQK_TONE_B, 0x38008c1c);
 
-       val32 = rtl8xxxu_read32(priv, REG_FPGA0_XAB_RF_SW_CTRL);
-       val32 |= (FPGA0_RF_PAPE | (FPGA0_RF_PAPE << FPGA0_RF_BD_CTRL_SHIFT));
-       rtl8xxxu_write32(priv, REG_FPGA0_XAB_RF_SW_CTRL, val32);
+       rtl8xxxu_write32(priv, REG_TX_IQK_PI_A, 0x82160c1f);
+       rtl8xxxu_write32(priv, REG_RX_IQK_PI_A, 0x68160c1f);
 
-       val32 = rtl8xxxu_read32(priv, REG_FPGA0_XA_RF_INT_OE);
-       val32 &= ~BIT(10);
-       rtl8xxxu_write32(priv, REG_FPGA0_XA_RF_INT_OE, val32);
-       val32 = rtl8xxxu_read32(priv, REG_FPGA0_XB_RF_INT_OE);
-       val32 &= ~BIT(10);
-       rtl8xxxu_write32(priv, REG_FPGA0_XB_RF_INT_OE, val32);
+       /* LO calibration setting */
+       rtl8xxxu_write32(priv, REG_IQK_AGC_RSP, 0x0046a911);
 
-       if (priv->tx_paths > 1) {
-               rtl8xxxu_write32(priv, REG_FPGA0_XA_LSSI_PARM, 0x00010000);
-               rtl8xxxu_write32(priv, REG_FPGA0_XB_LSSI_PARM, 0x00010000);
-       }
+       /* One shot, path A LOK & IQK */
+       rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xfa000000);
+       rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf8000000);
 
-       /* MAC settings */
-       rtl8xxxu_mac_calibration(priv, iqk_mac_regs, priv->mac_backup);
+       mdelay(10);
 
-       /* Page B init */
-       rtl8xxxu_write32(priv, REG_CONFIG_ANT_A, 0x00080000);
+       /* Check failed */
+       reg_eac = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_A_2);
+       reg_e94 = rtl8xxxu_read32(priv, REG_TX_POWER_BEFORE_IQK_A);
+       reg_e9c = rtl8xxxu_read32(priv, REG_TX_POWER_AFTER_IQK_A);
 
-       if (priv->tx_paths > 1)
-               rtl8xxxu_write32(priv, REG_CONFIG_ANT_B, 0x00080000);
+       if (!(reg_eac & BIT(28)) &&
+           ((reg_e94 & 0x03ff0000) != 0x01420000) &&
+           ((reg_e9c & 0x03ff0000) != 0x00420000)) {
+               result |= 0x01;
+       } else {
+               /* PA/PAD controlled by 0x0 */
+               rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x00000000);
+               rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF, 0x180);
+               goto out;
+       }
+
+       val32 = 0x80007c00 |
+               (reg_e94 & 0x03ff0000) | ((reg_e9c >> 16) & 0x03ff);
+       rtl8xxxu_write32(priv, REG_TX_IQK, val32);
+
+       /* Modify RX IQK mode table */
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x00000000);
+
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_WE_LUT, 0x800a0);
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_RCK_OS, 0x30000);
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G1, 0x0000f);
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G2, 0xf7ffa);
+
+       /* PA/PAD control by 0x56, and set = 0x0 */
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF, 0x00980);
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_56, 0x51000);
+
+       /* Enter IQK mode */
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x80800000);
+
+       /* IQK setting */
+       rtl8xxxu_write32(priv, REG_RX_IQK, 0x01004800);
+
+       /* Path A IQK setting */
+       rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x38008c1c);
+       rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x18008c1c);
+       rtl8xxxu_write32(priv, REG_TX_IQK_TONE_B, 0x38008c1c);
+       rtl8xxxu_write32(priv, REG_RX_IQK_TONE_B, 0x38008c1c);
+
+       rtl8xxxu_write32(priv, REG_TX_IQK_PI_A, 0x82160c1f);
+       rtl8xxxu_write32(priv, REG_RX_IQK_PI_A, 0x28160c1f);
+
+       /* LO calibration setting */
+       rtl8xxxu_write32(priv, REG_IQK_AGC_RSP, 0x0046a891);
+
+       /* One shot, path A LOK & IQK */
+       rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xfa000000);
+       rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf8000000);
+
+       mdelay(10);
+
+       reg_eac = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_A_2);
+       reg_ea4 = rtl8xxxu_read32(priv, REG_RX_POWER_BEFORE_IQK_A_2);
+
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x00000000);
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_DF, 0x180);
+
+       if (!(reg_eac & BIT(27)) &&
+           ((reg_ea4 & 0x03ff0000) != 0x01320000) &&
+           ((reg_eac & 0x03ff0000) != 0x00360000))
+               result |= 0x02;
+       else
+               dev_warn(&priv->udev->dev, "%s: Path A RX IQK failed!\n",
+                        __func__);
+
+out:
+       return result;
+}
+
+static int rtl8192eu_iqk_path_b(struct rtl8xxxu_priv *priv)
+{
+       u32 reg_eac, reg_eb4, reg_ebc;
+       int result = 0;
+
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x00000000);
+       rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_UNKNOWN_DF, 0x00180);
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x80800000);
+
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x00000000);
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x80800000);
+
+       /* Path B IQK setting */
+       rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x38008c1c);
+       rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x38008c1c);
+       rtl8xxxu_write32(priv, REG_TX_IQK_TONE_B, 0x18008c1c);
+       rtl8xxxu_write32(priv, REG_RX_IQK_TONE_B, 0x38008c1c);
+
+       rtl8xxxu_write32(priv, REG_TX_IQK_PI_B, 0x821403e2);
+       rtl8xxxu_write32(priv, REG_RX_IQK_PI_B, 0x68160000);
+
+       /* LO calibration setting */
+       rtl8xxxu_write32(priv, REG_IQK_AGC_RSP, 0x00492911);
+
+       /* One shot, path B LOK & IQK */
+       rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xfa000000);
+       rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf8000000);
+
+       mdelay(1);
+
+       /* Check failed */
+       reg_eac = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_A_2);
+       reg_eb4 = rtl8xxxu_read32(priv, REG_TX_POWER_BEFORE_IQK_B);
+       reg_ebc = rtl8xxxu_read32(priv, REG_TX_POWER_AFTER_IQK_B);
+
+       if (!(reg_eac & BIT(31)) &&
+           ((reg_eb4 & 0x03ff0000) != 0x01420000) &&
+           ((reg_ebc & 0x03ff0000) != 0x00420000))
+               result |= 0x01;
+       else
+               dev_warn(&priv->udev->dev, "%s: Path B IQK failed!\n",
+                        __func__);
+
+       return result;
+}
+
+static int rtl8192eu_rx_iqk_path_b(struct rtl8xxxu_priv *priv)
+{
+       u32 reg_eac, reg_eb4, reg_ebc, reg_ec4, reg_ecc, val32;
+       int result = 0;
+
+       /* Leave IQK mode */
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x00000000);
+
+       /* Enable path B PA in TX IQK mode */
+       rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_WE_LUT, 0x800a0);
+       rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_RCK_OS, 0x30000);
+       rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_TXPA_G1, 0x0000f);
+       rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_TXPA_G2, 0xf117b);
+
+       /* PA/PAD control by 0x56, and set = 0x0 */
+       rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_UNKNOWN_DF, 0x00980);
+       rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_UNKNOWN_56, 0x51000);
+
+       /* Enter IQK mode */
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x80800000);
+
+       /* TX IQK setting */
+       rtl8xxxu_write32(priv, REG_TX_IQK, 0x01007c00);
+       rtl8xxxu_write32(priv, REG_RX_IQK, 0x01004800);
+
+       /* Path B IQK setting */
+       rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x38008c1c);
+       rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x38008c1c);
+       rtl8xxxu_write32(priv, REG_TX_IQK_TONE_B, 0x18008c1c);
+       rtl8xxxu_write32(priv, REG_RX_IQK_TONE_B, 0x38008c1c);
+
+       rtl8xxxu_write32(priv, REG_TX_IQK_PI_B, 0x82160c1f);
+       rtl8xxxu_write32(priv, REG_RX_IQK_PI_B, 0x68160c1f);
+
+       /* LO calibration setting */
+       rtl8xxxu_write32(priv, REG_IQK_AGC_RSP, 0x0046a911);
+
+       /* One shot, path B LOK & IQK */
+       rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xfa000000);
+       rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf8000000);
+
+       mdelay(10);
+
+       /* Check failed */
+       reg_eac = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_A_2);
+       reg_eb4 = rtl8xxxu_read32(priv, REG_TX_POWER_BEFORE_IQK_B);
+       reg_ebc = rtl8xxxu_read32(priv, REG_TX_POWER_AFTER_IQK_B);
+
+       if (!(reg_eac & BIT(31)) &&
+           ((reg_eb4 & 0x03ff0000) != 0x01420000) &&
+           ((reg_ebc & 0x03ff0000) != 0x00420000)) {
+               result |= 0x01;
+       } else {
+               /*
+                * PA/PAD controlled by 0x0
+                * Vendor driver restores RF_A here, which I believe is a bug
+                */
+               rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x00000000);
+               rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_UNKNOWN_DF, 0x180);
+               goto out;
+       }
+
+       val32 = 0x80007c00 |
+               (reg_eb4 & 0x03ff0000) | ((reg_ebc >> 16) & 0x03ff);
+       rtl8xxxu_write32(priv, REG_TX_IQK, val32);
+
+       /* Modify RX IQK mode table */
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x00000000);
+
+       rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_WE_LUT, 0x800a0);
+       rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_RCK_OS, 0x30000);
+       rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_TXPA_G1, 0x0000f);
+       rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_TXPA_G2, 0xf7ffa);
+
+       /* PA/PAD control by 0x56, and set = 0x0 */
+       rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_UNKNOWN_DF, 0x00980);
+       rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_UNKNOWN_56, 0x51000);
+
+       /* Enter IQK mode */
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x80800000);
+
+       /* IQK setting */
+       rtl8xxxu_write32(priv, REG_RX_IQK, 0x01004800);
+
+       /* Path B IQK setting */
+       rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x38008c1c);
+       rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x38008c1c);
+       rtl8xxxu_write32(priv, REG_TX_IQK_TONE_B, 0x38008c1c);
+       rtl8xxxu_write32(priv, REG_RX_IQK_TONE_B, 0x18008c1c);
+
+       rtl8xxxu_write32(priv, REG_TX_IQK_PI_A, 0x82160c1f);
+       rtl8xxxu_write32(priv, REG_RX_IQK_PI_A, 0x28160c1f);
+
+       /* LO calibration setting */
+       rtl8xxxu_write32(priv, REG_IQK_AGC_RSP, 0x0046a891);
+
+       /* One shot, path B LOK & IQK */
+       rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xfa000000);
+       rtl8xxxu_write32(priv, REG_IQK_AGC_PTS, 0xf8000000);
+
+       mdelay(10);
+
+       reg_eac = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_A_2);
+       reg_ec4 = rtl8xxxu_read32(priv, REG_RX_POWER_BEFORE_IQK_B_2);
+       reg_ecc = rtl8xxxu_read32(priv, REG_RX_POWER_AFTER_IQK_B_2);
+
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x00000000);
+       rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_UNKNOWN_DF, 0x180);
+
+       if (!(reg_eac & BIT(30)) &&
+           ((reg_ec4 & 0x03ff0000) != 0x01320000) &&
+           ((reg_ecc & 0x03ff0000) != 0x00360000))
+               result |= 0x02;
+       else
+               dev_warn(&priv->udev->dev, "%s: Path B RX IQK failed!\n",
+                        __func__);
+
+out:
+       return result;
+}
+
+static void rtl8xxxu_phy_iqcalibrate(struct rtl8xxxu_priv *priv,
+                                    int result[][8], int t)
+{
+       struct device *dev = &priv->udev->dev;
+       u32 i, val32;
+       int path_a_ok, path_b_ok;
+       int retry = 2;
+       const u32 adda_regs[RTL8XXXU_ADDA_REGS] = {
+               REG_FPGA0_XCD_SWITCH_CTRL, REG_BLUETOOTH,
+               REG_RX_WAIT_CCA, REG_TX_CCK_RFON,
+               REG_TX_CCK_BBON, REG_TX_OFDM_RFON,
+               REG_TX_OFDM_BBON, REG_TX_TO_RX,
+               REG_TX_TO_TX, REG_RX_CCK,
+               REG_RX_OFDM, REG_RX_WAIT_RIFS,
+               REG_RX_TO_RX, REG_STANDBY,
+               REG_SLEEP, REG_PMPD_ANAEN
+       };
+       const u32 iqk_mac_regs[RTL8XXXU_MAC_REGS] = {
+               REG_TXPAUSE, REG_BEACON_CTRL,
+               REG_BEACON_CTRL_1, REG_GPIO_MUXCFG
+       };
+       const u32 iqk_bb_regs[RTL8XXXU_BB_REGS] = {
+               REG_OFDM0_TRX_PATH_ENABLE, REG_OFDM0_TR_MUX_PAR,
+               REG_FPGA0_XCD_RF_SW_CTRL, REG_CONFIG_ANT_A, REG_CONFIG_ANT_B,
+               REG_FPGA0_XAB_RF_SW_CTRL, REG_FPGA0_XA_RF_INT_OE,
+               REG_FPGA0_XB_RF_INT_OE, REG_FPGA0_RF_MODE
+       };
+
+       /*
+        * Note: IQ calibration must be performed after loading
+        *       PHY_REG.txt, radio_a.txt and radio_b.txt
+        */
+
+       if (t == 0) {
+               /* Save ADDA parameters, turn Path A ADDA on */
+               rtl8xxxu_save_regs(priv, adda_regs, priv->adda_backup,
+                                  RTL8XXXU_ADDA_REGS);
+               rtl8xxxu_save_mac_regs(priv, iqk_mac_regs, priv->mac_backup);
+               rtl8xxxu_save_regs(priv, iqk_bb_regs,
+                                  priv->bb_backup, RTL8XXXU_BB_REGS);
+       }
+
+       rtl8xxxu_path_adda_on(priv, adda_regs, true);
+
+       if (t == 0) {
+               val32 = rtl8xxxu_read32(priv, REG_FPGA0_XA_HSSI_PARM1);
+               if (val32 & FPGA0_HSSI_PARM1_PI)
+                       priv->pi_enabled = 1;
+       }
+
+       if (!priv->pi_enabled) {
+               /* Switch BB to PI mode to do IQ Calibration. */
+               rtl8xxxu_write32(priv, REG_FPGA0_XA_HSSI_PARM1, 0x01000100);
+               rtl8xxxu_write32(priv, REG_FPGA0_XB_HSSI_PARM1, 0x01000100);
+       }
+
+       val32 = rtl8xxxu_read32(priv, REG_FPGA0_RF_MODE);
+       val32 &= ~FPGA_RF_MODE_CCK;
+       rtl8xxxu_write32(priv, REG_FPGA0_RF_MODE, val32);
+
+       rtl8xxxu_write32(priv, REG_OFDM0_TRX_PATH_ENABLE, 0x03a05600);
+       rtl8xxxu_write32(priv, REG_OFDM0_TR_MUX_PAR, 0x000800e4);
+       rtl8xxxu_write32(priv, REG_FPGA0_XCD_RF_SW_CTRL, 0x22204000);
+
+       if (!priv->no_pape) {
+               val32 = rtl8xxxu_read32(priv, REG_FPGA0_XAB_RF_SW_CTRL);
+               val32 |= (FPGA0_RF_PAPE |
+                         (FPGA0_RF_PAPE << FPGA0_RF_BD_CTRL_SHIFT));
+               rtl8xxxu_write32(priv, REG_FPGA0_XAB_RF_SW_CTRL, val32);
+       }
+
+       val32 = rtl8xxxu_read32(priv, REG_FPGA0_XA_RF_INT_OE);
+       val32 &= ~BIT(10);
+       rtl8xxxu_write32(priv, REG_FPGA0_XA_RF_INT_OE, val32);
+       val32 = rtl8xxxu_read32(priv, REG_FPGA0_XB_RF_INT_OE);
+       val32 &= ~BIT(10);
+       rtl8xxxu_write32(priv, REG_FPGA0_XB_RF_INT_OE, val32);
+
+       if (priv->tx_paths > 1) {
+               rtl8xxxu_write32(priv, REG_FPGA0_XA_LSSI_PARM, 0x00010000);
+               rtl8xxxu_write32(priv, REG_FPGA0_XB_LSSI_PARM, 0x00010000);
+       }
+
+       /* MAC settings */
+       rtl8xxxu_mac_calibration(priv, iqk_mac_regs, priv->mac_backup);
+
+       /* Page B init */
+       rtl8xxxu_write32(priv, REG_CONFIG_ANT_A, 0x00080000);
+
+       if (priv->tx_paths > 1)
+               rtl8xxxu_write32(priv, REG_CONFIG_ANT_B, 0x00080000);
 
        /* IQ calibration setting */
        rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x80800000);
@@ -4692,55 +5756,232 @@ static void rtl8723bu_phy_iqcalibrate(struct rtl8xxxu_priv *priv,
        rtl8xxxu_write32(priv, REG_OFDM0_TR_MUX_PAR, 0x000800e4);
        rtl8xxxu_write32(priv, REG_FPGA0_XCD_RF_SW_CTRL, 0x22204000);
 
-#ifdef RTL8723BU_PATH_B
-       /* Set RF mode to standby Path B */
-       if (priv->tx_paths > 1)
-               rtl8xxxu_write_rfreg(priv, RF_B, RF6052_REG_AC, 0x10000);
-#endif
+       /*
+        * RX IQ calibration setting to work around the 8723B D cut
+        * large current issue when leaving IPS
+        */
+       val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+       val32 &= 0x000000ff;
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
 
-#if 0
-       /* Page B init */
-       rtl8xxxu_write32(priv, REG_CONFIG_ANT_A, 0x0f600000);
+       val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_WE_LUT);
+       val32 |= 0x80000;
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_WE_LUT, val32);
 
-       if (priv->tx_paths > 1)
-               rtl8xxxu_write32(priv, REG_CONFIG_ANT_B, 0x0f600000);
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_RCK_OS, 0x30000);
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G1, 0x0001f);
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G2, 0xf7fb7);
+
+       val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_ED);
+       val32 |= 0x20;
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_ED, val32);
+
+       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_43, 0x60fbd);
+
+       for (i = 0; i < retry; i++) {
+               path_a_ok = rtl8723bu_iqk_path_a(priv);
+               if (path_a_ok == 0x01) {
+                       val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+                       val32 &= 0x000000ff;
+                       rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+                       val32 = rtl8xxxu_read32(priv,
+                                               REG_TX_POWER_BEFORE_IQK_A);
+                       result[t][0] = (val32 >> 16) & 0x3ff;
+                       val32 = rtl8xxxu_read32(priv,
+                                               REG_TX_POWER_AFTER_IQK_A);
+                       result[t][1] = (val32 >> 16) & 0x3ff;
+
+                       break;
+               }
+       }
+
+       if (!path_a_ok)
+               dev_dbg(dev, "%s: Path A TX IQK failed!\n", __func__);
+
+       for (i = 0; i < retry; i++) {
+               path_a_ok = rtl8723bu_rx_iqk_path_a(priv);
+               if (path_a_ok == 0x03) {
+                       val32 = rtl8xxxu_read32(priv,
+                                               REG_RX_POWER_BEFORE_IQK_A_2);
+                       result[t][2] = (val32 >> 16) & 0x3ff;
+                       val32 = rtl8xxxu_read32(priv,
+                                               REG_RX_POWER_AFTER_IQK_A_2);
+                       result[t][3] = (val32 >> 16) & 0x3ff;
+
+                       break;
+               }
+       }
+
+       if (!path_a_ok)
+               dev_dbg(dev, "%s: Path A RX IQK failed!\n", __func__);
+
+       if (priv->tx_paths > 1) {
+#if 1
+               dev_warn(dev, "%s: Path B not supported\n", __func__);
+#else
+
+               /*
+                * Path A into standby
+                */
+               val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+               val32 &= 0x000000ff;
+               rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+               rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_AC, 0x10000);
+
+               val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+               val32 &= 0x000000ff;
+               val32 |= 0x80800000;
+               rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+               /* Turn Path B ADDA on */
+               rtl8xxxu_path_adda_on(priv, adda_regs, false);
+
+               for (i = 0; i < retry; i++) {
+                       path_b_ok = rtl8xxxu_iqk_path_b(priv);
+                       if (path_b_ok == 0x03) {
+                               val32 = rtl8xxxu_read32(priv, REG_TX_POWER_BEFORE_IQK_B);
+                               result[t][4] = (val32 >> 16) & 0x3ff;
+                               val32 = rtl8xxxu_read32(priv, REG_TX_POWER_AFTER_IQK_B);
+                               result[t][5] = (val32 >> 16) & 0x3ff;
+                               break;
+                       }
+               }
+
+               if (!path_b_ok)
+                       dev_dbg(dev, "%s: Path B IQK failed!\n", __func__);
+
+               for (i = 0; i < retry; i++) {
+                       path_b_ok = rtl8723bu_rx_iqk_path_b(priv);
+                       if (path_b_ok == 0x03) {
+                               val32 = rtl8xxxu_read32(priv,
+                                                       REG_RX_POWER_BEFORE_IQK_B_2);
+                               result[t][6] = (val32 >> 16) & 0x3ff;
+                               val32 = rtl8xxxu_read32(priv,
+                                                       REG_RX_POWER_AFTER_IQK_B_2);
+                               result[t][7] = (val32 >> 16) & 0x3ff;
+                               break;
+                       }
+               }
+
+               if (!path_b_ok)
+                       dev_dbg(dev, "%s: Path B RX IQK failed!\n", __func__);
 #endif
+       }
+
+       /* Back to BB mode, load original value */
+       val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
+       val32 &= 0x000000ff;
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+
+       if (t) {
+               /* Reload ADDA power saving parameters */
+               rtl8xxxu_restore_regs(priv, adda_regs, priv->adda_backup,
+                                     RTL8XXXU_ADDA_REGS);
+
+               /* Reload MAC parameters */
+               rtl8xxxu_restore_mac_regs(priv, iqk_mac_regs, priv->mac_backup);
+
+               /* Reload BB parameters */
+               rtl8xxxu_restore_regs(priv, iqk_bb_regs,
+                                     priv->bb_backup, RTL8XXXU_BB_REGS);
+
+               /* Restore RX initial gain */
+               val32 = rtl8xxxu_read32(priv, REG_OFDM0_XA_AGC_CORE1);
+               val32 &= 0xffffff00;
+               rtl8xxxu_write32(priv, REG_OFDM0_XA_AGC_CORE1, val32 | 0x50);
+               rtl8xxxu_write32(priv, REG_OFDM0_XA_AGC_CORE1, val32 | xa_agc);
+
+               if (priv->tx_paths > 1) {
+                       val32 = rtl8xxxu_read32(priv, REG_OFDM0_XB_AGC_CORE1);
+                       val32 &= 0xffffff00;
+                       rtl8xxxu_write32(priv, REG_OFDM0_XB_AGC_CORE1,
+                                        val32 | 0x50);
+                       rtl8xxxu_write32(priv, REG_OFDM0_XB_AGC_CORE1,
+                                        val32 | xb_agc);
+               }
+
+               /* Load 0xe30 IQC default value */
+               rtl8xxxu_write32(priv, REG_TX_IQK_TONE_A, 0x01008c00);
+               rtl8xxxu_write32(priv, REG_RX_IQK_TONE_A, 0x01008c00);
+       }
+}
+
+static void rtl8192eu_phy_iqcalibrate(struct rtl8xxxu_priv *priv,
+                                     int result[][8], int t)
+{
+       struct device *dev = &priv->udev->dev;
+       u32 i, val32;
+       int path_a_ok, path_b_ok;
+       int retry = 2;
+       const u32 adda_regs[RTL8XXXU_ADDA_REGS] = {
+               REG_FPGA0_XCD_SWITCH_CTRL, REG_BLUETOOTH,
+               REG_RX_WAIT_CCA, REG_TX_CCK_RFON,
+               REG_TX_CCK_BBON, REG_TX_OFDM_RFON,
+               REG_TX_OFDM_BBON, REG_TX_TO_RX,
+               REG_TX_TO_TX, REG_RX_CCK,
+               REG_RX_OFDM, REG_RX_WAIT_RIFS,
+               REG_RX_TO_RX, REG_STANDBY,
+               REG_SLEEP, REG_PMPD_ANAEN
+       };
+       const u32 iqk_mac_regs[RTL8XXXU_MAC_REGS] = {
+               REG_TXPAUSE, REG_BEACON_CTRL,
+               REG_BEACON_CTRL_1, REG_GPIO_MUXCFG
+       };
+       const u32 iqk_bb_regs[RTL8XXXU_BB_REGS] = {
+               REG_OFDM0_TRX_PATH_ENABLE, REG_OFDM0_TR_MUX_PAR,
+               REG_FPGA0_XCD_RF_SW_CTRL, REG_CONFIG_ANT_A, REG_CONFIG_ANT_B,
+               REG_FPGA0_XAB_RF_SW_CTRL, REG_FPGA0_XA_RF_INT_OE,
+               REG_FPGA0_XB_RF_INT_OE, REG_CCK0_AFE_SETTING
+       };
+       u8 xa_agc = rtl8xxxu_read32(priv, REG_OFDM0_XA_AGC_CORE1) & 0xff;
+       u8 xb_agc = rtl8xxxu_read32(priv, REG_OFDM0_XB_AGC_CORE1) & 0xff;
+
+       /*
+        * Note: IQ calibration must be performed after loading
+        *       PHY_REG.txt, radio_a.txt and radio_b.txt
+        */
+
+       if (t == 0) {
+               /* Save ADDA parameters, turn Path A ADDA on */
+               rtl8xxxu_save_regs(priv, adda_regs, priv->adda_backup,
+                                  RTL8XXXU_ADDA_REGS);
+               rtl8xxxu_save_mac_regs(priv, iqk_mac_regs, priv->mac_backup);
+               rtl8xxxu_save_regs(priv, iqk_bb_regs,
+                                  priv->bb_backup, RTL8XXXU_BB_REGS);
+       }
+
+       rtl8xxxu_path_adda_on(priv, adda_regs, true);
 
-       /*
-        * RX IQ calibration setting for 8723B D cut large current issue
-        * when leaving IPS
-        */
-       val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
-       val32 &= 0x000000ff;
-       rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+       /* MAC settings */
+       rtl8xxxu_mac_calibration(priv, iqk_mac_regs, priv->mac_backup);
 
-       val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_WE_LUT);
-       val32 |= 0x80000;
-       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_WE_LUT, val32);
+       val32 = rtl8xxxu_read32(priv, REG_CCK0_AFE_SETTING);
+       val32 |= 0x0f000000;
+       rtl8xxxu_write32(priv, REG_CCK0_AFE_SETTING, val32);
 
-       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_RCK_OS, 0x30000);
-       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G1, 0x0001f);
-       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_TXPA_G2, 0xf7fb7);
+       rtl8xxxu_write32(priv, REG_OFDM0_TRX_PATH_ENABLE, 0x03a05600);
+       rtl8xxxu_write32(priv, REG_OFDM0_TR_MUX_PAR, 0x000800e4);
+       rtl8xxxu_write32(priv, REG_FPGA0_XCD_RF_SW_CTRL, 0x22208200);
 
-       val32 = rtl8xxxu_read_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_ED);
-       val32 |= 0x20;
-       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_ED, val32);
+       val32 = rtl8xxxu_read32(priv, REG_FPGA0_XAB_RF_SW_CTRL);
+       val32 |= (FPGA0_RF_PAPE | (FPGA0_RF_PAPE << FPGA0_RF_BD_CTRL_SHIFT));
+       rtl8xxxu_write32(priv, REG_FPGA0_XAB_RF_SW_CTRL, val32);
 
-       rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_43, 0x60fbd);
+       val32 = rtl8xxxu_read32(priv, REG_FPGA0_XA_RF_INT_OE);
+       val32 |= BIT(10);
+       rtl8xxxu_write32(priv, REG_FPGA0_XA_RF_INT_OE, val32);
+       val32 = rtl8xxxu_read32(priv, REG_FPGA0_XB_RF_INT_OE);
+       val32 |= BIT(10);
+       rtl8xxxu_write32(priv, REG_FPGA0_XB_RF_INT_OE, val32);
+
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x80800000);
+       rtl8xxxu_write32(priv, REG_TX_IQK, 0x01007c00);
+       rtl8xxxu_write32(priv, REG_RX_IQK, 0x01004800);
 
        for (i = 0; i < retry; i++) {
-               path_a_ok = rtl8723bu_iqk_path_a(priv);
+               path_a_ok = rtl8192eu_iqk_path_a(priv);
                if (path_a_ok == 0x01) {
-                       val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
-                       val32 &= 0x000000ff;
-                       rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
-
-#if 0 /* Only needed in restore case, we may need this when going to suspend */
-                       priv->RFCalibrateInfo.TxLOK[RF_A] =
-                               rtl8xxxu_read_rfreg(priv, RF_A,
-                                                   RF6052_REG_TXM_IDAC);
-#endif
-
                        val32 = rtl8xxxu_read32(priv,
                                                REG_TX_POWER_BEFORE_IQK_A);
                        result[t][0] = (val32 >> 16) & 0x3ff;
@@ -4756,7 +5997,7 @@ static void rtl8723bu_phy_iqcalibrate(struct rtl8xxxu_priv *priv,
                dev_dbg(dev, "%s: Path A TX IQK failed!\n", __func__);
 
        for (i = 0; i < retry; i++) {
-               path_a_ok = rtl8723bu_rx_iqk_path_a(priv);
+               path_a_ok = rtl8192eu_rx_iqk_path_a(priv);
                if (path_a_ok == 0x03) {
                        val32 = rtl8xxxu_read32(priv,
                                                REG_RX_POWER_BEFORE_IQK_A_2);
@@ -4772,30 +6013,22 @@ static void rtl8723bu_phy_iqcalibrate(struct rtl8xxxu_priv *priv,
        if (!path_a_ok)
                dev_dbg(dev, "%s: Path A RX IQK failed!\n", __func__);
 
-       if (priv->tx_paths > 1) {
-#if 1
-               dev_warn(dev, "%s: Path B not supported\n", __func__);
-#else
-
-               /*
-                * Path A into standby
-                */
-               val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
-               val32 &= 0x000000ff;
-               rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+       if (priv->rf_paths > 1) {
+               /* Path A into standby */
+               rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x00000000);
                rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_AC, 0x10000);
-
-               val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
-               val32 &= 0x000000ff;
-               val32 |= 0x80800000;
-               rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+               rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x80800000);
 
                /* Turn Path B ADDA on */
                rtl8xxxu_path_adda_on(priv, adda_regs, false);
 
+               rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x80800000);
+               rtl8xxxu_write32(priv, REG_TX_IQK, 0x01007c00);
+               rtl8xxxu_write32(priv, REG_RX_IQK, 0x01004800);
+
                for (i = 0; i < retry; i++) {
-                       path_b_ok = rtl8xxxu_iqk_path_b(priv);
-                       if (path_b_ok == 0x03) {
+                       path_b_ok = rtl8192eu_iqk_path_b(priv);
+                       if (path_b_ok == 0x01) {
                                val32 = rtl8xxxu_read32(priv, REG_TX_POWER_BEFORE_IQK_B);
                                result[t][4] = (val32 >> 16) & 0x3ff;
                                val32 = rtl8xxxu_read32(priv, REG_TX_POWER_AFTER_IQK_B);
@@ -4808,7 +6041,7 @@ static void rtl8723bu_phy_iqcalibrate(struct rtl8xxxu_priv *priv,
                        dev_dbg(dev, "%s: Path B IQK failed!\n", __func__);
 
                for (i = 0; i < retry; i++) {
-                       path_b_ok = rtl8723bu_rx_iqk_path_b(priv);
+                       path_b_ok = rtl8192eu_rx_iqk_path_b(priv);
-                       if (path_a_ok == 0x03) {
+                       if (path_b_ok == 0x03) {
                                val32 = rtl8xxxu_read32(priv,
                                                        REG_RX_POWER_BEFORE_IQK_B_2);
@@ -4822,13 +6055,10 @@ static void rtl8723bu_phy_iqcalibrate(struct rtl8xxxu_priv *priv,
 
                if (!path_b_ok)
                        dev_dbg(dev, "%s: Path B RX IQK failed!\n", __func__);
-#endif
        }
 
        /* Back to BB mode, load original value */
-       val32 = rtl8xxxu_read32(priv, REG_FPGA0_IQK);
-       val32 &= 0x000000ff;
-       rtl8xxxu_write32(priv, REG_FPGA0_IQK, val32);
+       rtl8xxxu_write32(priv, REG_FPGA0_IQK, 0x00000000);
 
        if (t) {
                /* Reload ADDA power saving parameters */
@@ -4848,7 +6078,7 @@ static void rtl8723bu_phy_iqcalibrate(struct rtl8xxxu_priv *priv,
                rtl8xxxu_write32(priv, REG_OFDM0_XA_AGC_CORE1, val32 | 0x50);
                rtl8xxxu_write32(priv, REG_OFDM0_XA_AGC_CORE1, val32 | xa_agc);
 
-               if (priv->tx_paths > 1) {
+               if (priv->rf_paths > 1) {
                        val32 = rtl8xxxu_read32(priv, REG_OFDM0_XB_AGC_CORE1);
                        val32 &= 0xffffff00;
                        rtl8xxxu_write32(priv, REG_OFDM0_XB_AGC_CORE1,
@@ -4877,7 +6107,7 @@ static void rtl8xxxu_prepare_calibrate(struct rtl8xxxu_priv *priv, u8 start)
        rtl8723a_h2c_cmd(priv, &h2c, sizeof(h2c.bt_wlan_calibration));
 }
 
-static void rtl8723au_phy_iq_calibrate(struct rtl8xxxu_priv *priv)
+static void rtl8xxxu_gen1_phy_iq_calibrate(struct rtl8xxxu_priv *priv)
 {
        struct device *dev = &priv->udev->dev;
        int result[4][8];       /* last is final result */
@@ -4975,7 +6205,7 @@ static void rtl8723au_phy_iq_calibrate(struct rtl8xxxu_priv *priv)
                rtl8xxxu_fill_iqk_matrix_b(priv, path_b_ok, result,
                                           candidate, (reg_ec4 == 0));
 
-       rtl8xxxu_save_regs(priv, rtl8723au_iqk_phy_iq_bb_reg,
+       rtl8xxxu_save_regs(priv, rtl8xxxu_iqk_phy_iq_bb_reg,
                           priv->bb_recovery_backup, RTL8XXXU_BB_REGS);
 
        rtl8xxxu_prepare_calibrate(priv, 0);
@@ -5007,7 +6237,8 @@ static void rtl8723bu_phy_iq_calibrate(struct rtl8xxxu_priv *priv)
                rtl8723bu_phy_iqcalibrate(priv, result, i);
 
                if (i == 1) {
-                       simu = rtl8723bu_simularity_compare(priv, result, 0, 1);
+                       simu = rtl8xxxu_gen2_simularity_compare(priv,
+                                                               result, 0, 1);
                        if (simu) {
                                candidate = 0;
                                break;
@@ -5015,13 +6246,15 @@ static void rtl8723bu_phy_iq_calibrate(struct rtl8xxxu_priv *priv)
                }
 
                if (i == 2) {
-                       simu = rtl8723bu_simularity_compare(priv, result, 0, 2);
+                       simu = rtl8xxxu_gen2_simularity_compare(priv,
+                                                               result, 0, 2);
                        if (simu) {
                                candidate = 0;
                                break;
                        }
 
-                       simu = rtl8723bu_simularity_compare(priv, result, 1, 2);
+                       simu = rtl8xxxu_gen2_simularity_compare(priv,
+                                                               result, 1, 2);
                        if (simu) {
                                candidate = 1;
                        } else {
@@ -5080,7 +6313,7 @@ static void rtl8723bu_phy_iq_calibrate(struct rtl8xxxu_priv *priv)
                rtl8xxxu_fill_iqk_matrix_b(priv, path_b_ok, result,
                                           candidate, (reg_ec4 == 0));
 
-       rtl8xxxu_save_regs(priv, rtl8723au_iqk_phy_iq_bb_reg,
+       rtl8xxxu_save_regs(priv, rtl8xxxu_iqk_phy_iq_bb_reg,
                           priv->bb_recovery_backup, RTL8XXXU_BB_REGS);
 
        rtl8xxxu_write32(priv, REG_BT_CONTROL_8723BU, bt_control);
@@ -5096,18 +6329,105 @@ static void rtl8723bu_phy_iq_calibrate(struct rtl8xxxu_priv *priv)
        rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_UNKNOWN_ED, val32);
        rtl8xxxu_write_rfreg(priv, RF_A, 0x43, 0x300bd);
 
-       if (priv->rf_paths > 1) {
-               dev_dbg(dev, "%s: beware 2T not yet supported\n", __func__);
-#ifdef RTL8723BU_PATH_B
-               if (RF_Path == 0x0)     //S1
-                       ODM_SetIQCbyRFpath(pDM_Odm, 0);
-               else    //S0
-                       ODM_SetIQCbyRFpath(pDM_Odm, 1);
-#endif
-       }
+       if (priv->rf_paths > 1)
+               dev_dbg(dev, "%s: 8723BU 2T not supported\n", __func__);
+
        rtl8xxxu_prepare_calibrate(priv, 0);
 }
 
+static void rtl8192eu_phy_iq_calibrate(struct rtl8xxxu_priv *priv)
+{
+       struct device *dev = &priv->udev->dev;
+       int result[4][8];       /* last is final result */
+       int i, candidate;
+       bool path_a_ok, path_b_ok;
+       u32 reg_e94, reg_e9c, reg_ea4, reg_eac;
+       u32 reg_eb4, reg_ebc, reg_ec4, reg_ecc;
+       bool simu;
+
+       memset(result, 0, sizeof(result));
+       candidate = -1;
+
+       path_a_ok = false;
+       path_b_ok = false;
+
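+       /*
+        * Run up to three calibration passes and use the earlier pass
+        * of the first pair that agrees; candidate 3 (all zero) means
+        * no two passes matched.
+        */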
+       for (i = 0; i < 3; i++) {
+               rtl8192eu_phy_iqcalibrate(priv, result, i);
+
+               if (i == 1) {
+                       simu = rtl8xxxu_gen2_simularity_compare(priv,
+                                                               result, 0, 1);
+                       if (simu) {
+                               candidate = 0;
+                               break;
+                       }
+               }
+
+               if (i == 2) {
+                       simu = rtl8xxxu_gen2_simularity_compare(priv,
+                                                               result, 0, 2);
+                       if (simu) {
+                               candidate = 0;
+                               break;
+                       }
+
+                       simu = rtl8xxxu_gen2_simularity_compare(priv,
+                                                               result, 1, 2);
+                       if (simu)
+                               candidate = 1;
+                       else
+                               candidate = 3;
+               }
+       }
+
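+       /* Note: only the values from the final pass survive this loop */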
+       for (i = 0; i < 4; i++) {
+               reg_e94 = result[i][0];
+               reg_e9c = result[i][1];
+               reg_ea4 = result[i][2];
+               reg_eac = result[i][3];
+               reg_eb4 = result[i][4];
+               reg_ebc = result[i][5];
+               reg_ec4 = result[i][6];
+               reg_ecc = result[i][7];
+       }
+
+       if (candidate >= 0) {
+               reg_e94 = result[candidate][0];
+               priv->rege94 = reg_e94;
+               reg_e9c = result[candidate][1];
+               priv->rege9c = reg_e9c;
+               reg_ea4 = result[candidate][2];
+               reg_eac = result[candidate][3];
+               reg_eb4 = result[candidate][4];
+               priv->regeb4 = reg_eb4;
+               reg_ebc = result[candidate][5];
+               priv->regebc = reg_ebc;
+               reg_ec4 = result[candidate][6];
+               reg_ecc = result[candidate][7];
+               dev_dbg(dev, "%s: candidate is %x\n", __func__, candidate);
+               dev_dbg(dev,
+                       "%s: e94=%x e9c=%x ea4=%x eac=%x eb4=%x ebc=%x ec4=%x "
+                       "ecc=%x\n", __func__, reg_e94, reg_e9c,
+                       reg_ea4, reg_eac, reg_eb4, reg_ebc, reg_ec4, reg_ecc);
+               path_a_ok = true;
+               path_b_ok = true;
+       } else {
+               reg_e94 = reg_eb4 = priv->rege94 = priv->regeb4 = 0x100;
+               reg_e9c = reg_ebc = priv->rege9c = priv->regebc = 0x0;
+       }
+
+       if (reg_e94 && candidate >= 0)
+               rtl8xxxu_fill_iqk_matrix_a(priv, path_a_ok, result,
+                                          candidate, (reg_ea4 == 0));
+
+       if (priv->rf_paths > 1)
+               rtl8xxxu_fill_iqk_matrix_b(priv, path_b_ok, result,
+                                          candidate, (reg_ec4 == 0));
+
+       rtl8xxxu_save_regs(priv, rtl8xxxu_iqk_phy_iq_bb_reg,
+                          priv->bb_recovery_backup, RTL8XXXU_BB_REGS);
+}
+
 static void rtl8723a_phy_lc_calibrate(struct rtl8xxxu_priv *priv)
 {
        u32 val32;
@@ -5231,7 +6551,7 @@ static void rtl8xxxu_set_ampdu_min_space(struct rtl8xxxu_priv *priv, u8 density)
 static int rtl8xxxu_active_to_emu(struct rtl8xxxu_priv *priv)
 {
        u8 val8;
-       int count, ret;
+       int count, ret = 0;
 
        /* Start of rtl8723AU_card_enable_flow */
        /* Act to Cardemu sequence*/
@@ -5281,7 +6601,7 @@ static int rtl8723bu_active_to_emu(struct rtl8xxxu_priv *priv)
        u8 val8;
        u16 val16;
        u32 val32;
-       int count, ret;
+       int count, ret = 0;
 
        /* Turn off RF */
        rtl8xxxu_write8(priv, REG_RF_CTRL, 0);
@@ -5292,9 +6612,9 @@ static int rtl8723bu_active_to_emu(struct rtl8xxxu_priv *priv)
        rtl8xxxu_write16(priv, REG_GPIO_INTM, val16);
 
        /* Release WLON reset 0x04[16]= 1*/
-       val32 = rtl8xxxu_read32(priv, REG_GPIO_INTM);
+       val32 = rtl8xxxu_read32(priv, REG_APS_FSMCO);
        val32 |= APS_FSMCO_WLON_RESET;
-       rtl8xxxu_write32(priv, REG_GPIO_INTM, val32);
+       rtl8xxxu_write32(priv, REG_APS_FSMCO, val32);
 
        /* 0x0005[1] = 1 turn off MAC by HW state machine*/
        val8 = rtl8xxxu_read8(priv, REG_APS_FSMCO + 1);
@@ -5338,7 +6658,7 @@ static int rtl8xxxu_active_to_lps(struct rtl8xxxu_priv *priv)
 {
        u8 val8;
        u8 val32;
-       int count, ret;
+       int count, ret = 0;
 
        rtl8xxxu_write8(priv, REG_TXPAUSE, 0xff);
 
@@ -5756,6 +7076,50 @@ static int rtl8xxxu_flush_fifo(struct rtl8xxxu_priv *priv)
        return retval;
 }
 
+static void rtl8xxxu_gen1_usb_quirks(struct rtl8xxxu_priv *priv)
+{
+       /* Fix USB interface interference issue */
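+       /* The 0xfe4x write triples appear to be undocumented vendor magic */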
+       rtl8xxxu_write8(priv, 0xfe40, 0xe0);
+       rtl8xxxu_write8(priv, 0xfe41, 0x8d);
+       rtl8xxxu_write8(priv, 0xfe42, 0x80);
+       /*
+        * This sets TXDMA_OFFSET_DROP_DATA_EN (bit 9) as well as bits
+        * 8 and 5, for which I have found no documentation.
+        */
+       rtl8xxxu_write32(priv, REG_TXDMA_OFFSET_CHK, 0xfd0320);
+
+       /*
+        * Work around excessive protocol errors on the USB bus.
+        * Can't do this for 8188/8192 UMC A cut parts
+        */
+       if (!(!priv->chip_cut && priv->vendor_umc)) {
+               rtl8xxxu_write8(priv, 0xfe40, 0xe6);
+               rtl8xxxu_write8(priv, 0xfe41, 0x94);
+               rtl8xxxu_write8(priv, 0xfe42, 0x80);
+
+               rtl8xxxu_write8(priv, 0xfe40, 0xe0);
+               rtl8xxxu_write8(priv, 0xfe41, 0x19);
+               rtl8xxxu_write8(priv, 0xfe42, 0x80);
+
+               rtl8xxxu_write8(priv, 0xfe40, 0xe5);
+               rtl8xxxu_write8(priv, 0xfe41, 0x91);
+               rtl8xxxu_write8(priv, 0xfe42, 0x80);
+
+               rtl8xxxu_write8(priv, 0xfe40, 0xe2);
+               rtl8xxxu_write8(priv, 0xfe41, 0x81);
+               rtl8xxxu_write8(priv, 0xfe42, 0x80);
+       }
+}
+
+static void rtl8xxxu_gen2_usb_quirks(struct rtl8xxxu_priv *priv)
+{
+       u32 val32;
+
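+       /* Gen2 parts only need the TXDMA drop-data workaround */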
+       val32 = rtl8xxxu_read32(priv, REG_TXDMA_OFFSET_CHK);
+       val32 |= TXDMA_OFFSET_DROP_DATA_EN;
+       rtl8xxxu_write32(priv, REG_TXDMA_OFFSET_CHK, val32);
+}
+
 static int rtl8723au_power_on(struct rtl8xxxu_priv *priv)
 {
        u8 val8;
@@ -5952,10 +7316,12 @@ static int rtl8192cu_power_on(struct rtl8xxxu_priv *priv)
                CR_SCHEDULE_ENABLE | CR_MAC_TX_ENABLE | CR_MAC_RX_ENABLE;
        rtl8xxxu_write16(priv, REG_CR, val16);
 
+       rtl8xxxu_write8(priv, 0xfe10, 0x19);
+
        /*
         * Workaround for 8188RU LNA power leakage problem.
         */
-       if (priv->rtl_chip == RTL8188C && priv->hi_pa) {
+       if (priv->rtl_chip == RTL8188R) {
                val32 = rtl8xxxu_read32(priv, REG_FPGA0_XCD_RF_PARM);
                val32 &= ~BIT(1);
                rtl8xxxu_write32(priv, REG_FPGA0_XCD_RF_PARM, val32);
@@ -5965,6 +7331,41 @@ static int rtl8192cu_power_on(struct rtl8xxxu_priv *priv)
 
 #endif
 
+/*
+ * This is needed for 8723bu as well, presumably
+ */
+static void rtl8192e_crystal_afe_adjust(struct rtl8xxxu_priv *priv)
+{
+       u8 val8;
+       u32 val32;
+
+       /*
+        * 40MHz crystal source, MAC 0x28[2]=0
+        */
+       val8 = rtl8xxxu_read8(priv, REG_AFE_PLL_CTRL);
+       val8 &= 0xfb;
+       rtl8xxxu_write8(priv, REG_AFE_PLL_CTRL, val8);
+
+       val32 = rtl8xxxu_read32(priv, REG_AFE_CTRL4);
+       val32 &= 0xfffffc7f;
+       rtl8xxxu_write32(priv, REG_AFE_CTRL4, val32);
+
+       /*
+        * 92e AFE parameter
+        * AFE PLL KVCO selection, MAC 0x28[6]=1
+        */
+       val8 = rtl8xxxu_read8(priv, REG_AFE_PLL_CTRL);
+       val8 &= 0xbf;
+       rtl8xxxu_write8(priv, REG_AFE_PLL_CTRL, val8);
+
+       /*
+        * AFE PLL KVCO selection, MAC 0x78[21]=0
+        */
+       val32 = rtl8xxxu_read32(priv, REG_AFE_CTRL4);
+       val32 &= 0xffdfffff;
+       rtl8xxxu_write32(priv, REG_AFE_CTRL4, val32);
+}
+
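The AND masks in rtl8192e_crystal_afe_adjust() are easier to audit as bit clears; the positions below are inferred from the mask values themselves, not from a datasheet, and can be checked with a trivial host-side program:

        #include <assert.h>
        #include <stdint.h>

        #define BIT(n) (1UL << (n))

        int main(void)
        {
                assert((uint8_t)~BIT(2) == 0xfb);           /* clears 0x28[2] */
                assert((uint8_t)~BIT(6) == 0xbf);           /* clears 0x28[6] */
                assert((uint32_t)~(BIT(7) | BIT(8) | BIT(9)) == 0xfffffc7f);
                assert((uint32_t)~BIT(21) == 0xffdfffff);   /* clears 0x78[21] */
                return 0;
        }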
 static int rtl8192eu_power_on(struct rtl8xxxu_priv *priv)
 {
        u16 val16;
@@ -5987,6 +7388,10 @@ static int rtl8192eu_power_on(struct rtl8xxxu_priv *priv)
                rtl8xxxu_write8(priv, REG_LDO_SW_CTRL, 0x83);
        }
 
+       /*
+        * Adjust AFE before enabling PLL
+        */
+       rtl8192e_crystal_afe_adjust(priv);
        rtl8192e_disabled_to_emu(priv);
 
        ret = rtl8192e_emu_to_active(priv);
@@ -6020,7 +7425,7 @@ static void rtl8xxxu_power_off(struct rtl8xxxu_priv *priv)
        /*
         * Workaround for 8188RU LNA power leakage problem.
         */
-       if (priv->rtl_chip == RTL8188C && priv->hi_pa) {
+       if (priv->rtl_chip == RTL8188R) {
                val32 = rtl8xxxu_read32(priv, REG_FPGA0_XCD_RF_PARM);
                val32 |= BIT(1);
                rtl8xxxu_write32(priv, REG_FPGA0_XCD_RF_PARM, val32);
@@ -6075,7 +7480,7 @@ static void rtl8723bu_power_off(struct rtl8xxxu_priv *priv)
        val8 &= ~TX_REPORT_CTRL_TIMER_ENABLE;
        rtl8xxxu_write8(priv, REG_TX_REPORT_CTRL, val8);
 
-       rtl8xxxu_write16(priv, REG_CR, 0x0000);
+       rtl8xxxu_write8(priv, REG_CR, 0x0000);
 
        rtl8xxxu_active_to_lps(priv);
 
@@ -6092,7 +7497,15 @@ static void rtl8723bu_power_off(struct rtl8xxxu_priv *priv)
        rtl8xxxu_write8(priv, REG_MCU_FW_DL, 0x00);
 
        rtl8723bu_active_to_emu(priv);
-       rtl8xxxu_emu_to_disabled(priv);
+
+       val8 = rtl8xxxu_read8(priv, REG_APS_FSMCO + 1);
+       val8 |= BIT(3); /* APS_FSMCO_HW_SUSPEND */
+       rtl8xxxu_write8(priv, REG_APS_FSMCO + 1, val8);
+
+       /* 0x48[16] = 1 to enable GPIO9 as EXT wakeup */
+       val8 = rtl8xxxu_read8(priv, REG_GPIO_INTM + 2);
+       val8 |= BIT(0);
+       rtl8xxxu_write8(priv, REG_GPIO_INTM + 2, val8);
 }
 
 #ifdef NEED_PS_TDMA
@@ -6101,16 +7514,53 @@ static void rtl8723bu_set_ps_tdma(struct rtl8xxxu_priv *priv,
 {
        struct h2c_cmd h2c;
 
-       memset(&h2c, 0, sizeof(struct h2c_cmd));
-       h2c.b_type_dma.cmd = H2C_8723B_B_TYPE_TDMA;
-       h2c.b_type_dma.data1 = arg1;
-       h2c.b_type_dma.data2 = arg2;
-       h2c.b_type_dma.data3 = arg3;
-       h2c.b_type_dma.data4 = arg4;
-       h2c.b_type_dma.data5 = arg5;
-       rtl8723a_h2c_cmd(priv, &h2c, sizeof(h2c.b_type_dma));
+       memset(&h2c, 0, sizeof(struct h2c_cmd));
+       h2c.b_type_dma.cmd = H2C_8723B_B_TYPE_TDMA;
+       h2c.b_type_dma.data1 = arg1;
+       h2c.b_type_dma.data2 = arg2;
+       h2c.b_type_dma.data3 = arg3;
+       h2c.b_type_dma.data4 = arg4;
+       h2c.b_type_dma.data5 = arg5;
+       rtl8723a_h2c_cmd(priv, &h2c, sizeof(h2c.b_type_dma));
+}
+#endif
+
+static void rtl8192e_enable_rf(struct rtl8xxxu_priv *priv)
+{
+       u32 val32;
+       u8 val8;
+
+       val8 = rtl8xxxu_read8(priv, REG_GPIO_MUXCFG);
+       val8 |= BIT(5);
+       rtl8xxxu_write8(priv, REG_GPIO_MUXCFG, val8);
+
+       /*
+        * WLAN action by PTA
+        */
+       rtl8xxxu_write8(priv, REG_WLAN_ACT_CONTROL_8723B, 0x04);
+
+       val32 = rtl8xxxu_read32(priv, REG_PWR_DATA);
+       val32 |= PWR_DATA_EEPRPAD_RFE_CTRL_EN;
+       rtl8xxxu_write32(priv, REG_PWR_DATA, val32);
+
+       val32 = rtl8xxxu_read32(priv, REG_RFE_BUFFER);
+       val32 |= (BIT(0) | BIT(1));
+       rtl8xxxu_write32(priv, REG_RFE_BUFFER, val32);
+
+       rtl8xxxu_write8(priv, REG_RFE_CTRL_ANTA_SRC, 0x77);
+
+       val32 = rtl8xxxu_read32(priv, REG_LEDCFG0);
+       val32 &= ~BIT(24);
+       val32 |= BIT(23);
+       rtl8xxxu_write32(priv, REG_LEDCFG0, val32);
+
+       /*
+        * Fix external switch Main->S1, Aux->S0
+        */
+       val8 = rtl8xxxu_read8(priv, REG_PAD_CTRL1);
+       val8 &= ~BIT(0);
+       rtl8xxxu_write8(priv, REG_PAD_CTRL1, val8);
 }
-#endif
 
 static void rtl8723b_enable_rf(struct rtl8xxxu_priv *priv)
 {
@@ -6219,12 +7669,10 @@ static void rtl8723b_enable_rf(struct rtl8xxxu_priv *priv)
        rtl8723a_h2c_cmd(priv, &h2c, sizeof(h2c.ignore_wlan));
 }
 
-static void rtl8723b_disable_rf(struct rtl8xxxu_priv *priv)
+static void rtl8xxxu_gen2_disable_rf(struct rtl8xxxu_priv *priv)
 {
        u32 val32;
 
-       rtl8xxxu_write8(priv, REG_TXPAUSE, 0xff);
-
        val32 = rtl8xxxu_read32(priv, REG_RX_WAIT_CCA);
        val32 &= ~(BIT(22) | BIT(23));
        rtl8xxxu_write32(priv, REG_RX_WAIT_CCA, val32);
@@ -6272,11 +7720,64 @@ static void rtl8723bu_init_statistics(struct rtl8xxxu_priv *priv)
        rtl8xxxu_write32(priv, REG_OFDM0_FA_RSTC, val32);
 }
 
+static void rtl8xxxu_old_init_queue_reserved_page(struct rtl8xxxu_priv *priv)
+{
+       u8 val8;
+       u32 val32;
+
+       if (priv->ep_tx_normal_queue)
+               val8 = TX_PAGE_NUM_NORM_PQ;
+       else
+               val8 = 0;
+
+       rtl8xxxu_write8(priv, REG_RQPN_NPQ, val8);
+
+       val32 = (TX_PAGE_NUM_PUBQ << RQPN_PUB_PQ_SHIFT) | RQPN_LOAD;
+
+       if (priv->ep_tx_high_queue)
+               val32 |= (TX_PAGE_NUM_HI_PQ << RQPN_HI_PQ_SHIFT);
+       if (priv->ep_tx_low_queue)
+               val32 |= (TX_PAGE_NUM_LO_PQ << RQPN_LO_PQ_SHIFT);
+
+       rtl8xxxu_write32(priv, REG_RQPN, val32);
+}
+
+static void rtl8xxxu_init_queue_reserved_page(struct rtl8xxxu_priv *priv)
+{
+       struct rtl8xxxu_fileops *fops = priv->fops;
+       u32 hq, lq, nq, eq, pubq;
+       u32 val32;
+
+       hq = 0;
+       lq = 0;
+       nq = 0;
+       eq = 0;
+       pubq = 0;
+
+       if (priv->ep_tx_high_queue)
+               hq = fops->page_num_hi;
+       if (priv->ep_tx_low_queue)
+               lq = fops->page_num_lo;
+       if (priv->ep_tx_normal_queue)
+               nq = fops->page_num_norm;
+
+       val32 = (nq << RQPN_NPQ_SHIFT) | (eq << RQPN_EPQ_SHIFT);
+       rtl8xxxu_write32(priv, REG_RQPN_NPQ, val32);
+
+       pubq = fops->total_page_num - hq - lq - nq;
+
+       val32 = RQPN_LOAD;
+       val32 |= (hq << RQPN_HI_PQ_SHIFT);
+       val32 |= (lq << RQPN_LO_PQ_SHIFT);
+       val32 |= (pubq << RQPN_PUB_PQ_SHIFT);
+
+       rtl8xxxu_write32(priv, REG_RQPN, val32);
+}
+
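rtl8xxxu_init_queue_reserved_page() gives the public queue whatever the high/low/normal queues leave over. Plugging in the 8192e page constants added to the header later in this patch (total 0xf3, hi 0x08, lo 0x0c, norm 0x00) and assuming all three TX endpoints are present, the budget works out as follows:

        #include <stdio.h>

        int main(void)
        {
                /* TX_TOTAL_PAGE_NUM_8192E and the 8192e per-queue counts */
                unsigned int total = 0xf3, hq = 0x08, lq = 0x0c, nq = 0x00;
                unsigned int pubq = total - hq - lq - nq;

                printf("pubq = 0x%02x (%u pages)\n", pubq, pubq);  /* 0xdf, 223 */
                return 0;
        }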
 static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
 {
        struct rtl8xxxu_priv *priv = hw->priv;
        struct device *dev = &priv->udev->dev;
-       struct rtl8xxxu_rfregval *rftable;
        bool macpower;
        int ret;
        u8 val8;
@@ -6301,33 +7802,22 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
                goto exit;
        }
 
-       dev_dbg(dev, "%s: macpower %i\n", __func__, macpower);
        if (!macpower) {
-               ret = priv->fops->llt_init(priv, TX_TOTAL_PAGE_NUM);
-               if (ret) {
-                       dev_warn(dev, "%s: LLT table init failed\n", __func__);
-                       goto exit;
-               }
+               if (priv->fops->total_page_num)
+                       rtl8xxxu_init_queue_reserved_page(priv);
+               else
+                       rtl8xxxu_old_init_queue_reserved_page(priv);
+       }
 
-               /*
-                * Presumably this is for 8188EU as well
-                * Enable TX report and TX report timer
-                */
-               if (priv->rtl_chip == RTL8723B) {
-                       val8 = rtl8xxxu_read8(priv, REG_TX_REPORT_CTRL);
-                       val8 |= TX_REPORT_CTRL_TIMER_ENABLE;
-                       rtl8xxxu_write8(priv, REG_TX_REPORT_CTRL, val8);
-                       /* Set MAX RPT MACID */
-                       rtl8xxxu_write8(priv, REG_TX_REPORT_CTRL + 1, 0x02);
-                       /* TX report Timer. Unit: 32us */
-                       rtl8xxxu_write16(priv, REG_TX_REPORT_TIME, 0xcdf0);
+       ret = rtl8xxxu_init_queue_priority(priv);
+       dev_dbg(dev, "%s: init_queue_priority %i\n", __func__, ret);
+       if (ret)
+               goto exit;
 
-                       /* tmp ps ? */
-                       val8 = rtl8xxxu_read8(priv, 0xa3);
-                       val8 &= 0xf8;
-                       rtl8xxxu_write8(priv, 0xa3, val8);
-               }
-       }
+       /*
+        * Set RX page boundary
+        */
+       rtl8xxxu_write16(priv, REG_TRXFF_BNDY + 2, priv->fops->trxff_boundary);
 
        ret = rtl8xxxu_download_firmware(priv);
        dev_dbg(dev, "%s: download_firmware %i\n", __func__, ret);
@@ -6338,41 +7828,10 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
        if (ret)
                goto exit;
 
-       /* Solve too many protocol error on USB bus */
-       /* Can't do this for 8188/8192 UMC A cut parts */
-       if (priv->rtl_chip == RTL8723A ||
-           ((priv->rtl_chip == RTL8192C || priv->rtl_chip == RTL8191C ||
-             priv->rtl_chip == RTL8188C) &&
-            (priv->chip_cut || !priv->vendor_umc))) {
-               rtl8xxxu_write8(priv, 0xfe40, 0xe6);
-               rtl8xxxu_write8(priv, 0xfe41, 0x94);
-               rtl8xxxu_write8(priv, 0xfe42, 0x80);
-
-               rtl8xxxu_write8(priv, 0xfe40, 0xe0);
-               rtl8xxxu_write8(priv, 0xfe41, 0x19);
-               rtl8xxxu_write8(priv, 0xfe42, 0x80);
-
-               rtl8xxxu_write8(priv, 0xfe40, 0xe5);
-               rtl8xxxu_write8(priv, 0xfe41, 0x91);
-               rtl8xxxu_write8(priv, 0xfe42, 0x80);
-
-               rtl8xxxu_write8(priv, 0xfe40, 0xe2);
-               rtl8xxxu_write8(priv, 0xfe41, 0x81);
-               rtl8xxxu_write8(priv, 0xfe42, 0x80);
-       }
-
-       if (priv->rtl_chip == RTL8192E) {
-               rtl8xxxu_write32(priv, REG_HIMR0, 0x00);
-               rtl8xxxu_write32(priv, REG_HIMR1, 0x00);
-       }
-
        if (priv->fops->phy_init_antenna_selection)
                priv->fops->phy_init_antenna_selection(priv);
 
-       if (priv->rtl_chip == RTL8723B)
-               ret = rtl8xxxu_init_mac(priv, rtl8723b_mac_init_table);
-       else
-               ret = rtl8xxxu_init_mac(priv, rtl8723a_mac_init_table);
+       ret = rtl8xxxu_init_mac(priv);
 
        dev_dbg(dev, "%s: init_mac %i\n", __func__, ret);
        if (ret)
@@ -6383,90 +7842,35 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
        if (ret)
                goto exit;
 
-       switch(priv->rtl_chip) {
-       case RTL8723A:
-               rftable = rtl8723au_radioa_1t_init_table;
-               ret = rtl8xxxu_init_phy_rf(priv, rftable, RF_A);
-               break;
-       case RTL8723B:
-               rftable = rtl8723bu_radioa_1t_init_table;
-               ret = rtl8xxxu_init_phy_rf(priv, rftable, RF_A);
-               /*
-                * PHY LCK
-                */
-               rtl8xxxu_write_rfreg(priv, RF_A, 0xb0, 0xdfbe0);
-               rtl8xxxu_write_rfreg(priv, RF_A, RF6052_REG_MODE_AG, 0x8c01);
-               msleep(200);
-               rtl8xxxu_write_rfreg(priv, RF_A, 0xb0, 0xdffe0);
-               break;
-       case RTL8188C:
-               if (priv->hi_pa)
-                       rftable = rtl8188ru_radioa_1t_highpa_table;
-               else
-                       rftable = rtl8192cu_radioa_1t_init_table;
-               ret = rtl8xxxu_init_phy_rf(priv, rftable, RF_A);
-               break;
-       case RTL8191C:
-               rftable = rtl8192cu_radioa_1t_init_table;
-               ret = rtl8xxxu_init_phy_rf(priv, rftable, RF_A);
-               break;
-       case RTL8192C:
-               rftable = rtl8192cu_radioa_2t_init_table;
-               ret = rtl8xxxu_init_phy_rf(priv, rftable, RF_A);
-               if (ret)
-                       break;
-               rftable = rtl8192cu_radiob_2t_init_table;
-               ret = rtl8xxxu_init_phy_rf(priv, rftable, RF_B);
-               break;
-       default:
-               ret = -EINVAL;
-       }
-
+       ret = priv->fops->init_phy_rf(priv);
        if (ret)
                goto exit;
 
-       /*
-        * Chip specific quirks
-        */
-       if (priv->rtl_chip == RTL8723A) {
-               /* Fix USB interface interference issue */
-               rtl8xxxu_write8(priv, 0xfe40, 0xe0);
-               rtl8xxxu_write8(priv, 0xfe41, 0x8d);
-               rtl8xxxu_write8(priv, 0xfe42, 0x80);
-               rtl8xxxu_write32(priv, REG_TXDMA_OFFSET_CHK, 0xfd0320);
+       /* RFSW Control - clear bit 14 ?? */
+       if (priv->rtl_chip != RTL8723B && priv->rtl_chip != RTL8192E)
+               rtl8xxxu_write32(priv, REG_FPGA0_TX_INFO, 0x00000003);
 
-               /* Reduce 80M spur */
-               rtl8xxxu_write32(priv, REG_AFE_XTAL_CTRL, 0x0381808d);
-               rtl8xxxu_write32(priv, REG_AFE_PLL_CTRL, 0xf0ffff83);
-               rtl8xxxu_write32(priv, REG_AFE_PLL_CTRL, 0xf0ffff82);
-               rtl8xxxu_write32(priv, REG_AFE_PLL_CTRL, 0xf0ffff83);
-       } else {
-               val32 = rtl8xxxu_read32(priv, REG_TXDMA_OFFSET_CHK);
-               val32 |= TXDMA_OFFSET_DROP_DATA_EN;
-               rtl8xxxu_write32(priv, REG_TXDMA_OFFSET_CHK, val32);
+       val32 = FPGA0_RF_TRSW | FPGA0_RF_TRSWB | FPGA0_RF_ANTSW |
+               FPGA0_RF_ANTSWB |
+               ((FPGA0_RF_ANTSW | FPGA0_RF_ANTSWB) << FPGA0_RF_BD_CTRL_SHIFT);
+       if (!priv->no_pape) {
+               val32 |= (FPGA0_RF_PAPE |
+                         (FPGA0_RF_PAPE << FPGA0_RF_BD_CTRL_SHIFT));
        }
+       rtl8xxxu_write32(priv, REG_FPGA0_XAB_RF_SW_CTRL, val32);
 
-       if (!macpower) {
-               if (priv->ep_tx_normal_queue)
-                       val8 = TX_PAGE_NUM_NORM_PQ;
-               else
-                       val8 = 0;
-
-               rtl8xxxu_write8(priv, REG_RQPN_NPQ, val8);
-
-               val32 = (TX_PAGE_NUM_PUBQ << RQPN_NORM_PQ_SHIFT) | RQPN_LOAD;
-
-               if (priv->ep_tx_high_queue)
-                       val32 |= (TX_PAGE_NUM_HI_PQ << RQPN_HI_PQ_SHIFT);
-               if (priv->ep_tx_low_queue)
-                       val32 |= (TX_PAGE_NUM_LO_PQ << RQPN_LO_PQ_SHIFT);
-
-               rtl8xxxu_write32(priv, REG_RQPN, val32);
+       /* 0x860[6:5]= 00 - why? - this sets antenna B */
+       if (priv->rtl_chip != RTL8192E)
+               rtl8xxxu_write32(priv, REG_FPGA0_XA_RF_INT_OE, 0x66f60210);
 
+       if (!macpower) {
                /*
                 * Set TX buffer boundary
                 */
-               val8 = TX_TOTAL_PAGE_NUM + 1;
+               if (priv->rtl_chip == RTL8192E)
+                       val8 = TX_TOTAL_PAGE_NUM_8192E + 1;
+               else
+                       val8 = TX_TOTAL_PAGE_NUM + 1;
 
                if (priv->rtl_chip == RTL8723B)
                        val8 -= 1;
@@ -6478,54 +7882,63 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
                rtl8xxxu_write8(priv, REG_TDECTRL + 1, val8);
        }
 
-       ret = rtl8xxxu_init_queue_priority(priv);
-       dev_dbg(dev, "%s: init_queue_priority %i\n", __func__, ret);
-       if (ret)
-               goto exit;
+       /*
+        * The vendor drivers set PBP for all devices, except 8192e.
+        * There is no explanation for this in any of the sources.
+        */
+       val8 = (priv->fops->pbp_rx << PBP_PAGE_SIZE_RX_SHIFT) |
+               (priv->fops->pbp_tx << PBP_PAGE_SIZE_TX_SHIFT);
+       if (priv->rtl_chip != RTL8192E)
+               rtl8xxxu_write8(priv, REG_PBP, val8);
 
-       /* RFSW Control - clear bit 14 ?? */
-       if (priv->rtl_chip != RTL8723B)
-               rtl8xxxu_write32(priv, REG_FPGA0_TX_INFO, 0x00000003);
-       /* 0x07000760 */
-       val32 = FPGA0_RF_TRSW | FPGA0_RF_TRSWB | FPGA0_RF_ANTSW |
-               FPGA0_RF_ANTSWB | FPGA0_RF_PAPE |
-               ((FPGA0_RF_ANTSW | FPGA0_RF_ANTSWB | FPGA0_RF_PAPE) <<
-                FPGA0_RF_BD_CTRL_SHIFT);
-       rtl8xxxu_write32(priv, REG_FPGA0_XAB_RF_SW_CTRL, val32);
-       /* 0x860[6:5]= 00 - why? - this sets antenna B */
-       rtl8xxxu_write32(priv, REG_FPGA0_XA_RF_INT_OE, 0x66F60210);
+       dev_dbg(dev, "%s: macpower %i\n", __func__, macpower);
+       if (!macpower) {
+               ret = priv->fops->llt_init(priv, TX_TOTAL_PAGE_NUM);
+               if (ret) {
+                       dev_warn(dev, "%s: LLT table init failed\n", __func__);
+                       goto exit;
+               }
 
-       priv->rf_mode_ag[0] = rtl8xxxu_read_rfreg(priv, RF_A,
-                                                 RF6052_REG_MODE_AG);
+               /*
+                * Chip specific quirks
+                */
+               priv->fops->usb_quirks(priv);
 
-       /*
-        * Set RX page boundary
-        */
-       if (priv->rtl_chip == RTL8723B)
-               rtl8xxxu_write16(priv, REG_TRXFF_BNDY + 2, 0x3f7f);
-       else
-               rtl8xxxu_write16(priv, REG_TRXFF_BNDY + 2, 0x27ff);
-       /*
-        * Transfer page size is always 128
-        */
-       if (priv->rtl_chip == RTL8723B)
-               val8 = (PBP_PAGE_SIZE_256 << PBP_PAGE_SIZE_RX_SHIFT) |
-                       (PBP_PAGE_SIZE_256 << PBP_PAGE_SIZE_TX_SHIFT);
-       else
-               val8 = (PBP_PAGE_SIZE_128 << PBP_PAGE_SIZE_RX_SHIFT) |
-                       (PBP_PAGE_SIZE_128 << PBP_PAGE_SIZE_TX_SHIFT);
-       rtl8xxxu_write8(priv, REG_PBP, val8);
+               /*
+                * Presumably this is for 8188EU as well
+                * Enable TX report and TX report timer
+                */
+               if (priv->rtl_chip == RTL8723B) {
+                       val8 = rtl8xxxu_read8(priv, REG_TX_REPORT_CTRL);
+                       val8 |= TX_REPORT_CTRL_TIMER_ENABLE;
+                       rtl8xxxu_write8(priv, REG_TX_REPORT_CTRL, val8);
+                       /* Set MAX RPT MACID */
+                       rtl8xxxu_write8(priv, REG_TX_REPORT_CTRL + 1, 0x02);
+                       /* TX report Timer. Unit: 32us */
+                       rtl8xxxu_write16(priv, REG_TX_REPORT_TIME, 0xcdf0);
+
+                       /* tmp ps ? */
+                       val8 = rtl8xxxu_read8(priv, 0xa3);
+                       val8 &= 0xf8;
+                       rtl8xxxu_write8(priv, 0xa3, val8);
+               }
+       }
 
        /*
         * Unit in 8 bytes, not obvious what it is used for
         */
        rtl8xxxu_write8(priv, REG_RX_DRVINFO_SZ, 4);
 
-       /*
-        * Enable all interrupts - not obvious USB needs to do this
-        */
-       rtl8xxxu_write32(priv, REG_HISR, 0xffffffff);
-       rtl8xxxu_write32(priv, REG_HIMR, 0xffffffff);
+       if (priv->rtl_chip == RTL8192E) {
+               rtl8xxxu_write32(priv, REG_HIMR0, 0x00);
+               rtl8xxxu_write32(priv, REG_HIMR1, 0x00);
+       } else {
+               /*
+                * Enable all interrupts - not obvious USB needs to do this
+                */
+               rtl8xxxu_write32(priv, REG_HISR, 0xffffffff);
+               rtl8xxxu_write32(priv, REG_HIMR, 0xffffffff);
+       }
 
        rtl8xxxu_set_mac(priv);
        rtl8xxxu_set_linktype(priv, NL80211_IFTYPE_STATION);
@@ -6651,9 +8064,11 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
        priv->fops->set_tx_power(priv, 1, false);
 
        /* Let the 8051 take control of antenna setting */
-       val8 = rtl8xxxu_read8(priv, REG_LEDCFG2);
-       val8 |= LEDCFG2_DPDT_SELECT;
-       rtl8xxxu_write8(priv, REG_LEDCFG2, val8);
+       if (priv->rtl_chip != RTL8192E) {
+               val8 = rtl8xxxu_read8(priv, REG_LEDCFG2);
+               val8 |= LEDCFG2_DPDT_SELECT;
+               rtl8xxxu_write8(priv, REG_LEDCFG2, val8);
+       }
 
        rtl8xxxu_write8(priv, REG_HWSEQ_CTRL, 0xff);
 
@@ -6665,6 +8080,20 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
        if (priv->fops->init_statistics)
                priv->fops->init_statistics(priv);
 
+       if (priv->rtl_chip == RTL8192E) {
+               /*
+                * 0x4c6[3]=1: RTS BW = Data BW
+                * 0x4c6[3]=0: RTS BW depends on CCA / secondary CCA result.
+                */
+               val8 = rtl8xxxu_read8(priv, REG_QUEUE_CTRL);
+               val8 &= ~BIT(3);
+               rtl8xxxu_write8(priv, REG_QUEUE_CTRL, val8);
+               /*
+                * Reset USB mode switch setting
+                */
+               rtl8xxxu_write8(priv, REG_ACLK_MON, 0x00);
+       }
+
        rtl8723a_phy_lc_calibrate(priv);
 
        priv->fops->phy_iq_calibrate(priv);
@@ -6672,7 +8101,7 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
        /*
         * This should enable thermal meter
         */
-       if (priv->fops->has_s0s1)
+       if (priv->fops->tx_desc_size == sizeof(struct rtl8xxxu_txdesc40))
                rtl8xxxu_write_rfreg(priv,
                                     RF_A, RF6052_REG_T_METER_8723B, 0x37cf8);
        else
@@ -6693,6 +8122,8 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
                        val32 |= FPGA_RF_MODE_CCK;
                        rtl8xxxu_write32(priv, REG_FPGA0_RF_MODE, val32);
                }
+       } else if (priv->rtl_chip == RTL8192E) {
+               rtl8xxxu_write8(priv, REG_USB_HRPWM, 0x00);
        }
 
        val32 = rtl8xxxu_read32(priv, REG_FWHW_TXQ_CTRL);
@@ -6700,17 +8131,20 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
        /* ack for xmit mgmt frames. */
        rtl8xxxu_write32(priv, REG_FWHW_TXQ_CTRL, val32);
 
+       if (priv->rtl_chip == RTL8192E) {
+               /*
+                * Fix LDPC rx hang issue.
+                */
+               val32 = rtl8xxxu_read32(priv, REG_AFE_MISC);
+               rtl8xxxu_write8(priv, REG_8192E_LDOV12_CTRL, 0x75);
+               val32 &= 0xfff00fff;
+               val32 |= 0x0007e000;
+               rtl8xxxu_write32(priv, REG_AFE_MISC, val32);
+       }
 exit:
        return ret;
 }
 
-static void rtl8xxxu_disable_device(struct ieee80211_hw *hw)
-{
-       struct rtl8xxxu_priv *priv = hw->priv;
-
-       priv->fops->power_off(priv);
-}
-
 static void rtl8xxxu_cam_write(struct rtl8xxxu_priv *priv,
                               struct ieee80211_key_conf *key, const u8 *mac)
 {
@@ -6775,8 +8209,8 @@ static void rtl8xxxu_sw_scan_complete(struct ieee80211_hw *hw,
        rtl8xxxu_write8(priv, REG_BEACON_CTRL, val8);
 }
 
-static void rtl8723au_update_rate_mask(struct rtl8xxxu_priv *priv,
-                                      u32 ramask, int sgi)
+static void rtl8xxxu_update_rate_mask(struct rtl8xxxu_priv *priv,
+                                     u32 ramask, int sgi)
 {
        struct h2c_cmd h2c;
 
@@ -6795,8 +8229,8 @@ static void rtl8723au_update_rate_mask(struct rtl8xxxu_priv *priv,
        rtl8723a_h2c_cmd(priv, &h2c, sizeof(h2c.ramask));
 }
 
-static void rtl8723bu_update_rate_mask(struct rtl8xxxu_priv *priv,
-                                      u32 ramask, int sgi)
+static void rtl8xxxu_gen2_update_rate_mask(struct rtl8xxxu_priv *priv,
+                                          u32 ramask, int sgi)
 {
        struct h2c_cmd h2c;
        u8 bw = 0;
@@ -6821,8 +8255,8 @@ static void rtl8723bu_update_rate_mask(struct rtl8xxxu_priv *priv,
        rtl8723a_h2c_cmd(priv, &h2c, sizeof(h2c.b_macid_cfg));
 }
 
-static void rtl8723au_report_connect(struct rtl8xxxu_priv *priv,
-                                    u8 macid, bool connect)
+static void rtl8xxxu_gen1_report_connect(struct rtl8xxxu_priv *priv,
+                                        u8 macid, bool connect)
 {
        struct h2c_cmd h2c;
 
@@ -6838,8 +8272,8 @@ static void rtl8723au_report_connect(struct rtl8xxxu_priv *priv,
        rtl8723a_h2c_cmd(priv, &h2c, sizeof(h2c.joinbss));
 }
 
-static void rtl8723bu_report_connect(struct rtl8xxxu_priv *priv,
-                                    u8 macid, bool connect)
+static void rtl8xxxu_gen2_report_connect(struct rtl8xxxu_priv *priv,
+                                        u8 macid, bool connect)
 {
        struct h2c_cmd h2c;
 
@@ -7492,15 +8926,22 @@ static void rtl8xxxu_rx_urb_work(struct work_struct *work)
        }
 }
 
-static int rtl8723au_parse_rx_desc(struct rtl8xxxu_priv *priv,
+static int rtl8xxxu_parse_rxdesc16(struct rtl8xxxu_priv *priv,
                                   struct sk_buff *skb,
                                   struct ieee80211_rx_status *rx_status)
 {
-       struct rtl8xxxu_rx_desc *rx_desc = (struct rtl8xxxu_rx_desc *)skb->data;
+       struct rtl8xxxu_rxdesc16 *rx_desc =
+               (struct rtl8xxxu_rxdesc16 *)skb->data;
        struct rtl8723au_phy_stats *phy_stats;
+       __le32 *_rx_desc_le = (__le32 *)skb->data;
+       u32 *_rx_desc = (u32 *)skb->data;
        int drvinfo_sz, desc_shift;
+       int i;
+
+       for (i = 0; i < (sizeof(struct rtl8xxxu_rxdesc16) / sizeof(u32)); i++)
+               _rx_desc[i] = le32_to_cpu(_rx_desc_le[i]);
 
-       skb_pull(skb, sizeof(struct rtl8xxxu_rx_desc));
+       skb_pull(skb, sizeof(struct rtl8xxxu_rxdesc16));
 
        phy_stats = (struct rtl8723au_phy_stats *)skb->data;
 
@@ -7532,16 +8973,22 @@ static int rtl8723au_parse_rx_desc(struct rtl8xxxu_priv *priv,
        return RX_TYPE_DATA_PKT;
 }
 
-static int rtl8723bu_parse_rx_desc(struct rtl8xxxu_priv *priv,
+static int rtl8xxxu_parse_rxdesc24(struct rtl8xxxu_priv *priv,
                                   struct sk_buff *skb,
                                   struct ieee80211_rx_status *rx_status)
 {
-       struct rtl8723bu_rx_desc *rx_desc =
-               (struct rtl8723bu_rx_desc *)skb->data;
+       struct rtl8xxxu_rxdesc24 *rx_desc =
+               (struct rtl8xxxu_rxdesc24 *)skb->data;
        struct rtl8723au_phy_stats *phy_stats;
+       __le32 *_rx_desc_le = (__le32 *)skb->data;
+       u32 *_rx_desc = (u32 *)skb->data;
        int drvinfo_sz, desc_shift;
+       int i;
+
+       for (i = 0; i < (sizeof(struct rtl8xxxu_rxdesc24) / sizeof(u32)); i++)
+               _rx_desc[i] = le32_to_cpu(_rx_desc_le[i]);
 
-       skb_pull(skb, sizeof(struct rtl8723bu_rx_desc));
+       skb_pull(skb, sizeof(struct rtl8xxxu_rxdesc24));
 
        phy_stats = (struct rtl8723au_phy_stats *)skb->data;
 
@@ -7633,12 +9080,7 @@ static void rtl8xxxu_rx_complete(struct urb *urb)
        struct sk_buff *skb = (struct sk_buff *)urb->context;
        struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb);
        struct device *dev = &priv->udev->dev;
-       __le32 *_rx_desc_le = (__le32 *)skb->data;
-       u32 *_rx_desc = (u32 *)skb->data;
-       int rx_type, i;
-
-       for (i = 0; i < (sizeof(struct rtl8xxxu_rx_desc) / sizeof(u32)); i++)
-               _rx_desc[i] = le32_to_cpu(_rx_desc_le[i]);
+       int rx_type;
 
        skb_put(skb, urb->actual_length);
 
@@ -7677,14 +9119,15 @@ static int rtl8xxxu_submit_rx_urb(struct rtl8xxxu_priv *priv,
 {
        struct sk_buff *skb;
        int skb_size;
-       int ret;
+       int ret, rx_desc_sz;
 
-       skb_size = sizeof(struct rtl8xxxu_rx_desc) + RTL_RX_BUFFER_SIZE;
+       rx_desc_sz = priv->fops->rx_desc_size;
+       skb_size = rx_desc_sz + RTL_RX_BUFFER_SIZE;
        skb = __netdev_alloc_skb(NULL, skb_size, GFP_KERNEL);
        if (!skb)
                return -ENOMEM;
 
-       memset(skb->data, 0, sizeof(struct rtl8xxxu_rx_desc));
+       memset(skb->data, 0, rx_desc_sz);
        usb_fill_bulk_urb(&rx_urb->urb, priv->udev, priv->pipe_in, skb->data,
                          skb_size, rtl8xxxu_rx_complete, skb);
        usb_anchor_urb(&rx_urb->urb, &priv->rx_anchor);
@@ -8154,6 +9597,8 @@ static void rtl8xxxu_stop(struct ieee80211_hw *hw)
        if (priv->usb_interrupts)
                usb_kill_anchored_urbs(&priv->int_anchor);
 
+       rtl8xxxu_write8(priv, REG_TXPAUSE, 0xff);
+
        priv->fops->disable_rf(priv);
 
        /*
@@ -8286,6 +9731,10 @@ static int rtl8xxxu_probe(struct usb_interface *interface,
                if (id->idProduct == 0x7811)
                        untested = 0;
                break;
+       case 0x050d:
+               if (id->idProduct == 0x1004)
+                       untested = 0;
+               break;
        default:
                break;
        }
@@ -8414,13 +9863,14 @@ static void rtl8xxxu_disconnect(struct usb_interface *interface)
        hw = usb_get_intfdata(interface);
        priv = hw->priv;
 
-       rtl8xxxu_disable_device(hw);
+       ieee80211_unregister_hw(hw);
+
+       priv->fops->power_off(priv);
+
        usb_set_intfdata(interface, NULL);
 
        dev_info(&priv->udev->dev, "disconnecting\n");
 
-       ieee80211_unregister_hw(hw);
-
        kfree(priv->fw_data);
        mutex_destroy(&priv->usb_buf_mutex);
        mutex_destroy(&priv->h2c_mutex);
@@ -8436,22 +9886,30 @@ static struct rtl8xxxu_fileops rtl8723au_fops = {
        .power_off = rtl8xxxu_power_off,
        .reset_8051 = rtl8xxxu_reset_8051,
        .llt_init = rtl8xxxu_init_llt_table,
-       .phy_iq_calibrate = rtl8723au_phy_iq_calibrate,
-       .config_channel = rtl8723au_config_channel,
-       .parse_rx_desc = rtl8723au_parse_rx_desc,
-       .enable_rf = rtl8723a_enable_rf,
-       .disable_rf = rtl8723a_disable_rf,
-       .set_tx_power = rtl8723a_set_tx_power,
-       .update_rate_mask = rtl8723au_update_rate_mask,
-       .report_connect = rtl8723au_report_connect,
+       .init_phy_bb = rtl8xxxu_gen1_init_phy_bb,
+       .init_phy_rf = rtl8723au_init_phy_rf,
+       .phy_iq_calibrate = rtl8xxxu_gen1_phy_iq_calibrate,
+       .config_channel = rtl8xxxu_gen1_config_channel,
+       .parse_rx_desc = rtl8xxxu_parse_rxdesc16,
+       .enable_rf = rtl8xxxu_gen1_enable_rf,
+       .disable_rf = rtl8xxxu_gen1_disable_rf,
+       .usb_quirks = rtl8xxxu_gen1_usb_quirks,
+       .set_tx_power = rtl8xxxu_gen1_set_tx_power,
+       .update_rate_mask = rtl8xxxu_update_rate_mask,
+       .report_connect = rtl8xxxu_gen1_report_connect,
        .writeN_block_size = 1024,
        .mbox_ext_reg = REG_HMBOX_EXT_0,
        .mbox_ext_width = 2,
        .tx_desc_size = sizeof(struct rtl8xxxu_txdesc32),
+       .rx_desc_size = sizeof(struct rtl8xxxu_rxdesc16),
        .adda_1t_init = 0x0b1b25a0,
        .adda_1t_path_on = 0x0bdb25a0,
        .adda_2t_path_on_a = 0x04db25a4,
        .adda_2t_path_on_b = 0x0b1b25a4,
+       .trxff_boundary = 0x27ff,
+       .pbp_rx = PBP_PAGE_SIZE_128,
+       .pbp_tx = PBP_PAGE_SIZE_128,
+       .mactable = rtl8xxxu_gen1_mac_init_table,
 };
 
 static struct rtl8xxxu_fileops rtl8723bu_fops = {
@@ -8461,26 +9919,34 @@ static struct rtl8xxxu_fileops rtl8723bu_fops = {
        .power_off = rtl8723bu_power_off,
        .reset_8051 = rtl8723bu_reset_8051,
        .llt_init = rtl8xxxu_auto_llt_table,
+       .init_phy_bb = rtl8723bu_init_phy_bb,
+       .init_phy_rf = rtl8723bu_init_phy_rf,
        .phy_init_antenna_selection = rtl8723bu_phy_init_antenna_selection,
        .phy_iq_calibrate = rtl8723bu_phy_iq_calibrate,
-       .config_channel = rtl8723bu_config_channel,
-       .parse_rx_desc = rtl8723bu_parse_rx_desc,
+       .config_channel = rtl8xxxu_gen2_config_channel,
+       .parse_rx_desc = rtl8xxxu_parse_rxdesc24,
        .init_aggregation = rtl8723bu_init_aggregation,
        .init_statistics = rtl8723bu_init_statistics,
        .enable_rf = rtl8723b_enable_rf,
-       .disable_rf = rtl8723b_disable_rf,
+       .disable_rf = rtl8xxxu_gen2_disable_rf,
+       .usb_quirks = rtl8xxxu_gen2_usb_quirks,
        .set_tx_power = rtl8723b_set_tx_power,
-       .update_rate_mask = rtl8723bu_update_rate_mask,
-       .report_connect = rtl8723bu_report_connect,
+       .update_rate_mask = rtl8xxxu_gen2_update_rate_mask,
+       .report_connect = rtl8xxxu_gen2_report_connect,
        .writeN_block_size = 1024,
        .mbox_ext_reg = REG_HMBOX_EXT0_8723B,
        .mbox_ext_width = 4,
        .tx_desc_size = sizeof(struct rtl8xxxu_txdesc40),
+       .rx_desc_size = sizeof(struct rtl8xxxu_rxdesc24),
        .has_s0s1 = 1,
        .adda_1t_init = 0x01c00014,
        .adda_1t_path_on = 0x01c00014,
        .adda_2t_path_on_a = 0x01c00014,
        .adda_2t_path_on_b = 0x01c00014,
+       .trxff_boundary = 0x3f7f,
+       .pbp_rx = PBP_PAGE_SIZE_256,
+       .pbp_tx = PBP_PAGE_SIZE_256,
+       .mactable = rtl8723b_mac_init_table,
 };
 
 #ifdef CONFIG_RTL8XXXU_UNTESTED
@@ -8492,22 +9958,30 @@ static struct rtl8xxxu_fileops rtl8192cu_fops = {
        .power_off = rtl8xxxu_power_off,
        .reset_8051 = rtl8xxxu_reset_8051,
        .llt_init = rtl8xxxu_init_llt_table,
-       .phy_iq_calibrate = rtl8723au_phy_iq_calibrate,
-       .config_channel = rtl8723au_config_channel,
-       .parse_rx_desc = rtl8723au_parse_rx_desc,
-       .enable_rf = rtl8723a_enable_rf,
-       .disable_rf = rtl8723a_disable_rf,
-       .set_tx_power = rtl8723a_set_tx_power,
-       .update_rate_mask = rtl8723au_update_rate_mask,
-       .report_connect = rtl8723au_report_connect,
+       .init_phy_bb = rtl8xxxu_gen1_init_phy_bb,
+       .init_phy_rf = rtl8192cu_init_phy_rf,
+       .phy_iq_calibrate = rtl8xxxu_gen1_phy_iq_calibrate,
+       .config_channel = rtl8xxxu_gen1_config_channel,
+       .parse_rx_desc = rtl8xxxu_parse_rxdesc16,
+       .enable_rf = rtl8xxxu_gen1_enable_rf,
+       .disable_rf = rtl8xxxu_gen1_disable_rf,
+       .usb_quirks = rtl8xxxu_gen1_usb_quirks,
+       .set_tx_power = rtl8xxxu_gen1_set_tx_power,
+       .update_rate_mask = rtl8xxxu_update_rate_mask,
+       .report_connect = rtl8xxxu_gen1_report_connect,
        .writeN_block_size = 128,
        .mbox_ext_reg = REG_HMBOX_EXT_0,
        .mbox_ext_width = 2,
        .tx_desc_size = sizeof(struct rtl8xxxu_txdesc32),
+       .rx_desc_size = sizeof(struct rtl8xxxu_rxdesc16),
        .adda_1t_init = 0x0b1b25a0,
        .adda_1t_path_on = 0x0bdb25a0,
        .adda_2t_path_on_a = 0x04db25a4,
        .adda_2t_path_on_b = 0x0b1b25a4,
+       .trxff_boundary = 0x27ff,
+       .pbp_rx = PBP_PAGE_SIZE_128,
+       .pbp_tx = PBP_PAGE_SIZE_128,
+       .mactable = rtl8xxxu_gen1_mac_init_table,
 };
 
 #endif
@@ -8519,23 +9993,33 @@ static struct rtl8xxxu_fileops rtl8192eu_fops = {
        .power_off = rtl8xxxu_power_off,
        .reset_8051 = rtl8xxxu_reset_8051,
        .llt_init = rtl8xxxu_auto_llt_table,
-       .phy_iq_calibrate = rtl8723bu_phy_iq_calibrate,
-       .config_channel = rtl8723bu_config_channel,
-       .parse_rx_desc = rtl8723bu_parse_rx_desc,
-       .enable_rf = rtl8723b_enable_rf,
-       .disable_rf = rtl8723b_disable_rf,
-       .set_tx_power = rtl8723b_set_tx_power,
-       .update_rate_mask = rtl8723bu_update_rate_mask,
-       .report_connect = rtl8723bu_report_connect,
+       .init_phy_bb = rtl8192eu_init_phy_bb,
+       .init_phy_rf = rtl8192eu_init_phy_rf,
+       .phy_iq_calibrate = rtl8192eu_phy_iq_calibrate,
+       .config_channel = rtl8xxxu_gen2_config_channel,
+       .parse_rx_desc = rtl8xxxu_parse_rxdesc24,
+       .enable_rf = rtl8192e_enable_rf,
+       .disable_rf = rtl8xxxu_gen2_disable_rf,
+       .usb_quirks = rtl8xxxu_gen2_usb_quirks,
+       .set_tx_power = rtl8192e_set_tx_power,
+       .update_rate_mask = rtl8xxxu_gen2_update_rate_mask,
+       .report_connect = rtl8xxxu_gen2_report_connect,
        .writeN_block_size = 128,
        .mbox_ext_reg = REG_HMBOX_EXT0_8723B,
        .mbox_ext_width = 4,
        .tx_desc_size = sizeof(struct rtl8xxxu_txdesc40),
-       .has_s0s1 = 1,
+       .rx_desc_size = sizeof(struct rtl8xxxu_rxdesc24),
+       .has_s0s1 = 0,
        .adda_1t_init = 0x0fc01616,
        .adda_1t_path_on = 0x0fc01616,
        .adda_2t_path_on_a = 0x0fc01616,
        .adda_2t_path_on_b = 0x0fc01616,
+       .trxff_boundary = 0x3cff,
+       .mactable = rtl8192e_mac_init_table,
+       .total_page_num = TX_TOTAL_PAGE_NUM_8192E,
+       .page_num_hi = TX_PAGE_NUM_HI_PQ_8192E,
+       .page_num_lo = TX_PAGE_NUM_LO_PQ_8192E,
+       .page_num_norm = TX_PAGE_NUM_NORM_PQ_8192E,
 };
 
 static struct usb_device_id dev_table[] = {
@@ -8560,6 +10044,9 @@ static struct usb_device_id dev_table[] = {
 /* Tested by Larry Finger */
 {USB_DEVICE_AND_INTERFACE_INFO(0x7392, 0x7811, 0xff, 0xff, 0xff),
        .driver_info = (unsigned long)&rtl8192cu_fops},
+/* Tested by Andrea Merello */
+{USB_DEVICE_AND_INTERFACE_INFO(0x050d, 0x1004, 0xff, 0xff, 0xff),
+       .driver_info = (unsigned long)&rtl8192cu_fops},
 /* Currently untested 8188 series devices */
 {USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x8191, 0xff, 0xff, 0xff),
        .driver_info = (unsigned long)&rtl8192cu_fops},
@@ -8644,8 +10131,6 @@ static struct usb_device_id dev_table[] = {
 /* Currently untested 8192 series devices */
 {USB_DEVICE_AND_INTERFACE_INFO(0x04bb, 0x0950, 0xff, 0xff, 0xff),
        .driver_info = (unsigned long)&rtl8192cu_fops},
-{USB_DEVICE_AND_INTERFACE_INFO(0x050d, 0x1004, 0xff, 0xff, 0xff),
-       .driver_info = (unsigned long)&rtl8192cu_fops},
 {USB_DEVICE_AND_INTERFACE_INFO(0x050d, 0x2102, 0xff, 0xff, 0xff),
        .driver_info = (unsigned long)&rtl8192cu_fops},
 {USB_DEVICE_AND_INTERFACE_INFO(0x050d, 0x2103, 0xff, 0xff, 0xff),
@@ -8701,6 +10186,7 @@ static struct usb_driver rtl8xxxu_driver = {
        .probe = rtl8xxxu_probe,
        .disconnect = rtl8xxxu_disconnect,
        .id_table = dev_table,
+       .no_dynamic_id = 1,
        .disable_hub_initiated_lpm = 1,
 };
 
index 455e112..3e2643c 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2014 - 2015 Jes Sorensen <Jes.Sorensen@redhat.com>
+ * Copyright (c) 2014 - 2016 Jes Sorensen <Jes.Sorensen@redhat.com>
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of version 2 of the GNU General Public License as
 #define REALTEK_USB_CMD_IDX            0x00
 
 #define TX_TOTAL_PAGE_NUM              0xf8
+#define TX_TOTAL_PAGE_NUM_8192E                0xf3
 /* (HPQ + LPQ + NPQ + PUBQ) = TX_TOTAL_PAGE_NUM */
 #define TX_PAGE_NUM_PUBQ               0xe7
 #define TX_PAGE_NUM_HI_PQ              0x0c
 #define TX_PAGE_NUM_LO_PQ              0x02
 #define TX_PAGE_NUM_NORM_PQ            0x02
 
+#define TX_PAGE_NUM_PUBQ_8192E         0xe7
+#define TX_PAGE_NUM_HI_PQ_8192E                0x08
+#define TX_PAGE_NUM_LO_PQ_8192E                0x0c
+#define TX_PAGE_NUM_NORM_PQ_8192E      0x00
+
 #define RTL_FW_PAGE_SIZE               4096
 #define RTL8XXXU_FIRMWARE_POLL_MAX     1000
 
@@ -95,7 +101,7 @@ enum rtl8xxxu_rx_type {
        RX_TYPE_ERROR = -1
 };
 
-struct rtl8xxxu_rx_desc {
+struct rtl8xxxu_rxdesc16 {
 #ifdef __LITTLE_ENDIAN
        u32 pktlen:14;
        u32 crc32:1;
@@ -231,7 +237,7 @@ struct rtl8xxxu_rx_desc {
 #endif
 };
 
-struct rtl8723bu_rx_desc {
+struct rtl8xxxu_rxdesc24 {
 #ifdef __LITTLE_ENDIAN
        u32 pktlen:14;
        u32 crc32:1;
@@ -623,6 +629,31 @@ struct rtl8xxxu_firmware_header {
        u8      data[0];
 };
 
+/*
+ * Base power index offset tables required by the 8723au/8192cu/8188ru.
+ */
+struct rtl8xxxu_power_base {
+       u32 reg_0e00;
+       u32 reg_0e04;
+       u32 reg_0e08;
+       u32 reg_086c;
+
+       u32 reg_0e10;
+       u32 reg_0e14;
+       u32 reg_0e18;
+       u32 reg_0e1c;
+
+       u32 reg_0830;
+       u32 reg_0834;
+       u32 reg_0838;
+       u32 reg_086c_2;
+
+       u32 reg_083c;
+       u32 reg_0848;
+       u32 reg_084c;
+       u32 reg_0868;
+};
+
 /*
  * The 8723au has 3 channel groups: 1-3, 4-9, and 10-14
  */
@@ -787,55 +818,49 @@ struct rtl8192eu_efuse_tx_power {
        u8 cck_base[6];
        u8 ht40_base[5];
        struct rtl8723au_idx ht20_ofdm_1s_diff;
-       struct rtl8723au_idx ht40_ht20_2s_diff;
-       struct rtl8723au_idx ofdm_cck_2s_diff; /* not used */
-       struct rtl8723au_idx ht40_ht20_3s_diff;
-       struct rtl8723au_idx ofdm_cck_3s_diff; /* not used */
-       struct rtl8723au_idx ht40_ht20_4s_diff;
-       struct rtl8723au_idx ofdm_cck_4s_diff; /* not used */
+       struct rtl8723bu_pwr_idx pwr_diff[3];
+       u8 dummy5g[24]; /* max channel group (14) + power diff offset (10) */
 };
 
 struct rtl8192eu_efuse {
        __le16 rtl_id;
        u8 res0[0x0e];
        struct rtl8192eu_efuse_tx_power tx_power_index_A;       /* 0x10 */
-       struct rtl8192eu_efuse_tx_power tx_power_index_B;       /* 0x22 */
-       struct rtl8192eu_efuse_tx_power tx_power_index_C;       /* 0x34 */
-       struct rtl8192eu_efuse_tx_power tx_power_index_D;       /* 0x46 */
-       u8 res1[0x60];
+       struct rtl8192eu_efuse_tx_power tx_power_index_B;       /* 0x3a */
+       u8 res2[0x54];
        u8 channel_plan;                /* 0xb8 */
        u8 xtal_k;
        u8 thermal_meter;
        u8 iqk_lck;
        u8 pa_type;                     /* 0xbc */
        u8 lna_type_2g;                 /* 0xbd */
-       u8 res2[1];
+       u8 res3[1];
        u8 lna_type_5g;                 /* 0xbf */
-       u8 res13[1];
+       u8 res4[1];
        u8 rf_board_option;
        u8 rf_feature_option;
        u8 rf_bt_setting;
        u8 eeprom_version;
        u8 eeprom_customer_id;
-       u8 res3[3];
+       u8 res5[3];
        u8 rf_antenna_option;           /* 0xc9 */
-       u8 res4[6];
+       u8 res6[6];
        u8 vid;                         /* 0xd0 */
-       u8 res5[1];
+       u8 res7[1];
        u8 pid;                         /* 0xd2 */
-       u8 res6[1];
+       u8 res8[1];
        u8 usb_optional_function;
-       u8 res7[2];
+       u8 res9[2];
        u8 mac_addr[ETH_ALEN];          /* 0xd7 */
-       u8 res8[2];
+       u8 res10[2];
        u8 vendor_name[7];
-       u8 res9[2];
+       u8 res11[2];
        u8 device_name[0x0b];           /* 0xe8 */
-       u8 res10[2];
+       u8 res12[2];
        u8 serial[0x0b];                /* 0xf5 */
-       u8 res11[0x30];
+       u8 res13[0x30];
        u8 unknown[0x0d];               /* 0x130 */
-       u8 res12[0xc3];
+       u8 res14[0xc3];
 };
 
 struct rtl8xxxu_reg8val {
@@ -1201,6 +1226,7 @@ struct rtl8xxxu_priv {
        struct rtl8723au_idx ofdm_tx_power_diff[RTL8723B_TX_COUNT];
        struct rtl8723au_idx ht20_tx_power_diff[RTL8723B_TX_COUNT];
        struct rtl8723au_idx ht40_tx_power_diff[RTL8723B_TX_COUNT];
+       struct rtl8xxxu_power_base *power_base;
        u32 chip_cut:4;
        u32 rom_rev:4;
        u32 is_multi_func:1;
@@ -1228,7 +1254,6 @@ struct rtl8xxxu_priv {
        u8 rf_paths;
        u8 rx_paths;
        u8 tx_paths;
-       u32 rf_mode_ag[2];
        u32 rege94;
        u32 rege9c;
        u32 regeb4;
@@ -1262,6 +1287,7 @@ struct rtl8xxxu_priv {
        u32 bb_recovery_backup[RTL8XXXU_BB_REGS];
        enum rtl8xxxu_rtl_chip rtl_chip;
        u8 pi_enabled:1;
+       u8 no_pape:1;
        u8 int_buf[USB_INTR_CONTENT_LENGTH];
 };
 
@@ -1284,6 +1310,8 @@ struct rtl8xxxu_fileops {
        void (*power_off) (struct rtl8xxxu_priv *priv);
        void (*reset_8051) (struct rtl8xxxu_priv *priv);
        int (*llt_init) (struct rtl8xxxu_priv *priv, u8 last_tx_page);
+       void (*init_phy_bb) (struct rtl8xxxu_priv *priv);
+       int (*init_phy_rf) (struct rtl8xxxu_priv *priv);
        void (*phy_init_antenna_selection) (struct rtl8xxxu_priv *priv);
        void (*phy_iq_calibrate) (struct rtl8xxxu_priv *priv);
        void (*config_channel) (struct ieee80211_hw *hw);
@@ -1293,6 +1321,7 @@ struct rtl8xxxu_fileops {
        void (*init_statistics) (struct rtl8xxxu_priv *priv);
        void (*enable_rf) (struct rtl8xxxu_priv *priv);
        void (*disable_rf) (struct rtl8xxxu_priv *priv);
+       void (*usb_quirks) (struct rtl8xxxu_priv *priv);
        void (*set_tx_power) (struct rtl8xxxu_priv *priv, int channel,
                              bool ht40);
        void (*update_rate_mask) (struct rtl8xxxu_priv *priv,
@@ -1303,9 +1332,18 @@ struct rtl8xxxu_fileops {
        u16 mbox_ext_reg;
        char mbox_ext_width;
        char tx_desc_size;
+       char rx_desc_size;
        char has_s0s1;
        u32 adda_1t_init;
        u32 adda_1t_path_on;
        u32 adda_2t_path_on_a;
        u32 adda_2t_path_on_b;
+       u16 trxff_boundary;
+       u8 pbp_rx;
+       u8 pbp_tx;
+       struct rtl8xxxu_reg8val *mactable;
+       u8 total_page_num;
+       u8 page_num_hi;
+       u8 page_num_lo;
+       u8 page_num_norm;
 };
index ade42fe..b0e0c64 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2014 - 2015 Jes Sorensen <Jes.Sorensen@redhat.com>
+ * Copyright (c) 2014 - 2016 Jes Sorensen <Jes.Sorensen@redhat.com>
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of version 2 of the GNU General Public License as
 #define  AFE_XTAL_GATE_DIG             BIT(17)
 #define  AFE_XTAL_BT_GATE              BIT(20)
 
+/*
+ * 0x0028 is also known as REG_AFE_CTRL2 on 8723bu/8192eu
+ */
 #define REG_AFE_PLL_CTRL               0x0028
 #define  AFE_PLL_ENABLE                        BIT(0)
 #define  AFE_PLL_320_ENABLE            BIT(1)
                                                   control */
 #define  MULTI_GPS_FUNC_EN             BIT(22) /* GPS function enable */
 
+#define REG_AFE_CTRL4                  0x0078  /* 8192eu/8723bu */
 #define REG_LDO_SW_CTRL                        0x007c  /* 8192eu */
 
 #define REG_MCU_FW_DL                  0x0080
 #define REG_RQPN                       0x0200
 #define  RQPN_HI_PQ_SHIFT              0
 #define  RQPN_LO_PQ_SHIFT              8
-#define  RQPN_NORM_PQ_SHIFT            16
+#define  RQPN_PUB_PQ_SHIFT             16
 #define  RQPN_LOAD                     BIT(31)
 
 #define REG_FIFOPAGE                   0x0204
 #define REG_PKT_VO_VI_LIFE_TIME                0x04c0
 #define REG_PKT_BE_BK_LIFE_TIME                0x04c2
 #define REG_STBC_SETTING               0x04c4
+#define REG_QUEUE_CTRL                 0x04c6
 #define REG_HT_SINGLE_AMPDU_8723B      0x04c7
 #define REG_PROT_MODE_CTRL             0x04c8
 #define REG_MAX_AGGR_NUM               0x04ca
 #define  CCK0_SIDEBAND                 BIT(4)
 
 #define REG_CCK0_AFE_SETTING           0x0a04
+#define  CCK0_AFE_RX_MASK              0x0f000000
+#define  CCK0_AFE_RX_ANT_AB            BIT(24)
+#define  CCK0_AFE_RX_ANT_A             0
+#define  CCK0_AFE_RX_ANT_B             (BIT(24) | BIT(26))
 
 #define REG_CONFIG_ANT_A               0x0b68
 #define REG_CONFIG_ANT_B               0x0b6c
 #define  USB_HIMR_ROK                  BIT(0)  /*  Receive DMA OK Interrupt */
 
 #define REG_USB_SPECIAL_OPTION         0xfe55
+#define REG_USB_HRPWM                  0xfe58
 #define REG_USB_DMA_AGG_TO             0xfe5b
 #define REG_USB_AGG_TO                 0xfe5c
 #define REG_USB_AGG_TH                 0xfe5d
 #define RF6052_REG_T_METER_8723B       0x42
 #define RF6052_REG_UNKNOWN_43          0x43
 #define RF6052_REG_UNKNOWN_55          0x55
+#define RF6052_REG_UNKNOWN_56          0x56
 #define RF6052_REG_S0S1                        0xb0
 #define RF6052_REG_UNKNOWN_DF          0xdf
 #define RF6052_REG_UNKNOWN_ED          0xed
index ddf74d5..0c3b9ce 100644
@@ -959,7 +959,7 @@ static void _rtl8821ae_phy_store_txpower_by_rate_base(struct ieee80211_hw *hw)
 static void _phy_convert_txpower_dbm_to_relative_value(u32 *data, u8 start,
                                                u8 end, u8 base_val)
 {
-       char i = 0;
+       int i;
        u8 temp_value = 0;
        u32 temp_data = 0;
 
index 99de07d..13fd734 100644
@@ -1287,7 +1287,7 @@ static void wl3501_tx_timeout(struct net_device *dev)
                printk(KERN_ERR "%s: Error %d resetting card on Tx timeout!\n",
                       dev->name, rc);
        else {
-               dev->trans_start = jiffies; /* prevent tx timeout */
+               netif_trans_update(dev); /* prevent tx timeout */
                netif_wake_queue(dev);
        }
 }
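Here and in the s390 network drivers below, open-coded dev->trans_start updates become netif_trans_update(). From memory of the 4.7-era netdevice.h, the helper simply refreshes the watchdog timestamp of TX queue 0, so the substitution is behavior-preserving for these single-queue drivers:

        static inline void netif_trans_update(struct net_device *dev)
        {
                struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

                if (txq->trans_start != jiffies)
                        txq->trans_start = jiffies;
        }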
index 6f5c793..dea049b 100644
@@ -845,7 +845,7 @@ static void zd1201_tx_timeout(struct net_device *dev)
        usb_unlink_urb(zd->tx_urb);
        dev->stats.tx_errors++;
        /* Restart the timeout to quiet the watchdog: */
-       dev->trans_start = jiffies; /* prevent tx timeout */
+       netif_trans_update(dev); /* prevent tx timeout */
 }
 
 static int zd1201_set_mac_address(struct net_device *dev, void *p)
index 2c1e52e..e051e1b 100644
@@ -56,7 +56,7 @@ static void of_mdiobus_register_phy(struct mii_bus *mdio,
                phy = phy_device_create(mdio, addr, phy_id, 0, NULL);
        else
                phy = get_phy_device(mdio, addr, is_c45);
-       if (IS_ERR_OR_NULL(phy))
+       if (IS_ERR(phy))
                return;
 
        rc = irq_of_parse_and_map(child, 0);
@@ -209,6 +209,10 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
        bool scanphys = false;
        int addr, rc;
 
+       /* Do not continue if the node is disabled */
+       if (!of_device_is_available(np))
+               return -ENODEV;
+
        /* Mask out all PHYs from auto probing.  Instead the PHYs listed in
         * the device tree are populated after the bus has been registered */
        mdio->phy_mask = ~0;
index 32346b5..f700908 100644
@@ -737,8 +737,19 @@ static void cpu_pm_pmu_setup(struct arm_pmu *armpmu, unsigned long cmd)
                        break;
                case CPU_PM_EXIT:
                case CPU_PM_ENTER_FAILED:
-                        /* Restore and enable the counter */
-                       armpmu_start(event, PERF_EF_RELOAD);
+                       /*
+                        * Restore and enable the counter.
+                        *
+                        * armpmu_start() indirectly calls
+                        * perf_event_update_userpage(), which requires RCU
+                        * read locking to be functional; wrap the call in
+                        * RCU_NONIDLE() so the RCU subsystem knows this CPU
+                        * is not idle, from an RCU perspective, for the
+                        * duration of the armpmu_start() call.
+                        */
+                       RCU_NONIDLE(armpmu_start(event, PERF_EF_RELOAD));
                        break;
                default:
                        break;
index 77e2d02..793ecb6 100644
@@ -86,6 +86,9 @@ static int rockchip_dp_phy_probe(struct platform_device *pdev)
        if (!np)
                return -ENODEV;
 
+       if (!dev->parent || !dev->parent->of_node)
+               return -ENODEV;
+
        dp = devm_kzalloc(dev, sizeof(*dp), GFP_KERNEL);
        if (IS_ERR(dp))
                return -ENOMEM;
@@ -104,9 +107,9 @@ static int rockchip_dp_phy_probe(struct platform_device *pdev)
                return ret;
        }
 
-       dp->grf = syscon_regmap_lookup_by_phandle(np, "rockchip,grf");
+       dp->grf = syscon_node_to_regmap(dev->parent->of_node);
        if (IS_ERR(dp->grf)) {
-               dev_err(dev, "rk3288-dp needs rockchip,grf property\n");
+               dev_err(dev, "rk3288-dp needs the General Register Files syscon\n");
                return PTR_ERR(dp->grf);
        }
 
index 887b4c2..6ebcf3e 100644
@@ -176,7 +176,10 @@ static int rockchip_emmc_phy_probe(struct platform_device *pdev)
        struct regmap *grf;
        unsigned int reg_offset;
 
-       grf = syscon_regmap_lookup_by_phandle(dev->of_node, "rockchip,grf");
+       if (!dev->parent || !dev->parent->of_node)
+               return -ENODEV;
+
+       grf = syscon_node_to_regmap(dev->parent->of_node);
        if (IS_ERR(grf)) {
                dev_err(dev, "Missing rockchip,grf property\n");
                return PTR_ERR(grf);
index debe121..fc8cbf6 100644
@@ -2,6 +2,7 @@ config PINCTRL_IMX
        bool
        select PINMUX
        select PINCONF
+       select REGMAP
 
 config PINCTRL_IMX1_CORE
        bool
index 2bbe6f7..6ab8c3c 100644
@@ -1004,7 +1004,8 @@ static int mtk_gpio_set_debounce(struct gpio_chip *chip, unsigned offset,
        struct mtk_pinctrl *pctl = dev_get_drvdata(chip->parent);
        int eint_num, virq, eint_offset;
        unsigned int set_offset, bit, clr_bit, clr_offset, rst, i, unmask, dbnc;
-       static const unsigned int dbnc_arr[] = {0 , 1, 16, 32, 64, 128, 256};
+       static const unsigned int debounce_time[] = {500, 1000, 16000, 32000, 64000,
+                                               128000, 256000};
        const struct mtk_desc_pin *pin;
        struct irq_data *d;
 
@@ -1022,9 +1023,9 @@ static int mtk_gpio_set_debounce(struct gpio_chip *chip, unsigned offset,
        if (!mtk_eint_can_en_debounce(pctl, eint_num))
                return -ENOSYS;
 
-       dbnc = ARRAY_SIZE(dbnc_arr);
-       for (i = 0; i < ARRAY_SIZE(dbnc_arr); i++) {
-               if (debounce <= dbnc_arr[i]) {
+       dbnc = ARRAY_SIZE(debounce_time);
+       for (i = 0; i < ARRAY_SIZE(debounce_time); i++) {
+               if (debounce <= debounce_time[i]) {
                        dbnc = i;
                        break;
                }
index fb126d5..cf9bafa 100644
@@ -1280,9 +1280,9 @@ static int pcs_parse_bits_in_pinctrl_entry(struct pcs_device *pcs,
 
                /* Parse pins in each row from LSB */
                while (mask) {
-                       bit_pos = ffs(mask);
+                       bit_pos = __ffs(mask);
                        pin_num_from_lsb = bit_pos / pcs->bits_per_pin;
-                       mask_pos = ((pcs->fmask) << (bit_pos - 1));
+                       mask_pos = ((pcs->fmask) << bit_pos);
                        val_pos = val & mask_pos;
                        submask = mask & mask_pos;
 
@@ -1852,7 +1852,7 @@ static int pcs_probe(struct platform_device *pdev)
        ret = of_property_read_u32(np, "pinctrl-single,function-mask",
                                   &pcs->fmask);
        if (!ret) {
-               pcs->fshift = ffs(pcs->fmask) - 1;
+               pcs->fshift = __ffs(pcs->fmask);
                pcs->fmax = pcs->fmask >> pcs->fshift;
        } else {
                /* If mask property doesn't exist, function mux is invalid. */
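ffs() is 1-based (ffs(1) == 1) while __ffs() is 0-based. The old code compensated in the shift (bit_pos - 1) but not in the division, so any field whose lowest set bit was the top bit of its slot was attributed to the next pin. A host-side demonstration, assuming two bits per pin:

        #include <stdio.h>
        #include <strings.h>    /* POSIX ffs(): 1-based, returns 0 for no bits */

        int main(void)
        {
                unsigned int mask = 0x2;   /* only the upper bit of pin 0's field */
                int bits_per_pin = 2;
                int one_based = ffs(mask);         /* 2 */
                int zero_based = ffs(mask) - 1;    /* what __ffs() returns: 1 */

                printf("old: pin %d\n", one_based / bits_per_pin);   /* 1, wrong */
                printf("new: pin %d\n", zero_based / bits_per_pin);  /* 0, right */
                return 0;
        }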
index df1f1a7..01e12d2 100644
@@ -135,7 +135,7 @@ MODULE_LICENSE("GPL");
 /* Field definitions */
 #define HCI_ACCEL_MASK                 0x7fff
 #define HCI_HOTKEY_DISABLE             0x0b
-#define HCI_HOTKEY_ENABLE              0x01
+#define HCI_HOTKEY_ENABLE              0x09
 #define HCI_HOTKEY_SPECIAL_FUNCTIONS   0x10
 #define HCI_LCD_BRIGHTNESS_BITS                3
 #define HCI_LCD_BRIGHTNESS_SHIFT       (16-HCI_LCD_BRIGHTNESS_BITS)
index 5d4d918..96168b8 100644
@@ -2669,9 +2669,9 @@ static int __init mport_init(void)
 
        /* Create device class needed by udev */
        dev_class = class_create(THIS_MODULE, DRV_NAME);
-       if (!dev_class) {
+       if (IS_ERR(dev_class)) {
                rmcd_error("Unable to create " DRV_NAME " class");
-               return -EINVAL;
+               return PTR_ERR(dev_class);
        }
 
        ret = alloc_chrdev_region(&dev_number, 0, RIO_MAX_MPORTS, DRV_NAME);
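class_create() reports failure through the kernel's ERR_PTR convention and never returns NULL, so the old !dev_class test could not fire. A userspace re-creation of the convention, for illustration only:

        #include <stdio.h>

        #define MAX_ERRNO 4095

        static inline void *ERR_PTR(long error) { return (void *)error; }
        static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
        static inline int IS_ERR(const void *ptr)
        {
                return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
        }

        int main(void)
        {
                void *dev_class = ERR_PTR(-12);  /* as if class_create() hit -ENOMEM */

                printf("NULL test fires: %d\n", dev_class == NULL);   /* 0 */
                printf("IS_ERR fires:    %d\n", IS_ERR(dev_class));   /* 1 */
                printf("PTR_ERR:         %ld\n", PTR_ERR(dev_class)); /* -12 */
                return 0;
        }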
index 648cb86..ea607a4 100644
@@ -56,6 +56,7 @@ static int sclp_ctl_ioctl_sccb(void __user *user_area)
 {
        struct sclp_ctl_sccb ctl_sccb;
        struct sccb_header *sccb;
+       unsigned long copied;
        int rc;
 
        if (copy_from_user(&ctl_sccb, user_area, sizeof(ctl_sccb)))
@@ -65,14 +66,15 @@ static int sclp_ctl_ioctl_sccb(void __user *user_area)
        sccb = (void *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
        if (!sccb)
                return -ENOMEM;
-       if (copy_from_user(sccb, u64_to_uptr(ctl_sccb.sccb), sizeof(*sccb))) {
+       copied = PAGE_SIZE -
+               copy_from_user(sccb, u64_to_uptr(ctl_sccb.sccb), PAGE_SIZE);
+       if (offsetof(struct sccb_header, length) +
+           sizeof(sccb->length) > copied || sccb->length > copied) {
                rc = -EFAULT;
                goto out_free;
        }
-       if (sccb->length > PAGE_SIZE || sccb->length < 8)
-               return -EINVAL;
-       if (copy_from_user(sccb, u64_to_uptr(ctl_sccb.sccb), sccb->length)) {
-               rc = -EFAULT;
+       if (sccb->length < 8) {
+               rc = -EINVAL;
                goto out_free;
        }
        rc = sclp_sync_request(ctl_sccb.cmdw, sccb);
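The rewrite relies on copy_from_user() returning the number of bytes it could not copy, so `copied` is the prefix of the page that is actually valid; both the length field itself and the claimed SCCB length must lie inside that prefix before being trusted. This also appears to close a double-fetch window in the old sequence, sketched below in pseudo-code (return-value handling elided):

        copy_from_user(sccb, uptr, sizeof(*sccb));   /* fetch #1: header only */
        if (sccb->length > PAGE_SIZE || sccb->length < 8)
                return -EINVAL;
        copy_from_user(sccb, uptr, sccb->length);    /* fetch #2: full SCCB   */
        /* A racing thread could enlarge sccb->length between the two
         * fetches; the single PAGE_SIZE copy plus the bounds check above
         * removes that window.
         */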
index c3e2252..ad17fc5 100644
@@ -642,7 +642,7 @@ static void ctcmpc_send_sweep_req(struct channel *rch)
 
        kfree(header);
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        skb_queue_tail(&ch->sweep_queue, sweep_skb);
 
        fsm_addtimer(&ch->sweep_timer, 100, CTC_EVENT_RSWEEP_TIMER, ch);
@@ -911,7 +911,7 @@ static int ctcm_tx(struct sk_buff *skb, struct net_device *dev)
        if (ctcm_test_and_set_busy(dev))
                return NETDEV_TX_BUSY;
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        if (ctcm_transmit_skb(priv->channel[CTCM_WRITE], skb) != 0)
                return NETDEV_TX_BUSY;
        return NETDEV_TX_OK;
@@ -994,7 +994,7 @@ static int ctcmpc_tx(struct sk_buff *skb, struct net_device *dev)
                                        goto done;
        }
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        if (ctcmpc_transmit_skb(priv->channel[CTCM_WRITE], skb) != 0) {
                CTCM_DBF_TEXT_(MPC_ERROR, CTC_DBF_ERROR,
                        "%s(%s): device error - dropped",
index edf16bf..c103fc7 100644 (file)
@@ -671,7 +671,7 @@ static void ctcmpc_send_sweep_resp(struct channel *rch)
 
        kfree(header);
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        skb_queue_tail(&ch->sweep_queue, sweep_skb);
 
        fsm_addtimer(&ch->sweep_timer, 100, CTC_EVENT_RSWEEP_TIMER, ch);
index 0ba3a2f..b0e8ffd 100644 (file)
@@ -1407,7 +1407,7 @@ static int netiucv_tx(struct sk_buff *skb, struct net_device *dev)
                IUCV_DBF_TEXT(data, 2, "EBUSY from netiucv_tx\n");
                return NETDEV_TX_BUSY;
        }
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        rc = netiucv_transmit_skb(privptr->conn, skb);
        netiucv_clear_busy(dev);
        return rc ? NETDEV_TX_BUSY : NETDEV_TX_OK;
index 7871537..b7b7477 100644 (file)
@@ -3481,7 +3481,7 @@ static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int index,
                }
        }
 
-       queue->card->dev->trans_start = jiffies;
+       netif_trans_update(queue->card->dev);
        if (queue->card->options.performance_stats) {
                queue->card->perf_stats.outbound_do_qdio_cnt++;
                queue->card->perf_stats.outbound_do_qdio_start_time =
index f3bb7af..ead83a2 100644 (file)
@@ -688,6 +688,7 @@ static struct rt6_info *find_route_ipv6(const struct in6_addr *saddr,
 {
        struct flowi6 fl;
 
+       memset(&fl, 0, sizeof(fl));
        if (saddr)
                memcpy(&fl.saddr, saddr, sizeof(struct in6_addr));
        if (daddr)
index 57e781c..837effe 100644 (file)
@@ -491,13 +491,14 @@ static int scpsys_probe(struct platform_device *pdev)
                genpd->dev_ops.active_wakeup = scpsys_active_wakeup;
 
                /*
-                * With CONFIG_PM disabled turn on all domains to make the
-                * hardware usable.
+                * Initially turn on all domains to make the domains usable
+                * with !CONFIG_PM and to get the hardware in sync with the
+                * software.  The unused domains will be switched off during
+                * late_init time.
                 */
-               if (!IS_ENABLED(CONFIG_PM))
-                       genpd->power_on(genpd);
+               genpd->power_on(genpd);
 
-               pm_genpd_init(genpd, NULL, true);
+               pm_genpd_init(genpd, NULL, false);
        }
 
        /*
index b793c04..be72a8e 100644 (file)
@@ -172,9 +172,11 @@ static int vpfe_prepare_pipeline(struct vpfe_video_device *video)
 static int vpfe_update_pipe_state(struct vpfe_video_device *video)
 {
        struct vpfe_pipeline *pipe = &video->pipe;
+       int ret;
 
-       if (vpfe_prepare_pipeline(video))
-               return vpfe_prepare_pipeline(video);
+       ret = vpfe_prepare_pipeline(video);
+       if (ret)
+               return ret;
 
        /*
         * Find out if there is any input video
@@ -182,9 +184,10 @@ static int vpfe_update_pipe_state(struct vpfe_video_device *video)
         */
        if (pipe->input_num == 0) {
                pipe->state = VPFE_PIPELINE_STREAM_CONTINUOUS;
-               if (vpfe_update_current_ext_subdev(video)) {
+               ret = vpfe_update_current_ext_subdev(video);
+               if (ret) {
                        pr_err("Invalid external subdev\n");
-                       return vpfe_update_current_ext_subdev(video);
+                       return ret;
                }
        } else {
                pipe->state = VPFE_PIPELINE_STREAM_SINGLESHOT;
@@ -667,6 +670,7 @@ static int vpfe_enum_fmt(struct file *file, void  *priv,
        struct v4l2_subdev *subdev;
        struct v4l2_format format;
        struct media_pad *remote;
+       int ret;
 
        v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_enum_fmt\n");
 
@@ -695,10 +699,11 @@ static int vpfe_enum_fmt(struct file *file, void  *priv,
        sd_fmt.pad = remote->index;
        sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
        /* get output format of remote subdev */
-       if (v4l2_subdev_call(subdev, pad, get_fmt, NULL, &sd_fmt)) {
+       ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &sd_fmt);
+       if (ret) {
                v4l2_err(&vpfe_dev->v4l2_dev,
                         "invalid remote subdev for video node\n");
-               return v4l2_subdev_call(subdev, pad, get_fmt, NULL, &sd_fmt);
+               return ret;
        }
        /* convert to pix format */
        mbus.code = sd_fmt.format.code;
@@ -725,6 +730,7 @@ static int vpfe_s_fmt(struct file *file, void *priv,
        struct vpfe_video_device *video = video_drvdata(file);
        struct vpfe_device *vpfe_dev = video->vpfe_dev;
        struct v4l2_format format;
+       int ret;
 
        v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_fmt\n");
        /* If streaming is started, return error */
@@ -733,8 +739,9 @@ static int vpfe_s_fmt(struct file *file, void *priv,
                return -EBUSY;
        }
        /* get adjacent subdev's output pad format */
-       if (__vpfe_video_get_format(video, &format))
-               return __vpfe_video_get_format(video, &format);
+       ret = __vpfe_video_get_format(video, &format);
+       if (ret)
+               return ret;
        *fmt = format;
        video->fmt = *fmt;
        return 0;
@@ -757,11 +764,13 @@ static int vpfe_try_fmt(struct file *file, void *priv,
        struct vpfe_video_device *video = video_drvdata(file);
        struct vpfe_device *vpfe_dev = video->vpfe_dev;
        struct v4l2_format format;
+       int ret;
 
        v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_try_fmt\n");
        /* get adjacent subdev's output pad format */
-       if (__vpfe_video_get_format(video, &format))
-               return __vpfe_video_get_format(video, &format);
+       ret = __vpfe_video_get_format(video, &format);
+       if (ret)
+               return ret;
 
        *fmt = format;
        return 0;
@@ -838,8 +847,9 @@ static int vpfe_s_input(struct file *file, void *priv, unsigned int index)
 
        v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_input\n");
 
-       if (mutex_lock_interruptible(&video->lock))
-               return mutex_lock_interruptible(&video->lock);
+       ret = mutex_lock_interruptible(&video->lock);
+       if (ret)
+               return ret;
        /*
         * If streaming is started return device busy
         * error
@@ -940,8 +950,9 @@ static int vpfe_s_std(struct file *file, void *priv, v4l2_std_id std_id)
        v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_std\n");
 
        /* Call decoder driver function to set the standard */
-       if (mutex_lock_interruptible(&video->lock))
-               return mutex_lock_interruptible(&video->lock);
+       ret = mutex_lock_interruptible(&video->lock);
+       if (ret)
+               return ret;
        sdinfo = video->current_ext_subdev;
        /* If streaming is started, return device busy error */
        if (video->started) {
@@ -1327,8 +1338,9 @@ static int vpfe_reqbufs(struct file *file, void *priv,
                return -EINVAL;
        }
 
-       if (mutex_lock_interruptible(&video->lock))
-               return mutex_lock_interruptible(&video->lock);
+       ret = mutex_lock_interruptible(&video->lock);
+       if (ret)
+               return ret;
 
        if (video->io_usrs != 0) {
                v4l2_err(&vpfe_dev->v4l2_dev, "Only one IO user allowed\n");
@@ -1354,10 +1366,11 @@ static int vpfe_reqbufs(struct file *file, void *priv,
        q->buf_struct_size = sizeof(struct vpfe_cap_buffer);
        q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
 
-       if (vb2_queue_init(q)) {
+       ret = vb2_queue_init(q);
+       if (ret) {
                v4l2_err(&vpfe_dev->v4l2_dev, "vb2_queue_init() failed\n");
                vb2_dma_contig_cleanup_ctx(vpfe_dev->pdev);
-               return vb2_queue_init(q);
+               return ret;
        }
 
        fh->io_allowed = 1;
@@ -1533,8 +1546,9 @@ static int vpfe_streamoff(struct file *file, void *priv,
                return -EINVAL;
        }
 
-       if (mutex_lock_interruptible(&video->lock))
-               return mutex_lock_interruptible(&video->lock);
+       ret = mutex_lock_interruptible(&video->lock);
+       if (ret)
+               return ret;
 
        vpfe_stop_capture(video);
        ret = vb2_streamoff(&video->buffer_queue, buf_type);
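
Every hunk in this file removes the same anti-pattern: a side-effecting call evaluated once in the condition and again for the return value. With mutex_lock_interruptible() the second call can even block or succeed, silently losing the original error. A minimal sketch of the bug and the fix, with illustrative names:

#include <stdio.h>

static int calls;

static int do_op(void)
{
	return ++calls == 1 ? -1 : 0;	/* fails the first time only */
}

static int bad(void)
{
	if (do_op())
		return do_op();	/* re-runs the operation; returns 0, error lost */
	return 0;
}

static int good(void)
{
	int ret = do_op();

	if (ret)
		return ret;	/* the observed result is what gets reported */
	return 0;
}

int main(void)
{
	calls = 0;
	printf("bad()  = %d\n", bad());		/* 0: the failure vanished */
	calls = 0;
	printf("good() = %d\n", good());	/* -1 */
	return 0;
}
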
index 05de0da..4c6f1d7 100644 (file)
@@ -3,4 +3,4 @@ July, 2015
 - Remove unneeded file entries in sysfs
 - Remove software processing of IB protocol and place in library for use
   by qib, ipath (if still present), hfi1, and eventually soft-roce
-
+- Replace incorrect uAPI
index 8396dc5..c1c5bf8 100644 (file)
@@ -49,6 +49,8 @@
 #include <linux/vmalloc.h>
 #include <linux/io.h>
 
+#include <rdma/ib.h>
+
 #include "hfi.h"
 #include "pio.h"
 #include "device.h"
@@ -190,6 +192,10 @@ static ssize_t hfi1_file_write(struct file *fp, const char __user *data,
        int uctxt_required = 1;
        int must_be_root = 0;
 
+       /* FIXME: This interface cannot continue out of staging */
+       if (WARN_ON_ONCE(!ib_safe_file_access(fp)))
+               return -EACCES;
+
        if (count < sizeof(cmd)) {
                ret = -EINVAL;
                goto bail;
@@ -791,15 +797,16 @@ static int hfi1_file_close(struct inode *inode, struct file *fp)
        spin_unlock_irqrestore(&dd->uctxt_lock, flags);
 
        dd->rcd[uctxt->ctxt] = NULL;
+
+       hfi1_user_exp_rcv_free(fdata);
+       hfi1_clear_ctxt_pkey(dd, uctxt->ctxt);
+
        uctxt->rcvwait_to = 0;
        uctxt->piowait_to = 0;
        uctxt->rcvnowait = 0;
        uctxt->pionowait = 0;
        uctxt->event_flags = 0;
 
-       hfi1_user_exp_rcv_free(fdata);
-       hfi1_clear_ctxt_pkey(dd, uctxt->ctxt);
-
        hfi1_stats.sps_ctxts--;
        if (++dd->freectxts == dd->num_user_contexts)
                aspm_enable_all(dd);
@@ -1127,27 +1134,13 @@ bail:
 
 static int user_init(struct file *fp)
 {
-       int ret;
        unsigned int rcvctrl_ops = 0;
        struct hfi1_filedata *fd = fp->private_data;
        struct hfi1_ctxtdata *uctxt = fd->uctxt;
 
        /* make sure that the context has already been setup */
-       if (!test_bit(HFI1_CTXT_SETUP_DONE, &uctxt->event_flags)) {
-               ret = -EFAULT;
-               goto done;
-       }
-
-       /*
-        * Subctxts don't need to initialize anything since master
-        * has done it.
-        */
-       if (fd->subctxt) {
-               ret = wait_event_interruptible(uctxt->wait, !test_bit(
-                                              HFI1_CTXT_MASTER_UNINIT,
-                                              &uctxt->event_flags));
-               goto expected;
-       }
+       if (!test_bit(HFI1_CTXT_SETUP_DONE, &uctxt->event_flags))
+               return -EFAULT;
 
        /* initialize poll variables... */
        uctxt->urgent = 0;
@@ -1202,19 +1195,7 @@ static int user_init(struct file *fp)
                wake_up(&uctxt->wait);
        }
 
-expected:
-       /*
-        * Expected receive has to be setup for all processes (including
-        * shared contexts). However, it has to be done after the master
-        * context has been fully configured as it depends on the
-        * eager/expected split of the RcvArray entries.
-        * Setting it up here ensures that the subcontexts will be waiting
-        * (due to the above wait_event_interruptible() until the master
-        * is setup.
-        */
-       ret = hfi1_user_exp_rcv_init(fp);
-done:
-       return ret;
+       return 0;
 }
 
 static int get_ctxt_info(struct file *fp, void __user *ubase, __u32 len)
@@ -1261,7 +1242,7 @@ static int setup_ctxt(struct file *fp)
        int ret = 0;
 
        /*
-        * Context should be set up only once (including allocation and
+        * Context should be set up only once, including allocation and
         * programming of eager buffers. This is done if context sharing
         * is not requested or by the master process.
         */
@@ -1282,8 +1263,27 @@ static int setup_ctxt(struct file *fp)
                        if (ret)
                                goto done;
                }
+       } else {
+               ret = wait_event_interruptible(uctxt->wait, !test_bit(
+                                              HFI1_CTXT_MASTER_UNINIT,
+                                              &uctxt->event_flags));
+               if (ret)
+                       goto done;
        }
+
        ret = hfi1_user_sdma_alloc_queues(uctxt, fp);
+       if (ret)
+               goto done;
+       /*
+        * Expected receive has to be setup for all processes (including
+        * shared contexts). However, it has to be done after the master
+        * context has been fully configured as it depends on the
+        * eager/expected split of the RcvArray entries.
+        * Setting it up here ensures that the subcontexts will be waiting
+        * (due to the above wait_event_interruptible()) until the master
+        * is set up.
+        */
+       ret = hfi1_user_exp_rcv_init(fp);
        if (ret)
                goto done;
 
@@ -1565,29 +1565,8 @@ static loff_t ui_lseek(struct file *filp, loff_t offset, int whence)
 {
        struct hfi1_devdata *dd = filp->private_data;
 
-       switch (whence) {
-       case SEEK_SET:
-               break;
-       case SEEK_CUR:
-               offset += filp->f_pos;
-               break;
-       case SEEK_END:
-               offset = ((dd->kregend - dd->kregbase) + DC8051_DATA_MEM_SIZE) -
-                       offset;
-               break;
-       default:
-               return -EINVAL;
-       }
-
-       if (offset < 0)
-               return -EINVAL;
-
-       if (offset >= (dd->kregend - dd->kregbase) + DC8051_DATA_MEM_SIZE)
-               return -EINVAL;
-
-       filp->f_pos = offset;
-
-       return filp->f_pos;
+       return fixed_size_llseek(filp, offset, whence,
+               (dd->kregend - dd->kregbase) + DC8051_DATA_MEM_SIZE);
 }
 
 /* NOTE: assumes unsigned long is 8 bytes */
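
The open-coded lseek handler is replaced by fixed_size_llseek(), the stock helper for fixed-size device files. A userspace approximation of its behavior under standard SEEK_SET/SEEK_CUR/SEEK_END semantics; this is a sketch, not the kernel definition (the removed code's nonstandard size-minus-offset SEEK_END is intentionally not reproduced):

#include <stdio.h>

enum { SK_SET, SK_CUR, SK_END };

static long fixed_size_seek(long pos, long offset, int whence, long size)
{
	switch (whence) {
	case SK_SET:
		break;
	case SK_CUR:
		offset += pos;
		break;
	case SK_END:
		offset += size;	/* standard lseek: offset is relative to EOF */
		break;
	default:
		return -1;	/* -EINVAL */
	}
	if (offset < 0 || offset > size)
		return -1;
	return offset;
}

int main(void)
{
	printf("%ld\n", fixed_size_seek(10, 5, SK_CUR, 100));	/* 15 */
	printf("%ld\n", fixed_size_seek(0, -4, SK_END, 100));	/* 96 */
	return 0;
}
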
index c7ad016..b3f0682 100644 (file)
@@ -71,6 +71,7 @@ static inline void mmu_notifier_range_start(struct mmu_notifier *,
                                            struct mm_struct *,
                                            unsigned long, unsigned long);
 static void mmu_notifier_mem_invalidate(struct mmu_notifier *,
+                                       struct mm_struct *,
                                        unsigned long, unsigned long);
 static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *,
                                           unsigned long, unsigned long);
@@ -137,7 +138,7 @@ void hfi1_mmu_rb_unregister(struct rb_root *root)
                        rbnode = rb_entry(node, struct mmu_rb_node, node);
                        rb_erase(node, root);
                        if (handler->ops->remove)
-                               handler->ops->remove(root, rbnode, false);
+                               handler->ops->remove(root, rbnode, NULL);
                }
        }
 
@@ -176,7 +177,7 @@ unlock:
        return ret;
 }
 
-/* Caller must host handler lock */
+/* Caller must hold handler lock */
 static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *handler,
                                           unsigned long addr,
                                           unsigned long len)
@@ -200,15 +201,21 @@ static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *handler,
        return node;
 }
 
+/* Caller must *not* hold handler lock. */
 static void __mmu_rb_remove(struct mmu_rb_handler *handler,
-                           struct mmu_rb_node *node, bool arg)
+                           struct mmu_rb_node *node, struct mm_struct *mm)
 {
+       unsigned long flags;
+
        /* Validity of handler and node pointers has been checked by caller. */
        hfi1_cdbg(MMU, "Removing node addr 0x%llx, len %u", node->addr,
                  node->len);
+       spin_lock_irqsave(&handler->lock, flags);
        __mmu_int_rb_remove(node, handler->root);
+       spin_unlock_irqrestore(&handler->lock, flags);
+
        if (handler->ops->remove)
-               handler->ops->remove(handler->root, node, arg);
+               handler->ops->remove(handler->root, node, mm);
 }
 
 struct mmu_rb_node *hfi1_mmu_rb_search(struct rb_root *root, unsigned long addr,
@@ -231,14 +238,11 @@ struct mmu_rb_node *hfi1_mmu_rb_search(struct rb_root *root, unsigned long addr,
 void hfi1_mmu_rb_remove(struct rb_root *root, struct mmu_rb_node *node)
 {
        struct mmu_rb_handler *handler = find_mmu_handler(root);
-       unsigned long flags;
 
        if (!handler || !node)
                return;
 
-       spin_lock_irqsave(&handler->lock, flags);
-       __mmu_rb_remove(handler, node, false);
-       spin_unlock_irqrestore(&handler->lock, flags);
+       __mmu_rb_remove(handler, node, NULL);
 }
 
 static struct mmu_rb_handler *find_mmu_handler(struct rb_root *root)
@@ -260,7 +264,7 @@ unlock:
 static inline void mmu_notifier_page(struct mmu_notifier *mn,
                                     struct mm_struct *mm, unsigned long addr)
 {
-       mmu_notifier_mem_invalidate(mn, addr, addr + PAGE_SIZE);
+       mmu_notifier_mem_invalidate(mn, mm, addr, addr + PAGE_SIZE);
 }
 
 static inline void mmu_notifier_range_start(struct mmu_notifier *mn,
@@ -268,25 +272,31 @@ static inline void mmu_notifier_range_start(struct mmu_notifier *mn,
                                            unsigned long start,
                                            unsigned long end)
 {
-       mmu_notifier_mem_invalidate(mn, start, end);
+       mmu_notifier_mem_invalidate(mn, mm, start, end);
 }
 
 static void mmu_notifier_mem_invalidate(struct mmu_notifier *mn,
+                                       struct mm_struct *mm,
                                        unsigned long start, unsigned long end)
 {
        struct mmu_rb_handler *handler =
                container_of(mn, struct mmu_rb_handler, mn);
        struct rb_root *root = handler->root;
-       struct mmu_rb_node *node;
+       struct mmu_rb_node *node, *ptr = NULL;
        unsigned long flags;
 
        spin_lock_irqsave(&handler->lock, flags);
-       for (node = __mmu_int_rb_iter_first(root, start, end - 1); node;
-            node = __mmu_int_rb_iter_next(node, start, end - 1)) {
+       for (node = __mmu_int_rb_iter_first(root, start, end - 1);
+            node; node = ptr) {
+               /* Guard against node removal. */
+               ptr = __mmu_int_rb_iter_next(node, start, end - 1);
                hfi1_cdbg(MMU, "Invalidating node addr 0x%llx, len %u",
                          node->addr, node->len);
-               if (handler->ops->invalidate(root, node))
-                       __mmu_rb_remove(handler, node, true);
+               if (handler->ops->invalidate(root, node)) {
+                       spin_unlock_irqrestore(&handler->lock, flags);
+                       __mmu_rb_remove(handler, node, mm);
+                       spin_lock_irqsave(&handler->lock, flags);
+               }
        }
        spin_unlock_irqrestore(&handler->lock, flags);
 }
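
The loop above now caches the next node before possibly dropping the lock, because __mmu_rb_remove() takes the handler lock itself. A simplified pthread sketch of that iterate, drop, reacquire pattern; it assumes this loop is the only remover running, which is what makes caching the next pointer safe:

#include <pthread.h>
#include <stdlib.h>

struct node {
	struct node *next;
	int doomed;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *head;

/* Mirrors __mmu_rb_remove(): takes the lock itself, so callers must not
 * hold it. */
static void remove_node(struct node *victim)
{
	struct node **pp;

	pthread_mutex_lock(&lock);
	for (pp = &head; *pp; pp = &(*pp)->next) {
		if (*pp == victim) {
			*pp = victim->next;
			break;
		}
	}
	pthread_mutex_unlock(&lock);
	free(victim);
}

static void invalidate_all(void)
{
	struct node *n, *next;

	pthread_mutex_lock(&lock);
	for (n = head; n; n = next) {
		next = n->next;	/* guard against removal of n below */
		if (n->doomed) {
			pthread_mutex_unlock(&lock);
			remove_node(n);		/* needs the lock dropped */
			pthread_mutex_lock(&lock);
		}
	}
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	struct node *a = calloc(1, sizeof(*a));
	struct node *b = calloc(1, sizeof(*b));

	b->doomed = 1;
	b->next = a;
	head = b;
	invalidate_all();	/* frees b, keeps a */
	free(a);
	return 0;
}
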
index f8523fd..19a306e 100644 (file)
@@ -59,7 +59,8 @@ struct mmu_rb_node {
 struct mmu_rb_ops {
        bool (*filter)(struct mmu_rb_node *, unsigned long, unsigned long);
        int (*insert)(struct rb_root *, struct mmu_rb_node *);
-       void (*remove)(struct rb_root *, struct mmu_rb_node *, bool);
+       void (*remove)(struct rb_root *, struct mmu_rb_node *,
+                      struct mm_struct *);
        int (*invalidate)(struct rb_root *, struct mmu_rb_node *);
 };
 
index 29a5ad2..dc9119e 100644 (file)
@@ -519,10 +519,12 @@ static void iowait_sdma_drained(struct iowait *wait)
         * do the flush work until that QP's
         * sdma work has finished.
         */
+       spin_lock(&qp->s_lock);
        if (qp->s_flags & RVT_S_WAIT_DMA) {
                qp->s_flags &= ~RVT_S_WAIT_DMA;
                hfi1_schedule_send(qp);
        }
+       spin_unlock(&qp->s_lock);
 }
 
 /**
index 0861e09..8bd56d5 100644 (file)
@@ -87,7 +87,8 @@ static u32 find_phys_blocks(struct page **, unsigned, struct tid_pageset *);
 static int set_rcvarray_entry(struct file *, unsigned long, u32,
                              struct tid_group *, struct page **, unsigned);
 static int mmu_rb_insert(struct rb_root *, struct mmu_rb_node *);
-static void mmu_rb_remove(struct rb_root *, struct mmu_rb_node *, bool);
+static void mmu_rb_remove(struct rb_root *, struct mmu_rb_node *,
+                         struct mm_struct *);
 static int mmu_rb_invalidate(struct rb_root *, struct mmu_rb_node *);
 static int program_rcvarray(struct file *, unsigned long, struct tid_group *,
                            struct tid_pageset *, unsigned, u16, struct page **,
@@ -254,6 +255,8 @@ int hfi1_user_exp_rcv_free(struct hfi1_filedata *fd)
        struct hfi1_ctxtdata *uctxt = fd->uctxt;
        struct tid_group *grp, *gptr;
 
+       if (!test_bit(HFI1_CTXT_SETUP_DONE, &uctxt->event_flags))
+               return 0;
        /*
         * The notifier would have been removed when the process'es mm
         * was freed.
@@ -899,7 +902,7 @@ static int unprogram_rcvarray(struct file *fp, u32 tidinfo,
        if (!node || node->rcventry != (uctxt->expected_base + rcventry))
                return -EBADF;
        if (HFI1_CAP_IS_USET(TID_UNMAP))
-               mmu_rb_remove(&fd->tid_rb_root, &node->mmu, false);
+               mmu_rb_remove(&fd->tid_rb_root, &node->mmu, NULL);
        else
                hfi1_mmu_rb_remove(&fd->tid_rb_root, &node->mmu);
 
@@ -965,7 +968,7 @@ static void unlock_exp_tids(struct hfi1_ctxtdata *uctxt,
                                        continue;
                                if (HFI1_CAP_IS_USET(TID_UNMAP))
                                        mmu_rb_remove(&fd->tid_rb_root,
-                                                     &node->mmu, false);
+                                                     &node->mmu, NULL);
                                else
                                        hfi1_mmu_rb_remove(&fd->tid_rb_root,
                                                           &node->mmu);
@@ -1032,7 +1035,7 @@ static int mmu_rb_insert(struct rb_root *root, struct mmu_rb_node *node)
 }
 
 static void mmu_rb_remove(struct rb_root *root, struct mmu_rb_node *node,
-                         bool notifier)
+                         struct mm_struct *mm)
 {
        struct hfi1_filedata *fdata =
                container_of(root, struct hfi1_filedata, tid_rb_root);
index ab6b6a4..d53a659 100644 (file)
@@ -278,7 +278,8 @@ static inline void pq_update(struct hfi1_user_sdma_pkt_q *);
 static void user_sdma_free_request(struct user_sdma_request *, bool);
 static int pin_vector_pages(struct user_sdma_request *,
                            struct user_sdma_iovec *);
-static void unpin_vector_pages(struct mm_struct *, struct page **, unsigned);
+static void unpin_vector_pages(struct mm_struct *, struct page **, unsigned,
+                              unsigned);
 static int check_header_template(struct user_sdma_request *,
                                 struct hfi1_pkt_header *, u32, u32);
 static int set_txreq_header(struct user_sdma_request *,
@@ -299,7 +300,8 @@ static int defer_packet_queue(
 static void activate_packet_queue(struct iowait *, int);
 static bool sdma_rb_filter(struct mmu_rb_node *, unsigned long, unsigned long);
 static int sdma_rb_insert(struct rb_root *, struct mmu_rb_node *);
-static void sdma_rb_remove(struct rb_root *, struct mmu_rb_node *, bool);
+static void sdma_rb_remove(struct rb_root *, struct mmu_rb_node *,
+                          struct mm_struct *);
 static int sdma_rb_invalidate(struct rb_root *, struct mmu_rb_node *);
 
 static struct mmu_rb_ops sdma_rb_ops = {
@@ -1063,8 +1065,10 @@ static int pin_vector_pages(struct user_sdma_request *req,
        rb_node = hfi1_mmu_rb_search(&pq->sdma_rb_root,
                                     (unsigned long)iovec->iov.iov_base,
                                     iovec->iov.iov_len);
-       if (rb_node)
+       if (rb_node && !IS_ERR(rb_node))
                node = container_of(rb_node, struct sdma_mmu_node, rb);
+       else
+               rb_node = NULL;
 
        if (!node) {
                node = kzalloc(sizeof(*node), GFP_KERNEL);
@@ -1107,7 +1111,8 @@ retry:
                        goto bail;
                }
                if (pinned != npages) {
-                       unpin_vector_pages(current->mm, pages, pinned);
+                       unpin_vector_pages(current->mm, pages, node->npages,
+                                          pinned);
                        ret = -EFAULT;
                        goto bail;
                }
@@ -1147,9 +1152,9 @@ bail:
 }
 
 static void unpin_vector_pages(struct mm_struct *mm, struct page **pages,
-                              unsigned npages)
+                              unsigned start, unsigned npages)
 {
-       hfi1_release_user_pages(mm, pages, npages, 0);
+       hfi1_release_user_pages(mm, pages + start, npages, 0);
        kfree(pages);
 }
 
@@ -1502,7 +1507,7 @@ static void user_sdma_free_request(struct user_sdma_request *req, bool unpin)
                                &req->pq->sdma_rb_root,
                                (unsigned long)req->iovs[i].iov.iov_base,
                                req->iovs[i].iov.iov_len);
-                       if (!mnode)
+                       if (!mnode || IS_ERR(mnode))
                                continue;
 
                        node = container_of(mnode, struct sdma_mmu_node, rb);
@@ -1547,7 +1552,7 @@ static int sdma_rb_insert(struct rb_root *root, struct mmu_rb_node *mnode)
 }
 
 static void sdma_rb_remove(struct rb_root *root, struct mmu_rb_node *mnode,
-                          bool notifier)
+                          struct mm_struct *mm)
 {
        struct sdma_mmu_node *node =
                container_of(mnode, struct sdma_mmu_node, rb);
@@ -1557,14 +1562,20 @@ static void sdma_rb_remove(struct rb_root *root, struct mmu_rb_node *mnode,
        node->pq->n_locked -= node->npages;
        spin_unlock(&node->pq->evict_lock);
 
-       unpin_vector_pages(notifier ? NULL : current->mm, node->pages,
+       /*
+        * If mm is set, we are being called by the MMU notifier and we
+        * should not pass a mm_struct to unpin_vector_pages(). This is to
+        * prevent a deadlock when hfi1_release_user_pages() attempts to
+        * take the mmap_sem, which the MMU notifier has already taken.
+        */
+       unpin_vector_pages(mm ? NULL : current->mm, node->pages, 0,
                           node->npages);
        /*
         * If called by the MMU notifier, we have to adjust the pinned
         * page count ourselves.
         */
-       if (notifier)
-               current->mm->pinned_vm -= node->npages;
+       if (mm)
+               mm->pinned_vm -= node->npages;
        kfree(node);
 }
 
index 9b7cc7d..13a5ddc 100644 (file)
@@ -1792,7 +1792,7 @@ static short _rtl92e_tx(struct net_device *dev, struct sk_buff *skb)
        __skb_queue_tail(&ring->queue, skb);
        pdesc->OWN = 1;
        spin_unlock_irqrestore(&priv->irq_th_lock, flags);
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        rtl92e_writew(dev, TPPoll, 0x01 << tcb_desc->queue_index);
        return 0;
index cfab715..62154e3 100644 (file)
@@ -1991,7 +1991,7 @@ static short rtllib_sta_ps_sleep(struct rtllib_device *ieee, u64 *time)
                return 2;
 
        if (!time_after(jiffies,
-                       ieee->dev->trans_start + msecs_to_jiffies(timeout)))
+                       dev_trans_start(ieee->dev) + msecs_to_jiffies(timeout)))
                return 0;
        if (!time_after(jiffies,
                        ieee->last_rx_ps_time + msecs_to_jiffies(timeout)))
index ae1274c..d705595 100644 (file)
@@ -249,7 +249,7 @@ inline void softmac_mgmt_xmit(struct sk_buff *skb, struct ieee80211_device *ieee
                                ieee->seq_ctrl[0]++;
 
                        /* avoid watchdog triggers */
-                       ieee->dev->trans_start = jiffies;
+                       netif_trans_update(ieee->dev);
                        ieee->softmac_data_hard_start_xmit(skb,ieee->dev,ieee->basic_rate);
                        //dev_kfree_skb_any(skb);//edit by thomas
                }
@@ -302,7 +302,7 @@ inline void softmac_ps_mgmt_xmit(struct sk_buff *skb, struct ieee80211_device *i
                        ieee->seq_ctrl[0]++;
 
                /* avoid watchdog triggers */
-               ieee->dev->trans_start = jiffies;
+               netif_trans_update(ieee->dev);
                ieee->softmac_data_hard_start_xmit(skb,ieee->dev,ieee->basic_rate);
 
        }else{
@@ -1737,7 +1737,7 @@ static short ieee80211_sta_ps_sleep(struct ieee80211_device *ieee, u32 *time_h,
                return 2;
 
        if(!time_after(jiffies,
-                      ieee->dev->trans_start + msecs_to_jiffies(timeout)))
+                      dev_trans_start(ieee->dev) + msecs_to_jiffies(timeout)))
                return 0;
 
        if(!time_after(jiffies,
@@ -2205,7 +2205,7 @@ static void ieee80211_resume_tx(struct ieee80211_device *ieee)
                                ieee->dev, ieee->rate);
                                //(i+1)<ieee->tx_pending.txb->nr_frags);
                        ieee->stats.tx_packets++;
-                       ieee->dev->trans_start = jiffies;
+                       netif_trans_update(ieee->dev);
                }
        }
 
index 849a95e..4af0140 100644 (file)
@@ -1108,7 +1108,7 @@ static void rtl8192_tx_isr(struct urb *tx_urb)
 
        if (tcb_desc->queue_index != TXCMD_QUEUE) {
                if (tx_urb->status == 0) {
-                       dev->trans_start = jiffies;
+                       netif_trans_update(dev);
                        priv->stats.txoktotal++;
                        priv->ieee80211->LinkDetectInfo.NumTxOkInPeriod++;
                        priv->stats.txbytesunicast +=
@@ -1715,7 +1715,7 @@ short rtl8192_tx(struct net_device *dev, struct sk_buff *skb)
                                return -1;
                        }
                }
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
                atomic_inc(&priv->tx_pending[tcb_desc->queue_index]);
                return 0;
        }
index 88255ce..1f9dfba 100644 (file)
@@ -393,7 +393,7 @@ static int p80211knetdev_hard_start_xmit(struct sk_buff *skb,
                goto failed;
        }
 
-       netdev->trans_start = jiffies;
+       netif_trans_update(netdev);
 
        netdev->stats.tx_packets++;
        /* count only the packet payload */
index c37eedc..3c3dc4a 100644 (file)
@@ -376,6 +376,8 @@ config MTK_THERMAL
        tristate "Temperature sensor driver for mediatek SoCs"
        depends on ARCH_MEDIATEK || COMPILE_TEST
        depends on HAS_IOMEM
+       depends on NVMEM || NVMEM=n
+       depends on RESET_CONTROLLER
        default y
        help
          Enable this option if you want to have support for thermal management
index 36d0729..5e820b5 100644 (file)
@@ -68,12 +68,12 @@ static inline int _step_to_temp(int step)
         * Every step equals (1 * 200) / 255 celsius, and finally
         * need convert to millicelsius.
         */
-       return (HISI_TEMP_BASE + (step * 200 / 255)) * 1000;
+       return (HISI_TEMP_BASE * 1000 + (step * 200000 / 255));
 }
 
 static inline long _temp_to_step(long temp)
 {
-       return ((temp / 1000 - HISI_TEMP_BASE) * 255 / 200);
+       return ((temp - HISI_TEMP_BASE * 1000) * 255) / 200000;
 }
 
 static long hisi_thermal_get_sensor_temp(struct hisi_thermal_data *data,
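
The fix reorders the conversion so the division happens last, keeping millicelsius precision instead of truncating to whole degrees first. A quick check of the difference, assuming HISI_TEMP_BASE is -60 as in the driver of that era (treat the constant as an assumption):

#include <stdio.h>

#define HISI_TEMP_BASE	(-60)	/* assumed base, in celsius */

int main(void)
{
	int step = 100;
	long old_mc = (HISI_TEMP_BASE + (step * 200 / 255)) * 1000;
	long new_mc = HISI_TEMP_BASE * 1000 + (step * 200000 / 255);

	/* old truncates to whole celsius first: 18000 vs 18431 */
	printf("old=%ld new=%ld\n", old_mc, new_mc);
	return 0;
}
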
index 3d93b1c..507632b 100644 (file)
@@ -27,7 +27,6 @@
 #include <linux/thermal.h>
 #include <linux/reset.h>
 #include <linux/types.h>
-#include <linux/nvmem-consumer.h>
 
 /* AUXADC Registers */
 #define AUXADC_CON0_V          0x000
@@ -619,7 +618,7 @@ static struct platform_driver mtk_thermal_driver = {
 
 module_platform_driver(mtk_thermal_driver);
 
-MODULE_AUTHOR("Sascha Hauer <s.hauer@pengutronix.de");
+MODULE_AUTHOR("Sascha Hauer <s.hauer@pengutronix.de>");
 MODULE_AUTHOR("Hanyi Wu <hanyi.wu@mediatek.com>");
 MODULE_DESCRIPTION("Mediatek thermal driver");
 MODULE_LICENSE("GPL v2");
index 49ac23d..d8ec44b 100644 (file)
@@ -803,8 +803,8 @@ static int thermal_of_populate_trip(struct device_node *np,
  * otherwise, it returns a corresponding ERR_PTR(). Caller must
  * check the return value with help of IS_ERR() helper.
  */
-static struct __thermal_zone *
-thermal_of_build_thermal_zone(struct device_node *np)
+static struct __thermal_zone
+__init *thermal_of_build_thermal_zone(struct device_node *np)
 {
        struct device_node *child = NULL, *gchild;
        struct __thermal_zone *tz;
index 1246aa6..2f1a863 100644 (file)
@@ -301,7 +301,7 @@ static void divvy_up_power(u32 *req_power, u32 *max_power, int num_actors,
        capped_extra_power = 0;
        extra_power = 0;
        for (i = 0; i < num_actors; i++) {
-               u64 req_range = req_power[i] * power_range;
+               u64 req_range = (u64)req_power[i] * power_range;
 
                granted_power[i] = DIV_ROUND_CLOSEST_ULL(req_range,
                                                         total_req_power);
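
Without the (u64) cast, req_power[i] * power_range is a 32-bit multiply that wraps before being widened for the division. A short demonstration with illustrative values:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t req_power = 3000000, power_range = 5000000;
	uint64_t wrapped = req_power * power_range;		/* multiplied in 32 bits, wraps */
	uint64_t widened = (uint64_t)req_power * power_range;	/* multiplied in 64 bits */

	printf("wrapped=%llu widened=%llu\n",
	       (unsigned long long)wrapped, (unsigned long long)widened);
	return 0;
}
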
index d4b5465..5133cd1 100644 (file)
@@ -688,7 +688,7 @@ trip_point_temp_store(struct device *dev, struct device_attribute *attr,
 {
        struct thermal_zone_device *tz = to_thermal_zone(dev);
        int trip, ret;
-       unsigned long temperature;
+       int temperature;
 
        if (!tz->ops->set_trip_temp)
                return -EPERM;
@@ -696,7 +696,7 @@ trip_point_temp_store(struct device *dev, struct device_attribute *attr,
        if (!sscanf(attr->attr.name, "trip_point_%d_temp", &trip))
                return -EINVAL;
 
-       if (kstrtoul(buf, 10, &temperature))
+       if (kstrtoint(buf, 10, &temperature))
                return -EINVAL;
 
        ret = tz->ops->set_trip_temp(tz, trip, temperature);
@@ -899,9 +899,9 @@ emul_temp_store(struct device *dev, struct device_attribute *attr,
 {
        struct thermal_zone_device *tz = to_thermal_zone(dev);
        int ret = 0;
-       unsigned long temperature;
+       int temperature;
 
-       if (kstrtoul(buf, 10, &temperature))
+       if (kstrtoint(buf, 10, &temperature))
                return -EINVAL;
 
        if (!tz->ops->set_emul_temp) {
@@ -959,7 +959,7 @@ static DEVICE_ATTR(sustainable_power, S_IWUSR | S_IRUGO, sustainable_power_show,
        struct thermal_zone_device *tz = to_thermal_zone(dev);          \
                                                                        \
        if (tz->tzp)                                                    \
-               return sprintf(buf, "%u\n", tz->tzp->name);             \
+               return sprintf(buf, "%d\n", tz->tzp->name);             \
        else                                                            \
                return -EIO;                                            \
        }                                                               \
index c016207..0c27a00 100644 (file)
@@ -2662,7 +2662,7 @@ static int gsm_mux_net_start_xmit(struct sk_buff *skb,
        STATS(net).tx_bytes += skb->len;
        gsm_dlci_data_kick(dlci);
        /* And tell the kernel when the last transmit started. */
-       net->trans_start = jiffies;
+       netif_trans_update(net);
        muxnet_put(mux_net);
        return NETDEV_TX_OK;
 }
index 0058d9f..cf0dc51 100644 (file)
@@ -626,7 +626,7 @@ static int pty_unix98_ioctl(struct tty_struct *tty,
  */
 
 static struct tty_struct *ptm_unix98_lookup(struct tty_driver *driver,
-               struct inode *ptm_inode, int idx)
+               struct file *file, int idx)
 {
        /* Master must be open via /dev/ptmx */
        return ERR_PTR(-EIO);
@@ -642,12 +642,12 @@ static struct tty_struct *ptm_unix98_lookup(struct tty_driver *driver,
  */
 
 static struct tty_struct *pts_unix98_lookup(struct tty_driver *driver,
-               struct inode *pts_inode, int idx)
+               struct file *file, int idx)
 {
        struct tty_struct *tty;
 
        mutex_lock(&devpts_mutex);
-       tty = devpts_get_priv(pts_inode);
+       tty = devpts_get_priv(file->f_path.dentry);
        mutex_unlock(&devpts_mutex);
        /* Master must be open before slave */
        if (!tty)
@@ -722,7 +722,7 @@ static int ptmx_open(struct inode *inode, struct file *filp)
 {
        struct pts_fs_info *fsi;
        struct tty_struct *tty;
-       struct inode *slave_inode;
+       struct dentry *dentry;
        int retval;
        int index;
 
@@ -769,14 +769,12 @@ static int ptmx_open(struct inode *inode, struct file *filp)
 
        tty_add_file(tty, filp);
 
-       slave_inode = devpts_pty_new(fsi,
-                       MKDEV(UNIX98_PTY_SLAVE_MAJOR, index), index,
-                       tty->link);
-       if (IS_ERR(slave_inode)) {
-               retval = PTR_ERR(slave_inode);
+       dentry = devpts_pty_new(fsi, index, tty->link);
+       if (IS_ERR(dentry)) {
+               retval = PTR_ERR(dentry);
                goto err_release;
        }
-       tty->link->driver_data = slave_inode;
+       tty->link->driver_data = dentry;
 
        retval = ptm_driver->ops->open(tty, filp);
        if (retval)
index e213da0..00ad263 100644 (file)
@@ -1403,9 +1403,18 @@ static void __do_stop_tx_rs485(struct uart_8250_port *p)
        /*
         * Empty the RX FIFO, we are not interested in anything
         * received during the half-duplex transmission.
+        * Enable previously disabled RX interrupts.
         */
-       if (!(p->port.rs485.flags & SER_RS485_RX_DURING_TX))
+       if (!(p->port.rs485.flags & SER_RS485_RX_DURING_TX)) {
                serial8250_clear_fifos(p);
+
+               serial8250_rpm_get(p);
+
+               p->ier |= UART_IER_RLSI | UART_IER_RDI;
+               serial_port_out(&p->port, UART_IER, p->ier);
+
+               serial8250_rpm_put(p);
+       }
 }
 
 static void serial8250_em485_handle_stop_tx(unsigned long arg)
index 64742a0..4d7cb9c 100644 (file)
@@ -324,7 +324,6 @@ config SERIAL_8250_EM
 config SERIAL_8250_RT288X
        bool "Ralink RT288x/RT305x/RT3662/RT3883 serial port support"
        depends on SERIAL_8250
-       depends on MIPS || COMPILE_TEST
        default y if MIPS_ALCHEMY || SOC_RT288X || SOC_RT305X || SOC_RT3883 || SOC_MT7620
        help
          Selecting this option will add support for the alternate register
index c9fdfc8..d08baa6 100644 (file)
@@ -72,7 +72,7 @@ static void uartlite_outbe32(u32 val, void __iomem *addr)
        iowrite32be(val, addr);
 }
 
-static const struct uartlite_reg_ops uartlite_be = {
+static struct uartlite_reg_ops uartlite_be = {
        .in = uartlite_inbe32,
        .out = uartlite_outbe32,
 };
@@ -87,21 +87,21 @@ static void uartlite_outle32(u32 val, void __iomem *addr)
        iowrite32(val, addr);
 }
 
-static const struct uartlite_reg_ops uartlite_le = {
+static struct uartlite_reg_ops uartlite_le = {
        .in = uartlite_inle32,
        .out = uartlite_outle32,
 };
 
 static inline u32 uart_in32(u32 offset, struct uart_port *port)
 {
-       const struct uartlite_reg_ops *reg_ops = port->private_data;
+       struct uartlite_reg_ops *reg_ops = port->private_data;
 
        return reg_ops->in(port->membase + offset);
 }
 
 static inline void uart_out32(u32 val, u32 offset, struct uart_port *port)
 {
-       const struct uartlite_reg_ops *reg_ops = port->private_data;
+       struct uartlite_reg_ops *reg_ops = port->private_data;
 
        reg_ops->out(val, port->membase + offset);
 }
index f5476e2..c8c7601 100644 (file)
@@ -7708,7 +7708,7 @@ static netdev_tx_t hdlcdev_xmit(struct sk_buff *skb,
        dev_kfree_skb(skb);
 
        /* save start time for transmit timeout detection */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        /* start hardware transmitter if necessary */
        spin_lock_irqsave(&info->irq_spinlock,flags);
@@ -7764,7 +7764,7 @@ static int hdlcdev_open(struct net_device *dev)
        mgsl_program_hw(info);
 
        /* enable network layer transmit */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        netif_start_queue(dev);
 
        /* inform generic HDLC layer of current DCD status */
index c0a2f5a..d5b6471 100644 (file)
@@ -1493,7 +1493,7 @@ static netdev_tx_t hdlcdev_xmit(struct sk_buff *skb,
        dev->stats.tx_bytes += skb->len;
 
        /* save start time for transmit timeout detection */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        spin_lock_irqsave(&info->lock, flags);
        tx_load(info, skb->data, skb->len);
@@ -1552,7 +1552,7 @@ static int hdlcdev_open(struct net_device *dev)
        program_hw(info);
 
        /* enable network layer transmit */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        netif_start_queue(dev);
 
        /* inform generic HDLC layer of current DCD status */
index 90da0c7..3f89685 100644 (file)
@@ -1612,7 +1612,7 @@ static netdev_tx_t hdlcdev_xmit(struct sk_buff *skb,
        dev_kfree_skb(skb);
 
        /* save start time for transmit timeout detection */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        /* start hardware transmitter if necessary */
        spin_lock_irqsave(&info->lock,flags);
@@ -1668,7 +1668,7 @@ static int hdlcdev_open(struct net_device *dev)
        program_hw(info);
 
        /* enable network layer transmit */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        netif_start_queue(dev);
 
        /* inform generic HDLC layer of current DCD status */
index 9b04d72..24d5491 100644 (file)
@@ -1367,12 +1367,12 @@ static ssize_t tty_line_name(struct tty_driver *driver, int index, char *p)
  *     Locking: tty_mutex must be held. If the tty is found, bump the tty kref.
  */
 static struct tty_struct *tty_driver_lookup_tty(struct tty_driver *driver,
-               struct inode *inode, int idx)
+               struct file *file, int idx)
 {
        struct tty_struct *tty;
 
        if (driver->ops->lookup)
-               tty = driver->ops->lookup(driver, inode, idx);
+               tty = driver->ops->lookup(driver, file, idx);
        else
                tty = driver->ttys[idx];
 
@@ -2040,7 +2040,7 @@ static struct tty_struct *tty_open_by_driver(dev_t device, struct inode *inode,
        }
 
        /* check whether we're reopening an existing tty */
-       tty = tty_driver_lookup_tty(driver, inode, index);
+       tty = tty_driver_lookup_tty(driver, filp, index);
        if (IS_ERR(tty)) {
                mutex_unlock(&tty_mutex);
                goto out;
index fa20f5a..34277ce 100644 (file)
@@ -1150,6 +1150,11 @@ static int dwc3_suspend(struct device *dev)
        phy_exit(dwc->usb2_generic_phy);
        phy_exit(dwc->usb3_generic_phy);
 
+       usb_phy_set_suspend(dwc->usb2_phy, 1);
+       usb_phy_set_suspend(dwc->usb3_phy, 1);
+       WARN_ON(phy_power_off(dwc->usb2_generic_phy) < 0);
+       WARN_ON(phy_power_off(dwc->usb3_generic_phy) < 0);
+
        pinctrl_pm_select_sleep_state(dev);
 
        return 0;
@@ -1163,11 +1168,21 @@ static int dwc3_resume(struct device *dev)
 
        pinctrl_pm_select_default_state(dev);
 
+       usb_phy_set_suspend(dwc->usb2_phy, 0);
+       usb_phy_set_suspend(dwc->usb3_phy, 0);
+       ret = phy_power_on(dwc->usb2_generic_phy);
+       if (ret < 0)
+               return ret;
+
+       ret = phy_power_on(dwc->usb3_generic_phy);
+       if (ret < 0)
+               goto err_usb2phy_power;
+
        usb_phy_init(dwc->usb3_phy);
        usb_phy_init(dwc->usb2_phy);
        ret = phy_init(dwc->usb2_generic_phy);
        if (ret < 0)
-               return ret;
+               goto err_usb3phy_power;
 
        ret = phy_init(dwc->usb3_generic_phy);
        if (ret < 0)
@@ -1200,6 +1215,12 @@ static int dwc3_resume(struct device *dev)
 err_usb2phy_init:
        phy_exit(dwc->usb2_generic_phy);
 
+err_usb3phy_power:
+       phy_power_off(dwc->usb3_generic_phy);
+
+err_usb2phy_power:
+       phy_power_off(dwc->usb2_generic_phy);
+
        return ret;
 }
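
The resume path above now unwinds through cascaded error labels, each undoing exactly the steps that completed, in reverse order. A compact sketch of the idiom with placeholder functions:

#include <stdio.h>

static int step_a(void) { return 0; }
static int step_b(void) { return -1; }	/* pretend this one fails */
static int step_c(void) { return 0; }
static void undo_a(void) { puts("undo a"); }
static void undo_b(void) { puts("undo b"); }

static int resume(void)
{
	int ret;

	ret = step_a();
	if (ret)
		return ret;
	ret = step_b();
	if (ret)
		goto err_a;	/* only step_a needs undoing */
	ret = step_c();
	if (ret)
		goto err_b;
	return 0;

err_b:
	undo_b();
err_a:
	undo_a();
	return ret;
}

int main(void)
{
	printf("resume() = %d\n", resume());
	return 0;
}
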
 
index 9ac37fe..cebf9e3 100644 (file)
@@ -645,7 +645,7 @@ int dwc3_debugfs_init(struct dwc3 *dwc)
        file = debugfs_create_regset32("regdump", S_IRUGO, root, dwc->regset);
        if (!file) {
                ret = -ENOMEM;
-               goto err1;
+               goto err2;
        }
 
        if (IS_ENABLED(CONFIG_USB_DWC3_DUAL_ROLE)) {
@@ -653,7 +653,7 @@ int dwc3_debugfs_init(struct dwc3 *dwc)
                                dwc, &dwc3_mode_fops);
                if (!file) {
                        ret = -ENOMEM;
-                       goto err1;
+                       goto err2;
                }
        }
 
@@ -663,19 +663,22 @@ int dwc3_debugfs_init(struct dwc3 *dwc)
                                dwc, &dwc3_testmode_fops);
                if (!file) {
                        ret = -ENOMEM;
-                       goto err1;
+                       goto err2;
                }
 
                file = debugfs_create_file("link_state", S_IRUGO | S_IWUSR, root,
                                dwc, &dwc3_link_state_fops);
                if (!file) {
                        ret = -ENOMEM;
-                       goto err1;
+                       goto err2;
                }
        }
 
        return 0;
 
+err2:
+       kfree(dwc->regset);
+
 err1:
        debugfs_remove_recursive(root);
 
@@ -686,5 +689,5 @@ err0:
 void dwc3_debugfs_exit(struct dwc3 *dwc)
 {
        debugfs_remove_recursive(dwc->root);
-       dwc->root = NULL;
+       kfree(dwc->regset);
 }
index 22e9606..55da2c7 100644 (file)
@@ -496,7 +496,7 @@ static int dwc3_omap_probe(struct platform_device *pdev)
        ret = pm_runtime_get_sync(dev);
        if (ret < 0) {
                dev_err(dev, "get_sync failed with err %d\n", ret);
-               goto err0;
+               goto err1;
        }
 
        dwc3_omap_map_offset(omap);
@@ -516,28 +516,24 @@ static int dwc3_omap_probe(struct platform_device *pdev)
 
        ret = dwc3_omap_extcon_register(omap);
        if (ret < 0)
-               goto err2;
+               goto err1;
 
        ret = of_platform_populate(node, NULL, NULL, dev);
        if (ret) {
                dev_err(&pdev->dev, "failed to create dwc3 core\n");
-               goto err3;
+               goto err2;
        }
 
        dwc3_omap_enable_irqs(omap);
 
        return 0;
 
-err3:
+err2:
        extcon_unregister_notifier(omap->edev, EXTCON_USB, &omap->vbus_nb);
        extcon_unregister_notifier(omap->edev, EXTCON_USB_HOST, &omap->id_nb);
-err2:
-       dwc3_omap_disable_irqs(omap);
 
 err1:
        pm_runtime_put_sync(dev);
-
-err0:
        pm_runtime_disable(dev);
 
        return ret;
index d54a028..8e4a1b1 100644 (file)
@@ -2936,6 +2936,9 @@ void dwc3_gadget_exit(struct dwc3 *dwc)
 
 int dwc3_gadget_suspend(struct dwc3 *dwc)
 {
+       if (!dwc->gadget_driver)
+               return 0;
+
        if (dwc->pullups_connected) {
                dwc3_gadget_disable_irq(dwc);
                dwc3_gadget_run_stop(dwc, true, true);
@@ -2954,6 +2957,9 @@ int dwc3_gadget_resume(struct dwc3 *dwc)
        struct dwc3_ep          *dep;
        int                     ret;
 
+       if (!dwc->gadget_driver)
+               return 0;
+
        /* Start with SuperSpeed Default */
        dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(512);
 
index de9ffd6..524e233 100644 (file)
@@ -651,6 +651,8 @@ static int bos_desc(struct usb_composite_dev *cdev)
                ssp_cap->bLength = USB_DT_USB_SSP_CAP_SIZE(1);
                ssp_cap->bDescriptorType = USB_DT_DEVICE_CAPABILITY;
                ssp_cap->bDevCapabilityType = USB_SSP_CAP_TYPE;
+               ssp_cap->bReserved = 0;
+               ssp_cap->wReserved = 0;
 
                /* SSAC = 1 (2 attributes) */
                ssp_cap->bmAttributes = cpu_to_le32(1);
index e21ca2b..15b648c 100644 (file)
@@ -646,6 +646,7 @@ static void ffs_user_copy_worker(struct work_struct *work)
                                                   work);
        int ret = io_data->req->status ? io_data->req->status :
                                         io_data->req->actual;
+       bool kiocb_has_eventfd = io_data->kiocb->ki_flags & IOCB_EVENTFD;
 
        if (io_data->read && ret > 0) {
                use_mm(io_data->mm);
@@ -657,13 +658,11 @@ static void ffs_user_copy_worker(struct work_struct *work)
 
        io_data->kiocb->ki_complete(io_data->kiocb, ret, ret);
 
-       if (io_data->ffs->ffs_eventfd &&
-           !(io_data->kiocb->ki_flags & IOCB_EVENTFD))
+       if (io_data->ffs->ffs_eventfd && !kiocb_has_eventfd)
                eventfd_signal(io_data->ffs->ffs_eventfd, 1);
 
        usb_ep_free_request(io_data->ep, io_data->req);
 
-       io_data->kiocb->private = NULL;
        if (io_data->read)
                kfree(io_data->to_free);
        kfree(io_data->buf);
index 637809e..a3f7e7c 100644 (file)
@@ -597,7 +597,7 @@ static netdev_tx_t eth_start_xmit(struct sk_buff *skb,
                DBG(dev, "tx queue err %d\n", retval);
                break;
        case 0:
-               net->trans_start = jiffies;
+               netif_trans_update(net);
                atomic_inc(&dev->tx_qlen);
        }
 
index 541ead4..85b8517 100644 (file)
@@ -386,9 +386,7 @@ void ceph_put_mds_session(struct ceph_mds_session *s)
             atomic_read(&s->s_ref), atomic_read(&s->s_ref)-1);
        if (atomic_dec_and_test(&s->s_ref)) {
                if (s->s_auth.authorizer)
-                       ceph_auth_destroy_authorizer(
-                               s->s_mdsc->fsc->client->monc.auth,
-                               s->s_auth.authorizer);
+                       ceph_auth_destroy_authorizer(s->s_auth.authorizer);
                kfree(s);
        }
 }
@@ -3900,7 +3898,7 @@ static struct ceph_auth_handshake *get_authorizer(struct ceph_connection *con,
        struct ceph_auth_handshake *auth = &s->s_auth;
 
        if (force_new && auth->authorizer) {
-               ceph_auth_destroy_authorizer(ac, auth->authorizer);
+               ceph_auth_destroy_authorizer(auth->authorizer);
                auth->authorizer = NULL;
        }
        if (!auth->authorizer) {
index 0af8e7d..0b2954d 100644 (file)
@@ -604,8 +604,7 @@ void devpts_put_ref(struct pts_fs_info *fsi)
  *
  * The created inode is returned. Remove it from /dev/pts/ by devpts_pty_kill.
  */
-struct inode *devpts_pty_new(struct pts_fs_info *fsi, dev_t device, int index,
-               void *priv)
+struct dentry *devpts_pty_new(struct pts_fs_info *fsi, int index, void *priv)
 {
        struct dentry *dentry;
        struct super_block *sb;
@@ -629,25 +628,21 @@ struct inode *devpts_pty_new(struct pts_fs_info *fsi, dev_t device, int index,
        inode->i_uid = opts->setuid ? opts->uid : current_fsuid();
        inode->i_gid = opts->setgid ? opts->gid : current_fsgid();
        inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
-       init_special_inode(inode, S_IFCHR|opts->mode, device);
-       inode->i_private = priv;
+       init_special_inode(inode, S_IFCHR|opts->mode, MKDEV(UNIX98_PTY_SLAVE_MAJOR, index));
 
        sprintf(s, "%d", index);
 
-       inode_lock(d_inode(root));
-
        dentry = d_alloc_name(root, s);
        if (dentry) {
+               dentry->d_fsdata = priv;
                d_add(dentry, inode);
                fsnotify_create(d_inode(root), dentry);
        } else {
                iput(inode);
-               inode = ERR_PTR(-ENOMEM);
+               dentry = ERR_PTR(-ENOMEM);
        }
 
-       inode_unlock(d_inode(root));
-
-       return inode;
+       return dentry;
 }
 
 /**
@@ -656,24 +651,10 @@ struct inode *devpts_pty_new(struct pts_fs_info *fsi, dev_t device, int index,
  *
  * Returns whatever was passed as priv in devpts_pty_new for a given inode.
  */
-void *devpts_get_priv(struct inode *pts_inode)
+void *devpts_get_priv(struct dentry *dentry)
 {
-       struct dentry *dentry;
-       void *priv = NULL;
-
-       BUG_ON(pts_inode->i_rdev == MKDEV(TTYAUX_MAJOR, PTMX_MINOR));
-
-       /* Ensure dentry has not been deleted by devpts_pty_kill() */
-       dentry = d_find_alias(pts_inode);
-       if (!dentry)
-               return NULL;
-
-       if (pts_inode->i_sb->s_magic == DEVPTS_SUPER_MAGIC)
-               priv = pts_inode->i_private;
-
-       dput(dentry);
-
-       return priv;
+       WARN_ON_ONCE(dentry->d_sb->s_magic != DEVPTS_SUPER_MAGIC);
+       return dentry->d_fsdata;
 }
 
 /**
@@ -682,24 +663,14 @@ void *devpts_get_priv(struct inode *pts_inode)
  *
  * This is an inverse operation of devpts_pty_new.
  */
-void devpts_pty_kill(struct inode *inode)
+void devpts_pty_kill(struct dentry *dentry)
 {
-       struct super_block *sb = pts_sb_from_inode(inode);
-       struct dentry *root = sb->s_root;
-       struct dentry *dentry;
+       WARN_ON_ONCE(dentry->d_sb->s_magic != DEVPTS_SUPER_MAGIC);
 
-       BUG_ON(inode->i_rdev == MKDEV(TTYAUX_MAJOR, PTMX_MINOR));
-
-       inode_lock(d_inode(root));
-
-       dentry = d_find_alias(inode);
-
-       drop_nlink(inode);
+       dentry->d_fsdata = NULL;
+       drop_nlink(dentry->d_inode);
        d_delete(dentry);
        dput(dentry);   /* d_alloc_name() in devpts_pty_new() */
-       dput(dentry);           /* d_find_alias above */
-
-       inode_unlock(d_inode(root));
 }
 
 static int __init init_devpts_fs(void)
index 719924d..dcad5e2 100644 (file)
@@ -1295,7 +1295,7 @@ static int fuse_get_user_pages(struct fuse_req *req, struct iov_iter *ii,
 
        *nbytesp = nbytes;
 
-       return ret;
+       return ret < 0 ? ret : 0;
 }
 
 static inline int fuse_iter_npages(const struct iov_iter *ii_p)
index 9aed6e2..13719d3 100644 (file)
@@ -2455,6 +2455,8 @@ int dlm_deref_lockres_done_handler(struct o2net_msg *msg, u32 len, void *data,
 
        spin_unlock(&dlm->spinlock);
 
+       ret = 0;
+
 done:
        dlm_put(dlm);
        return ret;
index 229cb54..5415835 100644 (file)
@@ -1518,6 +1518,32 @@ static struct page *can_gather_numa_stats(pte_t pte, struct vm_area_struct *vma,
        return page;
 }
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static struct page *can_gather_numa_stats_pmd(pmd_t pmd,
+                                             struct vm_area_struct *vma,
+                                             unsigned long addr)
+{
+       struct page *page;
+       int nid;
+
+       if (!pmd_present(pmd))
+               return NULL;
+
+       page = vm_normal_page_pmd(vma, addr, pmd);
+       if (!page)
+               return NULL;
+
+       if (PageReserved(page))
+               return NULL;
+
+       nid = page_to_nid(page);
+       if (!node_isset(nid, node_states[N_MEMORY]))
+               return NULL;
+
+       return page;
+}
+#endif
+
 static int gather_pte_stats(pmd_t *pmd, unsigned long addr,
                unsigned long end, struct mm_walk *walk)
 {
@@ -1527,14 +1553,14 @@ static int gather_pte_stats(pmd_t *pmd, unsigned long addr,
        pte_t *orig_pte;
        pte_t *pte;
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
        ptl = pmd_trans_huge_lock(pmd, vma);
        if (ptl) {
-               pte_t huge_pte = *(pte_t *)pmd;
                struct page *page;
 
-               page = can_gather_numa_stats(huge_pte, vma, addr);
+               page = can_gather_numa_stats_pmd(*pmd, vma, addr);
                if (page)
-                       gather_stats(page, md, pte_dirty(huge_pte),
+                       gather_stats(page, md, pmd_dirty(*pmd),
                                     HPAGE_PMD_SIZE/PAGE_SIZE);
                spin_unlock(ptl);
                return 0;
@@ -1542,6 +1568,7 @@ static int gather_pte_stats(pmd_t *pmd, unsigned long addr,
 
        if (pmd_trans_unstable(pmd))
                return 0;
+#endif
        orig_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
        do {
                struct page *page = can_gather_numa_stats(*pte, vma, addr);
index d07a2f9..8b25267 100644
@@ -47,7 +47,7 @@ void quota_send_warning(struct kqid qid, dev_t dev,
        void *msg_head;
        int ret;
        int msg_size = 4 * nla_total_size(sizeof(u32)) +
-                      2 * nla_total_size(sizeof(u64));
+                      2 * nla_total_size_64bit(sizeof(u64));
 
        /* We have to allocate using GFP_NOFS as we are called from a
         * filesystem performing write and thus further recursion into
@@ -68,8 +68,9 @@ void quota_send_warning(struct kqid qid, dev_t dev,
        ret = nla_put_u32(skb, QUOTA_NL_A_QTYPE, qid.type);
        if (ret)
                goto attr_err_out;
-       ret = nla_put_u64(skb, QUOTA_NL_A_EXCESS_ID,
-                         from_kqid_munged(&init_user_ns, qid));
+       ret = nla_put_u64_64bit(skb, QUOTA_NL_A_EXCESS_ID,
+                               from_kqid_munged(&init_user_ns, qid),
+                               QUOTA_NL_A_PAD);
        if (ret)
                goto attr_err_out;
        ret = nla_put_u32(skb, QUOTA_NL_A_WARNING, warntype);
@@ -81,8 +82,9 @@ void quota_send_warning(struct kqid qid, dev_t dev,
        ret = nla_put_u32(skb, QUOTA_NL_A_DEV_MINOR, MINOR(dev));
        if (ret)
                goto attr_err_out;
-       ret = nla_put_u64(skb, QUOTA_NL_A_CAUSED_ID,
-                         from_kuid_munged(&init_user_ns, current_uid()));
+       ret = nla_put_u64_64bit(skb, QUOTA_NL_A_CAUSED_ID,
+                               from_kuid_munged(&init_user_ns, current_uid()),
+                               QUOTA_NL_A_PAD);
        if (ret)
                goto attr_err_out;
        genlmsg_end(skb, msg_head);
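
The switch to nla_put_u64_64bit() (and nla_total_size_64bit() when sizing the
message) keeps 64-bit netlink attributes aligned on 64-bit boundaries by
inserting a pad attribute where needed; QUOTA_NL_A_PAD is the attribute type
reserved for that padding. A minimal sketch of the general pattern, with
hypothetical attribute names:

        if (nla_put_u64_64bit(skb, MY_ATTR_VALUE, value, MY_ATTR_PAD))
                goto nla_put_failure;
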
index fa92fe8..36661ac 100644
@@ -919,14 +919,14 @@ static int udf_load_pvoldesc(struct super_block *sb, sector_t block)
 #endif
        }
 
-       ret = udf_CS0toUTF8(outstr, 31, pvoldesc->volIdent, 32);
+       ret = udf_dstrCS0toUTF8(outstr, 31, pvoldesc->volIdent, 32);
        if (ret < 0)
                goto out_bh;
 
        strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret);
        udf_debug("volIdent[] = '%s'\n", UDF_SB(sb)->s_volume_ident);
 
-       ret = udf_CS0toUTF8(outstr, 127, pvoldesc->volSetIdent, 128);
+       ret = udf_dstrCS0toUTF8(outstr, 127, pvoldesc->volSetIdent, 128);
        if (ret < 0)
                goto out_bh;
 
index 972b706..263829e 100644
@@ -212,7 +212,7 @@ extern int udf_get_filename(struct super_block *, const uint8_t *, int,
                            uint8_t *, int);
 extern int udf_put_filename(struct super_block *, const uint8_t *, int,
                            uint8_t *, int);
-extern int udf_CS0toUTF8(uint8_t *, int, const uint8_t *, int);
+extern int udf_dstrCS0toUTF8(uint8_t *, int, const uint8_t *, int);
 
 /* ialloc.c */
 extern void udf_free_inode(struct inode *);
index 3ff42f4..695389a 100644
@@ -335,9 +335,21 @@ try_again:
        return u_len;
 }
 
-int udf_CS0toUTF8(uint8_t *utf_o, int o_len, const uint8_t *ocu_i, int i_len)
+int udf_dstrCS0toUTF8(uint8_t *utf_o, int o_len,
+                     const uint8_t *ocu_i, int i_len)
 {
-       return udf_name_from_CS0(utf_o, o_len, ocu_i, i_len,
+       int s_len = 0;
+
+       if (i_len > 0) {
+               s_len = ocu_i[i_len - 1];
+               if (s_len >= i_len) {
+                       pr_err("incorrect dstring lengths (%d/%d)\n",
+                              s_len, i_len);
+                       return -EINVAL;
+               }
+       }
+
+       return udf_name_from_CS0(utf_o, o_len, ocu_i, s_len,
                                 udf_uni2char_utf8, 0);
 }
 
index e56272c..bf2d34c 100644
@@ -108,11 +108,15 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
        u32 val;
 
        preempt_disable();
-       if (unlikely(get_user(val, uaddr) != 0))
+       if (unlikely(get_user(val, uaddr) != 0)) {
+               preempt_enable();
                return -EFAULT;
+       }
 
-       if (val == oldval && unlikely(put_user(newval, uaddr) != 0))
+       if (val == oldval && unlikely(put_user(newval, uaddr) != 0)) {
+               preempt_enable();
                return -EFAULT;
+       }
 
        *uval = val;
        preempt_enable();
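
The fix balances preempt_disable() with a preempt_enable() on each early
return. An equivalent structure, shown here only as a sketch, funnels all
exits through a single label instead:

        int ret = 0;

        preempt_disable();
        if (unlikely(get_user(val, uaddr) != 0)) {
                ret = -EFAULT;
                goto out;
        }
        if (val == oldval && unlikely(put_user(newval, uaddr) != 0)) {
                ret = -EFAULT;
                goto out;
        }
        *uval = val;
out:
        preempt_enable();
        return ret;
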
index 461a055..cebecff 100644
@@ -39,6 +39,8 @@ static inline bool drm_arch_can_wc_memory(void)
 {
 #if defined(CONFIG_PPC) && !defined(CONFIG_NOT_COHERENT_CACHE)
        return false;
+#elif defined(CONFIG_MIPS) && defined(CONFIG_CPU_LOONGSON3)
+       return false;
 #else
        return true;
 #endif
index f63afdc..8ee27b8 100644
@@ -180,12 +180,13 @@ void bpf_register_prog_type(struct bpf_prog_type_list *tl);
 void bpf_register_map_type(struct bpf_map_type_list *tl);
 
 struct bpf_prog *bpf_prog_get(u32 ufd);
+struct bpf_prog *bpf_prog_inc(struct bpf_prog *prog);
 void bpf_prog_put(struct bpf_prog *prog);
 void bpf_prog_put_rcu(struct bpf_prog *prog);
 
 struct bpf_map *bpf_map_get_with_uref(u32 ufd);
 struct bpf_map *__bpf_map_get(struct fd f);
-void bpf_map_inc(struct bpf_map *map, bool uref);
+struct bpf_map *bpf_map_inc(struct bpf_map *map, bool uref);
 void bpf_map_put_with_uref(struct bpf_map *map);
 void bpf_map_put(struct bpf_map *map);
 int bpf_map_precharge_memlock(u32 pages);
index 260d78b..1563265 100644
  */
 
 struct ceph_auth_client;
-struct ceph_authorizer;
 struct ceph_msg;
 
+struct ceph_authorizer {
+       void (*destroy)(struct ceph_authorizer *);
+};
+
 struct ceph_auth_handshake {
        struct ceph_authorizer *authorizer;
        void *authorizer_buf;
@@ -62,8 +65,6 @@ struct ceph_auth_client_ops {
                                 struct ceph_auth_handshake *auth);
        int (*verify_authorizer_reply)(struct ceph_auth_client *ac,
                                       struct ceph_authorizer *a, size_t len);
-       void (*destroy_authorizer)(struct ceph_auth_client *ac,
-                                  struct ceph_authorizer *a);
        void (*invalidate_authorizer)(struct ceph_auth_client *ac,
                                      int peer_type);
 
@@ -112,8 +113,7 @@ extern int ceph_auth_is_authenticated(struct ceph_auth_client *ac);
 extern int ceph_auth_create_authorizer(struct ceph_auth_client *ac,
                                       int peer_type,
                                       struct ceph_auth_handshake *auth);
-extern void ceph_auth_destroy_authorizer(struct ceph_auth_client *ac,
-                                        struct ceph_authorizer *a);
+void ceph_auth_destroy_authorizer(struct ceph_authorizer *a);
 extern int ceph_auth_update_authorizer(struct ceph_auth_client *ac,
                                       int peer_type,
                                       struct ceph_auth_handshake *a);
index 4343df8..cbf4609 100644
@@ -16,7 +16,6 @@ struct ceph_msg;
 struct ceph_snap_context;
 struct ceph_osd_request;
 struct ceph_osd_client;
-struct ceph_authorizer;
 
 /*
  * completion callback for async writepages
index 3e39ae5..5b17de6 100644
@@ -444,6 +444,7 @@ struct cgroup_subsys {
        int (*can_attach)(struct cgroup_taskset *tset);
        void (*cancel_attach)(struct cgroup_taskset *tset);
        void (*attach)(struct cgroup_taskset *tset);
+       void (*post_attach)(void);
        int (*can_fork)(struct task_struct *task);
        void (*cancel_fork)(struct task_struct *task);
        void (*fork)(struct task_struct *task);
index fea160e..85a868c 100644
@@ -137,8 +137,6 @@ static inline void set_mems_allowed(nodemask_t nodemask)
        task_unlock(current);
 }
 
-extern void cpuset_post_attach_flush(void);
-
 #else /* !CONFIG_CPUSETS */
 
 static inline bool cpusets_enabled(void) { return false; }
@@ -245,10 +243,6 @@ static inline bool read_mems_allowed_retry(unsigned int seq)
        return false;
 }
 
-static inline void cpuset_post_attach_flush(void)
-{
-}
-
 #endif /* !CONFIG_CPUSETS */
 
 #endif /* _LINUX_CPUSET_H */
index 358a4db..5871f29 100644
@@ -27,11 +27,11 @@ int devpts_new_index(struct pts_fs_info *);
 void devpts_kill_index(struct pts_fs_info *, int);
 
 /* mknod in devpts */
-struct inode *devpts_pty_new(struct pts_fs_info *, dev_t, int, void *);
+struct dentry *devpts_pty_new(struct pts_fs_info *, int, void *);
 /* get private structure */
-void *devpts_get_priv(struct inode *pts_inode);
+void *devpts_get_priv(struct dentry *);
 /* unlink */
-void devpts_pty_kill(struct inode *inode);
+void devpts_pty_kill(struct dentry *);
 
 #endif
 
index 43aa1f8..ec1411c 100644
@@ -352,6 +352,22 @@ struct sk_filter {
 
 #define BPF_SKB_CB_LEN QDISC_CB_PRIV_LEN
 
+struct bpf_skb_data_end {
+       struct qdisc_skb_cb qdisc_cb;
+       void *data_end;
+};
+
+/* compute the linear packet data range [data, data_end) which
+ * will be accessed by cls_bpf and act_bpf programs
+ */
+static inline void bpf_compute_data_end(struct sk_buff *skb)
+{
+       struct bpf_skb_data_end *cb = (struct bpf_skb_data_end *)skb->cb;
+
+       BUILD_BUG_ON(sizeof(*cb) > FIELD_SIZEOF(struct sk_buff, cb));
+       cb->data_end = skb->data + skb_headlen(skb);
+}
+
 static inline u8 *bpf_skb_cb(struct sk_buff *skb)
 {
        /* eBPF programs may read/write skb->cb[] area to transfer meta
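
bpf_compute_data_end() records the end of the linear data so cls_bpf and
act_bpf programs can perform verifier-checked direct packet access. A minimal
sketch of the pattern such a program would use, assuming the data/data_end
fields exposed through struct __sk_buff:

        void *data = (void *)(long)skb->data;
        void *data_end = (void *)(long)skb->data_end;
        struct ethhdr *eth = data;

        /* every access must be preceded by an explicit bounds check */
        if (data + sizeof(*eth) > data_end)
                return TC_ACT_OK;
        if (eth->h_proto == htons(ETH_P_IP))
                /* ... parse further ... */;
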
index 1afde47..79c52fa 100644
 #error Wordsize not 32 or 64
 #endif
 
+/*
+ * The above primes are actively bad for hashing, since they are
+ * too sparse. The 32-bit one is mostly ok, the 64-bit one causes
+ * real problems. Besides, the "prime" part is pointless for the
+ * multiplicative hash.
+ *
+ * Although a random odd number will do, it turns out that the golden
+ * ratio phi = (sqrt(5)-1)/2, or its negative, has particularly nice
+ * properties.
+ *
+ * These are the negative, (1 - phi) = (phi^2) = (3 - sqrt(5))/2.
+ * (See Knuth vol 3, section 6.4, exercise 9.)
+ */
+#define GOLDEN_RATIO_32 0x61C88647
+#define GOLDEN_RATIO_64 0x61C8864680B583EBull
+
 static __always_inline u64 hash_64(u64 val, unsigned int bits)
 {
        u64 hash = val;
 
-#if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
-       hash = hash * GOLDEN_RATIO_PRIME_64;
+#if BITS_PER_LONG == 64
+       hash = hash * GOLDEN_RATIO_64;
 #else
        /*  Sigh, gcc can't optimise this alone like it does for 32 bits. */
        u64 n = hash;
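
With the new constants, hash_64() reduces to one multiply and a shift on
64-bit machines. A minimal sketch of the 32-bit flavour of the same
multiplicative scheme (my_hash_32 is illustrative only; the in-tree helpers
are hash_32()/hash_64()):

        static inline u32 my_hash_32(u32 val, unsigned int bits)
        {
                /* the high bits of val * (phi-derived constant) are well mixed */
                return (val * GOLDEN_RATIO_32) >> (32 - bits);
        }
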
index 7008623..d7b9e53 100644
@@ -152,6 +152,7 @@ static inline bool is_huge_zero_pmd(pmd_t pmd)
 }
 
 struct page *get_huge_zero_page(void);
+void put_huge_zero_page(void);
 
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 #define HPAGE_PMD_SHIFT ({ BUILD_BUG(); 0; })
@@ -208,6 +209,10 @@ static inline bool is_huge_zero_page(struct page *page)
        return false;
 }
 
+static inline void put_huge_zero_page(void)
+{
+       BUILD_BUG();
+}
 
 static inline struct page *follow_devmap_pmd(struct vm_area_struct *vma,
                unsigned long addr, pmd_t *pmd, int flags)
index d3e4156..acedbb6 100644
@@ -47,6 +47,7 @@
 #define IEEE802154_ADDR_SHORT_UNSPEC   0xfffe
 
 #define IEEE802154_EXTENDED_ADDR_LEN   8
+#define IEEE802154_SHORT_ADDR_LEN      2
 
 #define IEEE802154_LIFS_PERIOD         40
 #define IEEE802154_SIFS_PERIOD         12
@@ -218,6 +219,7 @@ enum {
 /* frame control handling */
 #define IEEE802154_FCTL_FTYPE          0x0003
 #define IEEE802154_FCTL_ACKREQ         0x0020
+#define IEEE802154_FCTL_SECEN          0x0004
 #define IEEE802154_FCTL_INTRA_PAN      0x0040
 
 #define IEEE802154_FTYPE_DATA          0x0001
@@ -232,6 +234,15 @@ static inline int ieee802154_is_data(__le16 fc)
                cpu_to_le16(IEEE802154_FTYPE_DATA);
 }
 
+/**
+ * ieee802154_is_secen - check if Security bit is set
+ * @fc: frame control bytes in little-endian byteorder
+ */
+static inline bool ieee802154_is_secen(__le16 fc)
+{
+       return fc & cpu_to_le16(IEEE802154_FCTL_SECEN);
+}
+
 /**
  * ieee802154_is_ackreq - check if acknowledgment request bit is set
  * @fc: frame control bytes in little-endian byteorder
@@ -260,17 +271,17 @@ static inline bool ieee802154_is_intra_pan(__le16 fc)
  *
  * @len: psdu len with (MHR + payload + MFR)
  */
-static inline bool ieee802154_is_valid_psdu_len(const u8 len)
+static inline bool ieee802154_is_valid_psdu_len(u8 len)
 {
        return (len == IEEE802154_ACK_PSDU_LEN ||
                (len >= IEEE802154_MIN_PSDU_LEN && len <= IEEE802154_MTU));
 }
 
 /**
- * ieee802154_is_valid_psdu_len - check if extended addr is valid
+ * ieee802154_is_valid_extended_unicast_addr - check if extended addr is valid
  * @addr: extended addr to check
  */
-static inline bool ieee802154_is_valid_extended_unicast_addr(const __le64 addr)
+static inline bool ieee802154_is_valid_extended_unicast_addr(__le64 addr)
 {
        /* Bail out if the address is all zero, or if the group
         * address bit is set.
@@ -279,6 +290,34 @@ static inline bool ieee802154_is_valid_extended_unicast_addr(const __le64 addr)
                !(addr & cpu_to_le64(0x0100000000000000ULL)));
 }
 
+/**
+ * ieee802154_is_broadcast_short_addr - check if short addr is broadcast
+ * @addr: short addr to check
+ */
+static inline bool ieee802154_is_broadcast_short_addr(__le16 addr)
+{
+       return (addr == cpu_to_le16(IEEE802154_ADDR_SHORT_BROADCAST));
+}
+
+/**
+ * ieee802154_is_unspec_short_addr - check if short addr is unspecified
+ * @addr: short addr to check
+ */
+static inline bool ieee802154_is_unspec_short_addr(__le16 addr)
+{
+       return (addr == cpu_to_le16(IEEE802154_ADDR_SHORT_UNSPEC));
+}
+
+/**
+ * ieee802154_is_valid_src_short_addr - check if source short address is valid
+ * @addr: short addr to check
+ */
+static inline bool ieee802154_is_valid_src_short_addr(__le16 addr)
+{
+       return !(ieee802154_is_broadcast_short_addr(addr) ||
+                ieee802154_is_unspec_short_addr(addr));
+}
+
 /**
  * ieee802154_random_extended_addr - generates a random extended address
  * @addr: extended addr pointer to place the random address
index d556973..548fd53 100644
@@ -28,6 +28,11 @@ static inline struct ethhdr *eth_hdr(const struct sk_buff *skb)
        return (struct ethhdr *)skb_mac_header(skb);
 }
 
+static inline struct ethhdr *inner_eth_hdr(const struct sk_buff *skb)
+{
+       return (struct ethhdr *)skb_inner_mac_header(skb);
+}
+
 int eth_header_parse(const struct sk_buff *skb, unsigned char *haddr);
 
 extern ssize_t sysfs_format_mac(char *buf, const unsigned char *addr, int len);
index d026b19..d10ef06 100644
@@ -196,9 +196,11 @@ struct lock_list {
  * We record lock dependency chains, so that we can cache them:
  */
 struct lock_chain {
-       u8                              irq_context;
-       u8                              depth;
-       u16                             base;
+       /* see BUILD_BUG_ON()s in lookup_chain_cache() */
+       unsigned int                    irq_context :  2,
+                                       depth       :  6,
+                                       base        : 24;
+       /* 4 byte hole */
        struct hlist_node               entry;
        u64                             chain_key;
 };
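
Packing irq_context/depth/base into one 32-bit bitfield shrinks struct
lock_chain and widens base from 16 to 24 bits, allowing up to 2^24
chain_hlocks entries. The widths can be sanity-checked at build time; a
hypothetical sketch of such checks, assuming the usual lockdep limits:

        BUILD_BUG_ON(MAX_LOCKDEP_CHAIN_HLOCKS > (1UL << 24));   /* base  */
        BUILD_BUG_ON(MAX_LOCK_DEPTH > (1UL << 6));              /* depth */
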
index d1f904c..80dec87 100644
@@ -1058,7 +1058,7 @@ int mlx4_buf_alloc(struct mlx4_dev *dev, int size, int max_direct,
 void mlx4_buf_free(struct mlx4_dev *dev, int size, struct mlx4_buf *buf);
 static inline void *mlx4_buf_offset(struct mlx4_buf *buf, int offset)
 {
-       if (BITS_PER_LONG == 64 || buf->nbufs == 1)
+       if (buf->nbufs == 1)
                return buf->direct.buf + offset;
        else
                return buf->page_list[offset >> PAGE_SHIFT].buf +
@@ -1098,7 +1098,7 @@ int mlx4_db_alloc(struct mlx4_dev *dev, struct mlx4_db *db, int order,
 void mlx4_db_free(struct mlx4_dev *dev, struct mlx4_db *db);
 
 int mlx4_alloc_hwq_res(struct mlx4_dev *dev, struct mlx4_hwq_resources *wqres,
-                      int size, int max_direct);
+                      int size);
 void mlx4_free_hwq_res(struct mlx4_dev *mdev, struct mlx4_hwq_resources *wqres,
                       int size);
 
index 03f8d71..ee0d5a9 100644
@@ -59,6 +59,7 @@
 #define MLX5_FLD_SZ_BYTES(typ, fld) (__mlx5_bit_sz(typ, fld) / 8)
 #define MLX5_ST_SZ_BYTES(typ) (sizeof(struct mlx5_ifc_##typ##_bits) / 8)
 #define MLX5_ST_SZ_DW(typ) (sizeof(struct mlx5_ifc_##typ##_bits) / 32)
+#define MLX5_ST_SZ_QW(typ) (sizeof(struct mlx5_ifc_##typ##_bits) / 64)
 #define MLX5_UN_SZ_BYTES(typ) (sizeof(union mlx5_ifc_##typ##_bits) / 8)
 #define MLX5_UN_SZ_DW(typ) (sizeof(union mlx5_ifc_##typ##_bits) / 32)
 #define MLX5_BYTE_OFF(typ, fld) (__mlx5_bit_off(typ, fld) / 8)
@@ -392,6 +393,17 @@ enum {
        MLX5_CAP_OFF_CMDIF_CSUM         = 46,
 };
 
+enum {
+       /*
+        * Max wqe size for rdma read is 512 bytes, so this
+        * limits our max_sge_rd as the wqe needs to fit:
+        * - ctrl segment (16 bytes)
+        * - rdma segment (16 bytes)
+        * - scatter elements (16 bytes each)
+        */
+       MLX5_MAX_SGE_RD = (512 - 16 - 16) / 16
+};
+
 struct mlx5_inbox_hdr {
        __be16          opcode;
        u8              rsvd[4];
@@ -644,8 +656,9 @@ struct mlx5_err_cqe {
 };
 
 struct mlx5_cqe64 {
-       u8              rsvd0[2];
-       __be16          wqe_id;
+       u8              outer_l3_tunneled;
+       u8              rsvd0;
+       __be16          wqe_id;
        u8              lro_tcppsh_abort_dupack;
        u8              lro_min_ttl;
        __be16          lro_tcp_win;
@@ -658,7 +671,7 @@ struct mlx5_cqe64 {
        __be16          slid;
        __be32          flags_rqpn;
        u8              hds_ip_ext;
-       u8              l4_hdr_type_etc;
+       u8              l4_l3_hdr_type;
        __be16          vlan_info;
        __be32          srqn; /* [31:24]: lro_num_seg, [23:0]: srqn */
        __be32          imm_inval_pkey;
@@ -679,12 +692,22 @@ static inline int get_cqe_lro_tcppsh(struct mlx5_cqe64 *cqe)
 
 static inline u8 get_cqe_l4_hdr_type(struct mlx5_cqe64 *cqe)
 {
-       return (cqe->l4_hdr_type_etc >> 4) & 0x7;
+       return (cqe->l4_l3_hdr_type >> 4) & 0x7;
+}
+
+static inline u8 get_cqe_l3_hdr_type(struct mlx5_cqe64 *cqe)
+{
+       return (cqe->l4_l3_hdr_type >> 2) & 0x3;
+}
+
+static inline u8 cqe_is_tunneled(struct mlx5_cqe64 *cqe)
+{
+       return cqe->outer_l3_tunneled & 0x1;
 }
 
 static inline int cqe_has_vlan(struct mlx5_cqe64 *cqe)
 {
-       return !!(cqe->l4_hdr_type_etc & 0x1);
+       return !!(cqe->l4_l3_hdr_type & 0x1);
 }
 
 static inline u64 get_cqe_ts(struct mlx5_cqe64 *cqe)
@@ -1326,6 +1349,18 @@ enum mlx5_cap_type {
 #define MLX5_CAP_ESW_FLOWTABLE_FDB_MAX(mdev, cap) \
        MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, flow_table_properties_nic_esw_fdb.cap)
 
+#define MLX5_CAP_ESW_EGRESS_ACL(mdev, cap) \
+       MLX5_CAP_ESW_FLOWTABLE(mdev, flow_table_properties_esw_acl_egress.cap)
+
+#define MLX5_CAP_ESW_EGRESS_ACL_MAX(mdev, cap) \
+       MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, flow_table_properties_esw_acl_egress.cap)
+
+#define MLX5_CAP_ESW_INGRESS_ACL(mdev, cap) \
+       MLX5_CAP_ESW_FLOWTABLE(mdev, flow_table_properties_esw_acl_ingress.cap)
+
+#define MLX5_CAP_ESW_INGRESS_ACL_MAX(mdev, cap) \
+       MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, flow_table_properties_esw_acl_ingress.cap)
+
 #define MLX5_CAP_ESW(mdev, cap) \
        MLX5_GET(e_switch_cap, \
                 mdev->hca_caps_cur[MLX5_CAP_ESWITCH], cap)
@@ -1368,6 +1403,7 @@ enum {
        MLX5_ETHERNET_EXTENDED_COUNTERS_GROUP = 0x5,
        MLX5_PER_PRIORITY_COUNTERS_GROUP      = 0x10,
        MLX5_PER_TRAFFIC_CLASS_COUNTERS_GROUP = 0x11,
+       MLX5_PHYSICAL_LAYER_COUNTERS_GROUP    = 0x12,
        MLX5_INFINIBAND_PORT_COUNTERS_GROUP   = 0x20,
 };
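
For the MLX5_MAX_SGE_RD limit above, the arithmetic works out to
(512 - 16 - 16) / 16 = 30 scatter entries per RDMA read WQE.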
 
index dcd5ac8..9613143 100644
 #include <linux/mlx5/device.h>
 #include <linux/mlx5/doorbell.h>
 
+enum {
+       MLX5_RQ_BITMASK_VSD = 1 << 1,
+};
+
 enum {
        MLX5_BOARD_ID_LEN = 64,
        MLX5_MAX_NAME_LEN = 16,
@@ -112,9 +116,12 @@ enum {
        MLX5_REG_PMPE            = 0x5010,
        MLX5_REG_PELC            = 0x500e,
        MLX5_REG_PVLC            = 0x500f,
-       MLX5_REG_PMLP            = 0, /* TBD */
+       MLX5_REG_PCMR            = 0x5041,
+       MLX5_REG_PMLP            = 0x5002,
        MLX5_REG_NODE_DESC       = 0x6001,
        MLX5_REG_HOST_ENDIANNESS = 0x7004,
+       MLX5_REG_MCIA            = 0x9014,
+       MLX5_REG_MLCR            = 0x902b,
 };
 
 enum {
@@ -511,6 +518,8 @@ struct mlx5_priv {
        unsigned long           pci_dev_data;
        struct mlx5_flow_root_namespace *root_ns;
        struct mlx5_flow_root_namespace *fdb_root_ns;
+       struct mlx5_flow_root_namespace *esw_egress_root_ns;
+       struct mlx5_flow_root_namespace *esw_ingress_root_ns;
 };
 
 enum mlx5_device_state {
@@ -519,8 +528,9 @@ enum mlx5_device_state {
 };
 
 enum mlx5_interface_state {
-       MLX5_INTERFACE_STATE_DOWN,
-       MLX5_INTERFACE_STATE_UP,
+       MLX5_INTERFACE_STATE_DOWN = BIT(0),
+       MLX5_INTERFACE_STATE_UP = BIT(1),
+       MLX5_INTERFACE_STATE_SHUTDOWN = BIT(2),
 };
 
 enum mlx5_pci_status {
@@ -544,7 +554,7 @@ struct mlx5_core_dev {
        enum mlx5_device_state  state;
        /* sync interface state */
        struct mutex            intf_state_mutex;
-       enum mlx5_interface_state interface_state;
+       unsigned long           intf_state;
        void                    (*event) (struct mlx5_core_dev *dev,
                                          enum mlx5_dev_event event,
                                          unsigned long param);
@@ -552,6 +562,9 @@ struct mlx5_core_dev {
        struct mlx5_profile     *profile;
        atomic_t                num_qps;
        u32                     issi;
+#ifdef CONFIG_RFS_ACCEL
+       struct cpu_rmap         *rmap;
+#endif
 };
 
 struct mlx5_db {
index 8dec550..6467569 100644
@@ -58,6 +58,8 @@ enum mlx5_flow_namespace_type {
        MLX5_FLOW_NAMESPACE_LEFTOVERS,
        MLX5_FLOW_NAMESPACE_ANCHOR,
        MLX5_FLOW_NAMESPACE_FDB,
+       MLX5_FLOW_NAMESPACE_ESW_EGRESS,
+       MLX5_FLOW_NAMESPACE_ESW_INGRESS,
 };
 
 struct mlx5_flow_table;
@@ -82,12 +84,19 @@ struct mlx5_flow_table *
 mlx5_create_auto_grouped_flow_table(struct mlx5_flow_namespace *ns,
                                    int prio,
                                    int num_flow_table_entries,
-                                   int max_num_groups);
+                                   int max_num_groups,
+                                   u32 level);
 
 struct mlx5_flow_table *
 mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
                       int prio,
-                      int num_flow_table_entries);
+                      int num_flow_table_entries,
+                      u32 level);
+struct mlx5_flow_table *
+mlx5_create_vport_flow_table(struct mlx5_flow_namespace *ns,
+                            int prio,
+                            int num_flow_table_entries,
+                            u32 level, u16 vport);
 int mlx5_destroy_flow_table(struct mlx5_flow_table *ft);
 
 /* inbox should be set with the following values:
@@ -113,4 +122,7 @@ mlx5_add_flow_rule(struct mlx5_flow_table *ft,
                   struct mlx5_flow_destination *dest);
 void mlx5_del_flow_rule(struct mlx5_flow_rule *fr);
 
+int mlx5_modify_rule_destination(struct mlx5_flow_rule *rule,
+                                struct mlx5_flow_destination *dest);
+
 #endif
index a1d145a..9851862 100644
 
 #include <linux/mlx5/driver.h>
 
+enum mlx5_beacon_duration {
+       MLX5_BEACON_DURATION_OFF = 0x0,
+       MLX5_BEACON_DURATION_INF = 0xffff,
+};
+
+enum mlx5_module_id {
+       MLX5_MODULE_ID_SFP              = 0x3,
+       MLX5_MODULE_ID_QSFP             = 0xC,
+       MLX5_MODULE_ID_QSFP_PLUS        = 0xD,
+       MLX5_MODULE_ID_QSFP28           = 0x11,
+};
+
+#define MLX5_EEPROM_MAX_BYTES                  32
+#define MLX5_EEPROM_IDENTIFIER_BYTE_MASK       0x000000ff
+#define MLX5_I2C_ADDR_LOW              0x50
+#define MLX5_I2C_ADDR_HIGH             0x51
+#define MLX5_EEPROM_PAGE_LENGTH                256
+
 int mlx5_set_port_caps(struct mlx5_core_dev *dev, u8 port_num, u32 caps);
 int mlx5_query_port_ptys(struct mlx5_core_dev *dev, u32 *ptys,
                         int ptys_size, int proto_mask, u8 local_port);
@@ -53,10 +71,11 @@ int mlx5_set_port_admin_status(struct mlx5_core_dev *dev,
                               enum mlx5_port_status status);
 int mlx5_query_port_admin_status(struct mlx5_core_dev *dev,
                                 enum mlx5_port_status *status);
+int mlx5_set_port_beacon(struct mlx5_core_dev *dev, u16 beacon_duration);
 
-int mlx5_set_port_mtu(struct mlx5_core_dev *dev, int mtu, u8 port);
-void mlx5_query_port_max_mtu(struct mlx5_core_dev *dev, int *max_mtu, u8 port);
-void mlx5_query_port_oper_mtu(struct mlx5_core_dev *dev, int *oper_mtu,
+int mlx5_set_port_mtu(struct mlx5_core_dev *dev, u16 mtu, u8 port);
+void mlx5_query_port_max_mtu(struct mlx5_core_dev *dev, u16 *max_mtu, u8 port);
+void mlx5_query_port_oper_mtu(struct mlx5_core_dev *dev, u16 *oper_mtu,
                              u8 port);
 
 int mlx5_query_port_vl_hw_cap(struct mlx5_core_dev *dev,
@@ -84,4 +103,10 @@ int mlx5_query_port_ets_rate_limit(struct mlx5_core_dev *mdev,
 int mlx5_set_port_wol(struct mlx5_core_dev *mdev, u8 wol_mode);
 int mlx5_query_port_wol(struct mlx5_core_dev *mdev, u8 *wol_mode);
 
+int mlx5_set_port_fcs(struct mlx5_core_dev *mdev, u8 enable);
+void mlx5_query_port_fcs(struct mlx5_core_dev *mdev, bool *supported,
+                        bool *enabled);
+int mlx5_query_module_eeprom(struct mlx5_core_dev *dev,
+                            u16 offset, u16 size, u8 *data);
+
 #endif /* __MLX5_PORT_H__ */
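
mlx5_query_module_eeprom() reads at most MLX5_EEPROM_MAX_BYTES per call, so
callers loop over larger ranges. A hypothetical sketch reading the module
identifier, assuming a valid mlx5_core_dev and a negative errno on failure:

        u8 buf[MLX5_EEPROM_MAX_BYTES];
        int ret;

        /* byte 0 of page 0 identifies the module, e.g. MLX5_MODULE_ID_SFP */
        ret = mlx5_query_module_eeprom(dev, 0, 1, buf);
        if (ret < 0)
                return ret;
        pr_info("module id: 0x%02x\n", buf[0]);
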
index bd93e63..301da4a 100644
@@ -45,6 +45,8 @@ int mlx5_query_nic_vport_mac_address(struct mlx5_core_dev *mdev,
                                     u16 vport, u8 *addr);
 int mlx5_modify_nic_vport_mac_address(struct mlx5_core_dev *dev,
                                      u16 vport, u8 *addr);
+int mlx5_query_nic_vport_mtu(struct mlx5_core_dev *mdev, u16 *mtu);
+int mlx5_modify_nic_vport_mtu(struct mlx5_core_dev *mdev, u16 mtu);
 int mlx5_query_nic_vport_system_image_guid(struct mlx5_core_dev *mdev,
                                           u64 *system_image_guid);
 int mlx5_query_nic_vport_node_guid(struct mlx5_core_dev *mdev, u64 *node_guid);
index a55e5be..864d722 100644
@@ -1031,6 +1031,8 @@ static inline bool page_mapped(struct page *page)
        page = compound_head(page);
        if (atomic_read(compound_mapcount_ptr(page)) >= 0)
                return true;
+       if (PageHuge(page))
+               return false;
        for (i = 0; i < hpage_nr_pages(page); i++) {
                if (atomic_read(&page[i]._mapcount) >= 0)
                        return true;
@@ -1138,6 +1140,8 @@ struct zap_details {
 
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
                pte_t pte);
+struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
+                               pmd_t pmd);
 
 int zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
                unsigned long size);
index 72c1e06..9aa49a0 100644
@@ -245,7 +245,15 @@ do {                                                               \
        net_ratelimited_function(pr_warn, fmt, ##__VA_ARGS__)
 #define net_info_ratelimited(fmt, ...)                         \
        net_ratelimited_function(pr_info, fmt, ##__VA_ARGS__)
-#if defined(DEBUG)
+#if defined(CONFIG_DYNAMIC_DEBUG)
+#define net_dbg_ratelimited(fmt, ...)                                  \
+do {                                                                   \
+       DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, fmt);                 \
+       if (unlikely(descriptor.flags & _DPRINTK_FLAGS_PRINT) &&        \
+           net_ratelimit())                                            \
+               __dynamic_pr_debug(&descriptor, fmt, ##__VA_ARGS__);    \
+} while (0)
+#elif defined(DEBUG)
 #define net_dbg_ratelimited(fmt, ...)                          \
        net_ratelimited_function(pr_debug, fmt, ##__VA_ARGS__)
 #else
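
With dynamic debug enabled, the new net_dbg_ratelimited() consults
net_ratelimit() only once the debug site itself has been switched on, so
disabled callsites stay cheap and no longer consume the shared ratelimit
budget. Usage is unchanged, e.g.:

        net_dbg_ratelimited("%s: dropping packet from %pM\n", dev->name, addr);
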
index 1f6d5db..63580e6 100644
@@ -106,7 +106,6 @@ enum netdev_tx {
        __NETDEV_TX_MIN  = INT_MIN,     /* make sure enum is signed */
        NETDEV_TX_OK     = 0x00,        /* driver took care of packet */
        NETDEV_TX_BUSY   = 0x10,        /* driver tx path was busy */
-       NETDEV_TX_LOCKED = 0x20,        /* driver tx lock was already taken */
 };
 typedef enum netdev_tx netdev_tx_t;
 
@@ -570,28 +569,27 @@ struct netdev_queue {
 #if defined(CONFIG_XPS) && defined(CONFIG_NUMA)
        int                     numa_node;
 #endif
+       unsigned long           tx_maxrate;
+       /*
+        * Number of TX timeouts for this queue
+        * (/sys/class/net/DEV/Q/trans_timeout)
+        */
+       unsigned long           trans_timeout;
 /*
  * write-mostly part
  */
        spinlock_t              _xmit_lock ____cacheline_aligned_in_smp;
        int                     xmit_lock_owner;
        /*
-        * please use this field instead of dev->trans_start
+        * Time (in jiffies) of last Tx
         */
        unsigned long           trans_start;
 
-       /*
-        * Number of TX timeouts for this queue
-        * (/sys/class/net/DEV/Q/trans_timeout)
-        */
-       unsigned long           trans_timeout;
-
        unsigned long           state;
 
 #ifdef CONFIG_BQL
        struct dql              dql;
 #endif
-       unsigned long           tx_maxrate;
 } ____cacheline_aligned_in_smp;
 
 static inline int netdev_queue_numa_node_read(const struct netdev_queue *q)
@@ -831,7 +829,6 @@ struct tc_to_netdev {
  *     the queue before that can happen; it's for obsolete devices and weird
  *     corner cases, but the stack really does a non-trivial amount
  *     of useless work if you return NETDEV_TX_BUSY.
- *        (can also return NETDEV_TX_LOCKED iff NETIF_F_LLTX)
  *     Required; cannot be NULL.
  *
  * netdev_features_t (*ndo_fix_features)(struct net_device *dev,
@@ -1548,7 +1545,6 @@ enum netdev_priv_flags {
  *
  *     @offload_fwd_mark:      Offload device fwding mark
  *
- *     @trans_start:           Time (in jiffies) of last Tx
  *     @watchdog_timeo:        Represents the timeout that is used by
  *                             the watchdog (see dev_watchdog())
  *     @watchdog_timer:        List of timers
@@ -1797,13 +1793,6 @@ struct net_device {
 #endif
 
        /* These may be needed for future network-power-down code. */
-
-       /*
-        * trans_start here is expensive for high speed devices on SMP,
-        * please use netdev_queue->trans_start instead.
-        */
-       unsigned long           trans_start;
-
        struct timer_list       watchdog_timer;
 
        int __percpu            *pcpu_refcnt;
@@ -2737,7 +2726,6 @@ struct softnet_data {
        /* stats */
        unsigned int            processed;
        unsigned int            time_squeeze;
-       unsigned int            cpu_collision;
        unsigned int            received_rps;
 #ifdef CONFIG_RPS
        struct softnet_data     *rps_ipi_list;
@@ -2750,11 +2738,15 @@ struct softnet_data {
        struct sk_buff          *completion_queue;
 
 #ifdef CONFIG_RPS
-       /* Elements below can be accessed between CPUs for RPS */
+       /* input_queue_head should be written by cpu owning this struct,
+        * and only read by other cpus. Worth using a cache line.
+        */
+       unsigned int            input_queue_head ____cacheline_aligned_in_smp;
+
+       /* Elements below can be accessed between CPUs for RPS/RFS */
        struct call_single_data csd ____cacheline_aligned_in_smp;
        struct softnet_data     *rps_ipi_next;
        unsigned int            cpu;
-       unsigned int            input_queue_head;
        unsigned int            input_queue_tail;
 #endif
        unsigned int            dropped;
@@ -3263,7 +3255,8 @@ struct sk_buff *dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
                                    struct netdev_queue *txq, int *ret);
 int __dev_forward_skb(struct net_device *dev, struct sk_buff *skb);
 int dev_forward_skb(struct net_device *dev, struct sk_buff *skb);
-bool is_skb_forwardable(struct net_device *dev, struct sk_buff *skb);
+bool is_skb_forwardable(const struct net_device *dev,
+                       const struct sk_buff *skb);
 
 extern int             netdev_budget;
 
@@ -3480,6 +3473,15 @@ static inline void txq_trans_update(struct netdev_queue *txq)
                txq->trans_start = jiffies;
 }
 
+/* legacy drivers only, netdev_start_xmit() sets txq->trans_start */
+static inline void netif_trans_update(struct net_device *dev)
+{
+       struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);
+
+       if (txq->trans_start != jiffies)
+               txq->trans_start = jiffies;
+}
+
 /**
  *     netif_tx_lock - grab network device transmit lock
  *     @dev: network device
@@ -3991,7 +3993,7 @@ netdev_features_t netif_skb_features(struct sk_buff *skb);
 
 static inline bool net_gso_ok(netdev_features_t features, int gso_type)
 {
-       netdev_features_t feature = gso_type << NETIF_F_GSO_SHIFT;
+       netdev_features_t feature = (netdev_features_t)gso_type << NETIF_F_GSO_SHIFT;
 
        /* check flags correspondence */
        BUILD_BUG_ON(SKB_GSO_TCPV4   != (NETIF_F_TSO >> NETIF_F_GSO_SHIFT));
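
With trans_start removed from struct net_device, legacy single-queue drivers
that used to write dev->trans_start directly are expected to call the new
helper instead; a sketch of the conversion:

        /* before: dev->trans_start = jiffies; */
        netif_trans_update(dev);        /* updates trans_start of tx queue 0 */

Multiqueue drivers keep using txq_trans_update() on the queue they own.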
index 167342c..0f6f660 100644
@@ -92,6 +92,8 @@ enum {
        IEEE802154_ATTR_LLSEC_DEV_OVERRIDE,
        IEEE802154_ATTR_LLSEC_DEV_KEY_MODE,
 
+       IEEE802154_ATTR_PAD,
+
        __IEEE802154_ATTR_MAX,
 };
 
index 67e8c20..d72c832 100644
@@ -110,6 +110,7 @@ struct qed_link_params {
 #define QED_LINK_OVERRIDE_SPEED_ADV_SPEEDS      BIT(1)
 #define QED_LINK_OVERRIDE_SPEED_FORCED_SPEED    BIT(2)
 #define QED_LINK_OVERRIDE_PAUSE_CONFIG          BIT(3)
+#define QED_LINK_OVERRIDE_LOOPBACK_MODE         BIT(4)
        u32     override_flags;
        bool    autoneg;
        u32     adv_speeds;
@@ -118,6 +119,12 @@ struct qed_link_params {
 #define QED_LINK_PAUSE_RX_ENABLE                BIT(1)
 #define QED_LINK_PAUSE_TX_ENABLE                BIT(2)
        u32     pause_config;
+#define QED_LINK_LOOPBACK_NONE                  BIT(0)
+#define QED_LINK_LOOPBACK_INT_PHY               BIT(1)
+#define QED_LINK_LOOPBACK_EXT_PHY               BIT(2)
+#define QED_LINK_LOOPBACK_EXT                   BIT(3)
+#define QED_LINK_LOOPBACK_MAC                   BIT(4)
+       u32     loopback_mode;
 };
 
 struct qed_link_output {
@@ -158,7 +165,47 @@ struct qed_common_cb_ops {
                               struct qed_link_output   *link);
 };
 
+struct qed_selftest_ops {
+/**
+ * @brief selftest_interrupt - Perform interrupt test
+ *
+ * @param cdev
+ *
+ * @return 0 on success, error otherwise.
+ */
+       int (*selftest_interrupt)(struct qed_dev *cdev);
+
+/**
+ * @brief selftest_memory - Perform memory test
+ *
+ * @param cdev
+ *
+ * @return 0 on success, error otherwise.
+ */
+       int (*selftest_memory)(struct qed_dev *cdev);
+
+/**
+ * @brief selftest_register - Perform register test
+ *
+ * @param cdev
+ *
+ * @return 0 on success, error otherwise.
+ */
+       int (*selftest_register)(struct qed_dev *cdev);
+
+/**
+ * @brief selftest_clock - Perform clock test
+ *
+ * @param cdev
+ *
+ * @return 0 on success, error otherwise.
+ */
+       int (*selftest_clock)(struct qed_dev *cdev);
+};
+
 struct qed_common_ops {
+       struct qed_selftest_ops *selftest;
+
        struct qed_dev* (*probe)(struct pci_dev *dev,
                                 enum qed_protocol protocol,
                                 u32 dp_module, u8 dp_level);
@@ -211,6 +258,16 @@ struct qed_common_ops {
 
        void            (*simd_handler_clean)(struct qed_dev *cdev,
                                              int index);
+
+/**
+ * @brief can_link_change - can the instance change the link or not
+ *
+ * @param cdev
+ *
+ * @return true if link-change is allowed, false otherwise.
+ */
+       bool (*can_link_change)(struct qed_dev *cdev);
+
 /**
  * @brief set_link - set links according to params
  *
@@ -384,16 +441,16 @@ struct qed_eth_stats {
 
        /* port */
        u64     rx_64_byte_packets;
-       u64     rx_127_byte_packets;
-       u64     rx_255_byte_packets;
-       u64     rx_511_byte_packets;
-       u64     rx_1023_byte_packets;
-       u64     rx_1518_byte_packets;
-       u64     rx_1522_byte_packets;
-       u64     rx_2047_byte_packets;
-       u64     rx_4095_byte_packets;
-       u64     rx_9216_byte_packets;
-       u64     rx_16383_byte_packets;
+       u64     rx_65_to_127_byte_packets;
+       u64     rx_128_to_255_byte_packets;
+       u64     rx_256_to_511_byte_packets;
+       u64     rx_512_to_1023_byte_packets;
+       u64     rx_1024_to_1518_byte_packets;
+       u64     rx_1519_to_1522_byte_packets;
+       u64     rx_1519_to_2047_byte_packets;
+       u64     rx_2048_to_4095_byte_packets;
+       u64     rx_4096_to_9216_byte_packets;
+       u64     rx_9217_to_16383_byte_packets;
        u64     rx_crc_errors;
        u64     rx_mac_crtl_frames;
        u64     rx_pause_frames;
index da0ace3..c413c58 100644
@@ -382,14 +382,10 @@ enum {
 
        /* generate software time stamp when entering packet scheduling */
        SKBTX_SCHED_TSTAMP = 1 << 6,
-
-       /* generate software timestamp on peer data acknowledgment */
-       SKBTX_ACK_TSTAMP = 1 << 7,
 };
 
 #define SKBTX_ANY_SW_TSTAMP    (SKBTX_SW_TSTAMP    | \
-                                SKBTX_SCHED_TSTAMP | \
-                                SKBTX_ACK_TSTAMP)
+                                SKBTX_SCHED_TSTAMP)
 #define SKBTX_ANY_TSTAMP       (SKBTX_HW_TSTAMP | SKBTX_ANY_SW_TSTAMP)
 
 /*
@@ -1329,6 +1325,16 @@ static inline int skb_header_cloned(const struct sk_buff *skb)
        return dataref != 1;
 }
 
+static inline int skb_header_unclone(struct sk_buff *skb, gfp_t pri)
+{
+       might_sleep_if(gfpflags_allow_blocking(pri));
+
+       if (skb_header_cloned(skb))
+               return pskb_expand_head(skb, 0, 0, pri);
+
+       return 0;
+}
+
 /**
  *     skb_header_release - release reference to header
  *     @skb: buffer to operate on
@@ -2986,6 +2992,8 @@ struct sk_buff *skb_vlan_untag(struct sk_buff *skb);
 int skb_ensure_writable(struct sk_buff *skb, int write_len);
 int skb_vlan_pop(struct sk_buff *skb);
 int skb_vlan_push(struct sk_buff *skb, __be16 vlan_proto, u16 vlan_tci);
+struct sk_buff *pskb_extract(struct sk_buff *skb, int off, int to_copy,
+                            gfp_t gfp);
 
 static inline int memcpy_from_msg(void *data, struct msghdr *msg, int len)
 {
index d0cb6d1..46a984f 100644
@@ -45,13 +45,39 @@ struct qcom_smd_driver {
        int (*callback)(struct qcom_smd_device *, const void *, size_t);
 };
 
+#if IS_ENABLED(CONFIG_QCOM_SMD)
+
 int qcom_smd_driver_register(struct qcom_smd_driver *drv);
 void qcom_smd_driver_unregister(struct qcom_smd_driver *drv);
 
+int qcom_smd_send(struct qcom_smd_channel *channel, const void *data, int len);
+
+#else
+
+static inline int qcom_smd_driver_register(struct qcom_smd_driver *drv)
+{
+       return -ENXIO;
+}
+
+static inline void qcom_smd_driver_unregister(struct qcom_smd_driver *drv)
+{
+       /* This shouldn't be possible */
+       WARN_ON(1);
+}
+
+static inline int qcom_smd_send(struct qcom_smd_channel *channel,
+                               const void *data, int len)
+{
+       /* This shouldn't be possible */
+       WARN_ON(1);
+       return -ENXIO;
+}
+
+#endif
+
 #define module_qcom_smd_driver(__smd_driver) \
        module_driver(__smd_driver, qcom_smd_driver_register, \
                      qcom_smd_driver_unregister)
 
-int qcom_smd_send(struct qcom_smd_channel *channel, const void *data, int len);
 
 #endif
index 73bf6c6..b5cc5a6 100644
@@ -201,8 +201,9 @@ struct ucred {
 #define AF_NFC         39      /* NFC sockets                  */
 #define AF_VSOCK       40      /* vSockets                     */
 #define AF_KCM         41      /* Kernel Connection Multiplexor*/
+#define AF_QIPCRTR     42      /* Qualcomm IPC Router          */
 
-#define AF_MAX         42      /* For now.. */
+#define AF_MAX         43      /* For now.. */
 
 /* Protocol families, same as address families. */
 #define PF_UNSPEC      AF_UNSPEC
@@ -249,6 +250,7 @@ struct ucred {
 #define PF_NFC         AF_NFC
 #define PF_VSOCK       AF_VSOCK
 #define PF_KCM         AF_KCM
+#define PF_QIPCRTR     AF_QIPCRTR
 #define PF_MAX         AF_MAX
 
 /* Maximum queue length specifiable by listen.  */
index a55d052..1b8a5a7 100644
@@ -352,8 +352,8 @@ struct thermal_zone_of_device_ops {
 
 struct thermal_trip {
        struct device_node *np;
-       unsigned long int temperature;
-       unsigned long int hysteresis;
+       int temperature;
+       int hysteresis;
        enum thermal_trip_type type;
 };
 
index 1610524..b742b5e 100644
@@ -7,7 +7,7 @@
  * defined; unless noted otherwise, they are optional, and can be
  * filled in with a null pointer.
  *
- * struct tty_struct * (*lookup)(struct tty_driver *self, int idx)
+ * struct tty_struct * (*lookup)(struct tty_driver *self, struct file *, int idx)
  *
  *     Return the tty device corresponding to idx, NULL if there is not
  *     one currently in use and an ERR_PTR value on error. Called under
@@ -250,7 +250,7 @@ struct serial_icounter_struct;
 
 struct tty_operations {
        struct tty_struct * (*lookup)(struct tty_driver *driver,
-                       struct inode *inode, int idx);
+                       struct file *filp, int idx);
        int  (*install)(struct tty_driver *driver, struct tty_struct *tty);
        void (*remove)(struct tty_driver *driver, struct tty_struct *tty);
        int  (*open)(struct tty_struct * tty, struct file * filp);
index df89c9b..d3a2bb7 100644
@@ -89,6 +89,20 @@ static inline void u64_stats_update_end(struct u64_stats_sync *syncp)
 #endif
 }
 
+static inline void u64_stats_update_begin_raw(struct u64_stats_sync *syncp)
+{
+#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
+       raw_write_seqcount_begin(&syncp->seq);
+#endif
+}
+
+static inline void u64_stats_update_end_raw(struct u64_stats_sync *syncp)
+{
+#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
+       raw_write_seqcount_end(&syncp->seq);
+#endif
+}
+
 static inline unsigned int u64_stats_fetch_begin(const struct u64_stats_sync *syncp)
 {
 #if BITS_PER_LONG==32 && defined(CONFIG_SMP)
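
The new _raw variants take the sequence counter with raw_write_seqcount_begin(),
bypassing the lockdep bookkeeping of the plain helpers, for callers that
already provide their own synchronization. The usual (non-raw) pattern, shown
as a sketch around hypothetical counters:

        /* writer side (seqcount only matters on 32-bit SMP) */
        u64_stats_update_begin(&stats->syncp);
        stats->packets++;
        stats->bytes += len;
        u64_stats_update_end(&stats->syncp);

        /* reader side */
        unsigned int start;
        u64 packets;

        do {
                start = u64_stats_fetch_begin(&stats->syncp);
                packets = stats->packets;
        } while (u64_stats_fetch_retry(&stats->syncp, start));
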
index 8a0f55b..88e3ab4 100644
@@ -375,6 +375,9 @@ struct vb2_ops {
 /**
  * struct vb2_ops - driver-specific callbacks
  *
+ * @verify_planes_array: Verify that a given user space structure contains
+ *                     enough planes for the buffer. This is called
+ *                     for each dequeued buffer.
  * @fill_user_buffer:  given a vb2_buffer fill in the userspace structure.
  *                     For V4L2 this is a struct v4l2_buffer.
  * @fill_vb2_buffer:   given a userspace structure, fill in the vb2_buffer.
@@ -384,6 +387,7 @@ struct vb2_ops {
  *                     the vb2_buffer struct.
  */
 struct vb2_buf_ops {
+       int (*verify_planes_array)(struct vb2_buffer *vb, const void *pb);
        void (*fill_user_buffer)(struct vb2_buffer *vb, void *pb);
        int (*fill_vb2_buffer)(struct vb2_buffer *vb, const void *pb,
                                struct vb2_plane *planes);
@@ -400,6 +404,9 @@ struct vb2_buf_ops {
  * @fileio_read_once:          report EOF after reading the first buffer
  * @fileio_write_immediately:  queue buffer after each write() call
  * @allow_zero_bytesused:      allow bytesused == 0 to be passed to the driver
+ * @quirk_poll_must_check_waiting_for_buffers: Return POLLERR at poll when QBUF
+ *              has not been called. This is a vb1 idiom that has also been
+ *              adopted by vb2.
  * @lock:      pointer to a mutex that protects the vb2_queue struct. The
  *             driver can set this to a mutex to let the v4l2 core serialize
  *             the queuing ioctls. If the driver wants to handle locking
@@ -463,6 +470,7 @@ struct vb2_queue {
        unsigned                        fileio_read_once:1;
        unsigned                        fileio_write_immediately:1;
        unsigned                        allow_zero_bytesused:1;
+       unsigned                   quirk_poll_must_check_waiting_for_buffers:1;
 
        struct mutex                    *lock;
        void                            *owner;
index da3a77d..da84cf9 100644
@@ -58,6 +58,9 @@
 #include <net/ipv6.h>
 #include <net/net_namespace.h>
 
+/* special link-layer handling */
+#include <net/mac802154.h>
+
 #define EUI64_ADDR_LEN         8
 
 #define LOWPAN_NHC_MAX_ID_LEN  1
@@ -93,7 +96,7 @@ static inline bool lowpan_is_iphc(u8 dispatch)
 }
 
 #define LOWPAN_PRIV_SIZE(llpriv_size)  \
-       (sizeof(struct lowpan_priv) + llpriv_size)
+       (sizeof(struct lowpan_dev) + llpriv_size)
 
 enum lowpan_lltypes {
        LOWPAN_LLTYPE_BTLE,
@@ -129,7 +132,7 @@ lowpan_iphc_ctx_is_compression(const struct lowpan_iphc_ctx *ctx)
        return test_bit(LOWPAN_IPHC_CTX_FLAG_COMPRESSION, &ctx->flags);
 }
 
-struct lowpan_priv {
+struct lowpan_dev {
        enum lowpan_lltypes lltype;
        struct dentry *iface_debugfs;
        struct lowpan_iphc_ctx_table ctx;
@@ -139,11 +142,23 @@ struct lowpan_priv {
 };
 
 static inline
-struct lowpan_priv *lowpan_priv(const struct net_device *dev)
+struct lowpan_dev *lowpan_dev(const struct net_device *dev)
 {
        return netdev_priv(dev);
 }
 
+/* private device info */
+struct lowpan_802154_dev {
+       struct net_device       *wdev; /* wpan device ptr */
+       u16                     fragment_tag;
+};
+
+static inline struct
+lowpan_802154_dev *lowpan_802154_dev(const struct net_device *dev)
+{
+       return (struct lowpan_802154_dev *)lowpan_dev(dev)->priv;
+}
+
 struct lowpan_802154_cb {
        u16 d_tag;
        unsigned int d_size;
@@ -157,6 +172,22 @@ struct lowpan_802154_cb *lowpan_802154_cb(const struct sk_buff *skb)
        return (struct lowpan_802154_cb *)skb->cb;
 }
 
+static inline void lowpan_iphc_uncompress_eui64_lladdr(struct in6_addr *ipaddr,
+                                                      const void *lladdr)
+{
+       /* fe:80::XXXX:XXXX:XXXX:XXXX
+        *        \_________________/
+        *              hwaddr
+        */
+       ipaddr->s6_addr[0] = 0xFE;
+       ipaddr->s6_addr[1] = 0x80;
+       memcpy(&ipaddr->s6_addr[8], lladdr, EUI64_ADDR_LEN);
+       /* second bit-flip (Universal/Local)
+        * is done according to RFC 2464
+        */
+       ipaddr->s6_addr[8] ^= 0x02;
+}
+
 #ifdef DEBUG
 /* print data in line */
 static inline void raw_dump_inline(const char *caller, char *msg,
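
lowpan_iphc_uncompress_eui64_lladdr() builds the link-local address directly
from the EUI-64, flipping the Universal/Local bit per RFC 2464/4291. A worked
example with a hypothetical hardware address:

        struct in6_addr ip;
        const u8 eui64[EUI64_ADDR_LEN] = {
                0x00, 0x11, 0x22, 0xff, 0xfe, 0x33, 0x44, 0x55
        };

        lowpan_iphc_uncompress_eui64_lladdr(&ip, eui64);
        /* byte 8: 0x00 ^ 0x02 = 0x02, so ip is fe80::211:22ff:fe33:4455 */
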
index 5d38d98..eefcf3e 100644
@@ -61,6 +61,8 @@
 #define HCI_RS232      4
 #define HCI_PCI                5
 #define HCI_SDIO       6
+#define HCI_SPI                7
+#define HCI_I2C                8
 
 /* HCI controller types */
 #define HCI_BREDR      0x00
index d168aca..a6e428f 100644
@@ -87,27 +87,6 @@ static inline codel_time_t codel_get_time(void)
         ((s32)((a) - (b)) >= 0))
 #define codel_time_before_eq(a, b)     codel_time_after_eq(b, a)
 
-/* Qdiscs using codel plugin must use codel_skb_cb in their own cb[] */
-struct codel_skb_cb {
-       codel_time_t enqueue_time;
-};
-
-static struct codel_skb_cb *get_codel_cb(const struct sk_buff *skb)
-{
-       qdisc_cb_private_validate(skb, sizeof(struct codel_skb_cb));
-       return (struct codel_skb_cb *)qdisc_skb_cb(skb)->data;
-}
-
-static codel_time_t codel_get_enqueue_time(const struct sk_buff *skb)
-{
-       return get_codel_cb(skb)->enqueue_time;
-}
-
-static void codel_set_enqueue_time(struct sk_buff *skb)
-{
-       get_codel_cb(skb)->enqueue_time = codel_get_time();
-}
-
 static inline u32 codel_time_to_us(codel_time_t val)
 {
        u64 valns = ((u64)val << CODEL_SHIFT);
@@ -176,198 +155,10 @@ struct codel_stats {
 
 #define CODEL_DISABLED_THRESHOLD INT_MAX
 
-static void codel_params_init(struct codel_params *params,
-                             const struct Qdisc *sch)
-{
-       params->interval = MS2TIME(100);
-       params->target = MS2TIME(5);
-       params->mtu = psched_mtu(qdisc_dev(sch));
-       params->ce_threshold = CODEL_DISABLED_THRESHOLD;
-       params->ecn = false;
-}
-
-static void codel_vars_init(struct codel_vars *vars)
-{
-       memset(vars, 0, sizeof(*vars));
-}
-
-static void codel_stats_init(struct codel_stats *stats)
-{
-       stats->maxpacket = 0;
-}
-
-/*
- * http://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Iterative_methods_for_reciprocal_square_roots
- * new_invsqrt = (invsqrt / 2) * (3 - count * invsqrt^2)
- *
- * Here, invsqrt is a fixed point number (< 1.0), 32bit mantissa, aka Q0.32
- */
-static void codel_Newton_step(struct codel_vars *vars)
-{
-       u32 invsqrt = ((u32)vars->rec_inv_sqrt) << REC_INV_SQRT_SHIFT;
-       u32 invsqrt2 = ((u64)invsqrt * invsqrt) >> 32;
-       u64 val = (3LL << 32) - ((u64)vars->count * invsqrt2);
-
-       val >>= 2; /* avoid overflow in following multiply */
-       val = (val * invsqrt) >> (32 - 2 + 1);
-
-       vars->rec_inv_sqrt = val >> REC_INV_SQRT_SHIFT;
-}
-
-/*
- * CoDel control_law is t + interval/sqrt(count)
- * We maintain in rec_inv_sqrt the reciprocal value of sqrt(count) to avoid
- * both sqrt() and divide operation.
- */
-static codel_time_t codel_control_law(codel_time_t t,
-                                     codel_time_t interval,
-                                     u32 rec_inv_sqrt)
-{
-       return t + reciprocal_scale(interval, rec_inv_sqrt << REC_INV_SQRT_SHIFT);
-}
-
-static bool codel_should_drop(const struct sk_buff *skb,
-                             struct Qdisc *sch,
-                             struct codel_vars *vars,
-                             struct codel_params *params,
-                             struct codel_stats *stats,
-                             codel_time_t now)
-{
-       bool ok_to_drop;
-
-       if (!skb) {
-               vars->first_above_time = 0;
-               return false;
-       }
-
-       vars->ldelay = now - codel_get_enqueue_time(skb);
-       sch->qstats.backlog -= qdisc_pkt_len(skb);
-
-       if (unlikely(qdisc_pkt_len(skb) > stats->maxpacket))
-               stats->maxpacket = qdisc_pkt_len(skb);
-
-       if (codel_time_before(vars->ldelay, params->target) ||
-           sch->qstats.backlog <= params->mtu) {
-               /* went below - stay below for at least interval */
-               vars->first_above_time = 0;
-               return false;
-       }
-       ok_to_drop = false;
-       if (vars->first_above_time == 0) {
-               /* just went above from below. If we stay above
-                * for at least interval we'll say it's ok to drop
-                */
-               vars->first_above_time = now + params->interval;
-       } else if (codel_time_after(now, vars->first_above_time)) {
-               ok_to_drop = true;
-       }
-       return ok_to_drop;
-}
-
+typedef u32 (*codel_skb_len_t)(const struct sk_buff *skb);
+typedef codel_time_t (*codel_skb_time_t)(const struct sk_buff *skb);
+typedef void (*codel_skb_drop_t)(struct sk_buff *skb, void *ctx);
 typedef struct sk_buff * (*codel_skb_dequeue_t)(struct codel_vars *vars,
-                                               struct Qdisc *sch);
+                                               void *ctx);
 
-static struct sk_buff *codel_dequeue(struct Qdisc *sch,
-                                    struct codel_params *params,
-                                    struct codel_vars *vars,
-                                    struct codel_stats *stats,
-                                    codel_skb_dequeue_t dequeue_func)
-{
-       struct sk_buff *skb = dequeue_func(vars, sch);
-       codel_time_t now;
-       bool drop;
-
-       if (!skb) {
-               vars->dropping = false;
-               return skb;
-       }
-       now = codel_get_time();
-       drop = codel_should_drop(skb, sch, vars, params, stats, now);
-       if (vars->dropping) {
-               if (!drop) {
-                       /* sojourn time below target - leave dropping state */
-                       vars->dropping = false;
-               } else if (codel_time_after_eq(now, vars->drop_next)) {
-                       /* It's time for the next drop. Drop the current
-                        * packet and dequeue the next. The dequeue might
-                        * take us out of dropping state.
-                        * If not, schedule the next drop.
-                        * A large backlog might result in drop rates so high
-                        * that the next drop should happen now,
-                        * hence the while loop.
-                        */
-                       while (vars->dropping &&
-                              codel_time_after_eq(now, vars->drop_next)) {
-                               vars->count++; /* dont care of possible wrap
-                                               * since there is no more divide
-                                               */
-                               codel_Newton_step(vars);
-                               if (params->ecn && INET_ECN_set_ce(skb)) {
-                                       stats->ecn_mark++;
-                                       vars->drop_next =
-                                               codel_control_law(vars->drop_next,
-                                                                 params->interval,
-                                                                 vars->rec_inv_sqrt);
-                                       goto end;
-                               }
-                               stats->drop_len += qdisc_pkt_len(skb);
-                               qdisc_drop(skb, sch);
-                               stats->drop_count++;
-                               skb = dequeue_func(vars, sch);
-                               if (!codel_should_drop(skb, sch,
-                                                      vars, params, stats, now)) {
-                                       /* leave dropping state */
-                                       vars->dropping = false;
-                               } else {
-                                       /* and schedule the next drop */
-                                       vars->drop_next =
-                                               codel_control_law(vars->drop_next,
-                                                                 params->interval,
-                                                                 vars->rec_inv_sqrt);
-                               }
-                       }
-               }
-       } else if (drop) {
-               u32 delta;
-
-               if (params->ecn && INET_ECN_set_ce(skb)) {
-                       stats->ecn_mark++;
-               } else {
-                       stats->drop_len += qdisc_pkt_len(skb);
-                       qdisc_drop(skb, sch);
-                       stats->drop_count++;
-
-                       skb = dequeue_func(vars, sch);
-                       drop = codel_should_drop(skb, sch, vars, params,
-                                                stats, now);
-               }
-               vars->dropping = true;
-               /* if min went above target close to when we last went below it
-                * assume that the drop rate that controlled the queue on the
-                * last cycle is a good starting point to control it now.
-                */
-               delta = vars->count - vars->lastcount;
-               if (delta > 1 &&
-                   codel_time_before(now - vars->drop_next,
-                                     16 * params->interval)) {
-                       vars->count = delta;
-                       /* we dont care if rec_inv_sqrt approximation
-                        * is not very precise :
-                        * Next Newton steps will correct it quadratically.
-                        */
-                       codel_Newton_step(vars);
-               } else {
-                       vars->count = 1;
-                       vars->rec_inv_sqrt = ~0U >> REC_INV_SQRT_SHIFT;
-               }
-               vars->lastcount = vars->count;
-               vars->drop_next = codel_control_law(now, params->interval,
-                                                   vars->rec_inv_sqrt);
-       }
-end:
-       if (skb && codel_time_after(vars->ldelay, params->ce_threshold) &&
-           INET_ECN_set_ce(skb))
-               stats->ce_mark++;
-       return skb;
-}
 #endif
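
The inline qdisc-only helpers move out of codel.h (into the new codel_impl.h
below), and the callbacks drop their struct Qdisc argument in favour of an
opaque void *ctx, presumably so non-qdisc users such as the wireless stack can
reuse the CoDel logic. A hypothetical non-qdisc backend then only needs to
supply callbacks along these lines:

        static struct sk_buff *my_dequeue(struct codel_vars *vars, void *ctx)
        {
                struct my_queue *q = ctx;       /* hypothetical backend state */

                return skb_dequeue(&q->skbs);
        }
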
diff --git a/include/net/codel_impl.h b/include/net/codel_impl.h
new file mode 100644 (file)
index 0000000..d289b91
--- /dev/null
@@ -0,0 +1,255 @@
+#ifndef __NET_SCHED_CODEL_IMPL_H
+#define __NET_SCHED_CODEL_IMPL_H
+
+/*
+ * Codel - The Controlled-Delay Active Queue Management algorithm
+ *
+ *  Copyright (C) 2011-2012 Kathleen Nichols <nichols@pollere.com>
+ *  Copyright (C) 2011-2012 Van Jacobson <van@pollere.net>
+ *  Copyright (C) 2012 Michael D. Taht <dave.taht@bufferbloat.net>
+ *  Copyright (C) 2012,2015 Eric Dumazet <edumazet@google.com>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the authors may not be used to endorse or promote products
+ *    derived from this software without specific prior written permission.
+ *
+ * Alternatively, provided that this notice is retained in full, this
+ * software may be distributed under the terms of the GNU General
+ * Public License ("GPL") version 2, in which case the provisions of the
+ * GPL apply INSTEAD OF those given above.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
+ * DAMAGE.
+ *
+ */
+
+/* Controlling Queue Delay (CoDel) algorithm
+ * =========================================
+ * Source: Kathleen Nichols and Van Jacobson
+ * http://queue.acm.org/detail.cfm?id=2209336
+ *
+ * Implemented on Linux by Dave Taht and Eric Dumazet
+ */
+
+static void codel_params_init(struct codel_params *params)
+{
+       params->interval = MS2TIME(100);
+       params->target = MS2TIME(5);
+       params->ce_threshold = CODEL_DISABLED_THRESHOLD;
+       params->ecn = false;
+}
+
+static void codel_vars_init(struct codel_vars *vars)
+{
+       memset(vars, 0, sizeof(*vars));
+}
+
+static void codel_stats_init(struct codel_stats *stats)
+{
+       stats->maxpacket = 0;
+}
+
+/*
+ * http://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Iterative_methods_for_reciprocal_square_roots
+ * new_invsqrt = (invsqrt / 2) * (3 - count * invsqrt^2)
+ *
+ * Here, invsqrt is a fixed point number (< 1.0), 32bit mantissa, aka Q0.32
+ */
+static void codel_Newton_step(struct codel_vars *vars)
+{
+       u32 invsqrt = ((u32)vars->rec_inv_sqrt) << REC_INV_SQRT_SHIFT;
+       u32 invsqrt2 = ((u64)invsqrt * invsqrt) >> 32;
+       u64 val = (3LL << 32) - ((u64)vars->count * invsqrt2);
+
+       val >>= 2; /* avoid overflow in following multiply */
+       val = (val * invsqrt) >> (32 - 2 + 1);
+
+       vars->rec_inv_sqrt = val >> REC_INV_SQRT_SHIFT;
+}
+
+/*
+ * CoDel control_law is t + interval/sqrt(count)
+ * We maintain in rec_inv_sqrt the reciprocal value of sqrt(count) to avoid
+ * both sqrt() and divide operation.
+ */
+static codel_time_t codel_control_law(codel_time_t t,
+                                     codel_time_t interval,
+                                     u32 rec_inv_sqrt)
+{
+       return t + reciprocal_scale(interval, rec_inv_sqrt << REC_INV_SQRT_SHIFT);
+}
+
+static bool codel_should_drop(const struct sk_buff *skb,
+                             void *ctx,
+                             struct codel_vars *vars,
+                             struct codel_params *params,
+                             struct codel_stats *stats,
+                             codel_skb_len_t skb_len_func,
+                             codel_skb_time_t skb_time_func,
+                             u32 *backlog,
+                             codel_time_t now)
+{
+       bool ok_to_drop;
+       u32 skb_len;
+
+       if (!skb) {
+               vars->first_above_time = 0;
+               return false;
+       }
+
+       skb_len = skb_len_func(skb);
+       vars->ldelay = now - skb_time_func(skb);
+
+       if (unlikely(skb_len > stats->maxpacket))
+               stats->maxpacket = skb_len;
+
+       if (codel_time_before(vars->ldelay, params->target) ||
+           *backlog <= params->mtu) {
+               /* went below - stay below for at least interval */
+               vars->first_above_time = 0;
+               return false;
+       }
+       ok_to_drop = false;
+       if (vars->first_above_time == 0) {
+               /* just went above from below. If we stay above
+                * for at least interval we'll say it's ok to drop
+                */
+               vars->first_above_time = now + params->interval;
+       } else if (codel_time_after(now, vars->first_above_time)) {
+               ok_to_drop = true;
+       }
+       return ok_to_drop;
+}
+
+static struct sk_buff *codel_dequeue(void *ctx,
+                                    u32 *backlog,
+                                    struct codel_params *params,
+                                    struct codel_vars *vars,
+                                    struct codel_stats *stats,
+                                    codel_skb_len_t skb_len_func,
+                                    codel_skb_time_t skb_time_func,
+                                    codel_skb_drop_t drop_func,
+                                    codel_skb_dequeue_t dequeue_func)
+{
+       struct sk_buff *skb = dequeue_func(vars, ctx);
+       codel_time_t now;
+       bool drop;
+
+       if (!skb) {
+               vars->dropping = false;
+               return skb;
+       }
+       now = codel_get_time();
+       drop = codel_should_drop(skb, ctx, vars, params, stats,
+                                skb_len_func, skb_time_func, backlog, now);
+       if (vars->dropping) {
+               if (!drop) {
+                       /* sojourn time below target - leave dropping state */
+                       vars->dropping = false;
+               } else if (codel_time_after_eq(now, vars->drop_next)) {
+                       /* It's time for the next drop. Drop the current
+                        * packet and dequeue the next. The dequeue might
+                        * take us out of dropping state.
+                        * If not, schedule the next drop.
+                        * A large backlog might result in drop rates so high
+                        * that the next drop should happen now,
+                        * hence the while loop.
+                        */
+                       while (vars->dropping &&
+                              codel_time_after_eq(now, vars->drop_next)) {
+                               vars->count++; /* don't care about possible wrap
+                                               * since there is no more divide
+                                               */
+                               codel_Newton_step(vars);
+                               if (params->ecn && INET_ECN_set_ce(skb)) {
+                                       stats->ecn_mark++;
+                                       vars->drop_next =
+                                               codel_control_law(vars->drop_next,
+                                                                 params->interval,
+                                                                 vars->rec_inv_sqrt);
+                                       goto end;
+                               }
+                               stats->drop_len += skb_len_func(skb);
+                               drop_func(skb, ctx);
+                               stats->drop_count++;
+                               skb = dequeue_func(vars, ctx);
+                               if (!codel_should_drop(skb, ctx,
+                                                      vars, params, stats,
+                                                      skb_len_func,
+                                                      skb_time_func,
+                                                      backlog, now)) {
+                                       /* leave dropping state */
+                                       vars->dropping = false;
+                               } else {
+                                       /* and schedule the next drop */
+                                       vars->drop_next =
+                                               codel_control_law(vars->drop_next,
+                                                                 params->interval,
+                                                                 vars->rec_inv_sqrt);
+                               }
+                       }
+               }
+       } else if (drop) {
+               u32 delta;
+
+               if (params->ecn && INET_ECN_set_ce(skb)) {
+                       stats->ecn_mark++;
+               } else {
+                       stats->drop_len += skb_len_func(skb);
+                       drop_func(skb, ctx);
+                       stats->drop_count++;
+
+                       skb = dequeue_func(vars, ctx);
+                       drop = codel_should_drop(skb, ctx, vars, params,
+                                                stats, skb_len_func,
+                                                skb_time_func, backlog, now);
+               }
+               vars->dropping = true;
+               /* if min went above target close to when we last went below it
+                * assume that the drop rate that controlled the queue on the
+                * last cycle is a good starting point to control it now.
+                */
+               delta = vars->count - vars->lastcount;
+               if (delta > 1 &&
+                   codel_time_before(now - vars->drop_next,
+                                     16 * params->interval)) {
+                       vars->count = delta;
+                       /* we don't care if the rec_inv_sqrt approximation
+                        * is not very precise: the next Newton steps
+                        * will correct it quadratically.
+                        */
+                       codel_Newton_step(vars);
+               } else {
+                       vars->count = 1;
+                       vars->rec_inv_sqrt = ~0U >> REC_INV_SQRT_SHIFT;
+               }
+               vars->lastcount = vars->count;
+               vars->drop_next = codel_control_law(now, params->interval,
+                                                   vars->rec_inv_sqrt);
+       }
+end:
+       if (skb && codel_time_after(vars->ldelay, params->ce_threshold) &&
+           INET_ECN_set_ce(skb))
+               stats->ce_mark++;
+       return skb;
+}
+
+#endif
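
The Q0.32 Newton iteration above can be exercised in isolation. Below is a
standalone userspace sketch (not part of this patch) that replicates
codel_Newton_step() with REC_INV_SQRT_SHIFT = 16, matching the 16-bit
rec_inv_sqrt field, and compares the fixed-point estimate against the exact
1/sqrt(count):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define REC_INV_SQRT_SHIFT 16   /* rec_inv_sqrt is stored in 16 bits */

static uint16_t newton_step(uint16_t rec_inv_sqrt, unsigned int count)
{
        uint32_t invsqrt = (uint32_t)rec_inv_sqrt << REC_INV_SQRT_SHIFT;
        uint32_t invsqrt2 = ((uint64_t)invsqrt * invsqrt) >> 32;
        uint64_t val = (3ULL << 32) - ((uint64_t)count * invsqrt2);

        val >>= 2; /* avoid overflow in following multiply */
        val = (val * invsqrt) >> (32 - 2 + 1);
        return val >> REC_INV_SQRT_SHIFT;
}

int main(void)
{
        uint16_t rec_inv_sqrt = ~0U >> REC_INV_SQRT_SHIFT; /* ~1.0 in Q0.16 */
        unsigned int count;

        for (count = 1; count <= 16; count++) {
                rec_inv_sqrt = newton_step(rec_inv_sqrt, count);
                printf("count=%2u approx=%.6f exact=%.6f\n", count,
                       rec_inv_sqrt / 65536.0, 1.0 / sqrt(count));
        }
        return 0;
}

One step per increment of count tracks the exact value closely, since each
Newton step converges quadratically, exactly as the comment above notes.
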
diff --git a/include/net/codel_qdisc.h b/include/net/codel_qdisc.h
new file mode 100644 (file)
index 0000000..8144d9c
--- /dev/null
@@ -0,0 +1,73 @@
+#ifndef __NET_SCHED_CODEL_QDISC_H
+#define __NET_SCHED_CODEL_QDISC_H
+
+/*
+ * Codel - The Controlled-Delay Active Queue Management algorithm
+ *
+ *  Copyright (C) 2011-2012 Kathleen Nichols <nichols@pollere.com>
+ *  Copyright (C) 2011-2012 Van Jacobson <van@pollere.net>
+ *  Copyright (C) 2012 Michael D. Taht <dave.taht@bufferbloat.net>
+ *  Copyright (C) 2012,2015 Eric Dumazet <edumazet@google.com>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the authors may not be used to endorse or promote products
+ *    derived from this software without specific prior written permission.
+ *
+ * Alternatively, provided that this notice is retained in full, this
+ * software may be distributed under the terms of the GNU General
+ * Public License ("GPL") version 2, in which case the provisions of the
+ * GPL apply INSTEAD OF those given above.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
+ * DAMAGE.
+ *
+ */
+
+/* Controlling Queue Delay (CoDel) algorithm
+ * =========================================
+ * Source: Kathleen Nichols and Van Jacobson
+ * http://queue.acm.org/detail.cfm?id=2209336
+ *
+ * Implemented on Linux by Dave Taht and Eric Dumazet
+ */
+
+/* Qdiscs using the codel plugin must use codel_skb_cb in their own cb[] */
+struct codel_skb_cb {
+       codel_time_t enqueue_time;
+};
+
+static struct codel_skb_cb *get_codel_cb(const struct sk_buff *skb)
+{
+       qdisc_cb_private_validate(skb, sizeof(struct codel_skb_cb));
+       return (struct codel_skb_cb *)qdisc_skb_cb(skb)->data;
+}
+
+static codel_time_t codel_get_enqueue_time(const struct sk_buff *skb)
+{
+       return get_codel_cb(skb)->enqueue_time;
+}
+
+static void codel_set_enqueue_time(struct sk_buff *skb)
+{
+       get_codel_cb(skb)->enqueue_time = codel_get_time();
+}
+
+#endif
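
For reference, a hedged sketch of how a qdisc wires these helpers (the
function name and the omitted queue-limit check are illustrative; only the
two codel calls come from this file): the enqueue path stamps the skb, and
the dequeue path later passes codel_get_enqueue_time() as the
codel_skb_time_t callback so codel_should_drop() can compute the sojourn
time as now - enqueue_time.

static int example_codel_enqueue(struct sk_buff *skb, struct Qdisc *sch)
{
        codel_set_enqueue_time(skb);    /* stamp arrival into skb->cb[] */
        return qdisc_enqueue_tail(skb, sch);
}
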
index 2d280ab..8e86af8 100644 (file)
@@ -110,6 +110,11 @@ struct dsa_switch_tree {
                                       struct net_device *orig_dev);
        enum dsa_tag_protocol   tag_protocol;
 
+       /*
+        * Original copy of the master netdev ethtool_ops
+        */
+       struct ethtool_ops      master_ethtool_ops;
+
        /*
         * The switch and port to which the CPU is attached.
         */
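
The saved copy enables a save-and-wrap pattern: DSA can install its own
ethtool_ops on the master netdev (for example to append per-port
statistics) while delegating everything else to the original operations
kept in master_ethtool_ops. A rough sketch of one way the field could be
used; every name except master_ethtool_ops is invented, and the wrapper is
assumed to call back into the saved ops:

static void example_get_stats(struct net_device *dev,
                              struct ethtool_stats *stats, u64 *data);

static struct ethtool_ops example_wrapped_ops;

static void example_wrap_master_ops(struct dsa_switch_tree *dst,
                                    struct net_device *master)
{
        dst->master_ethtool_ops = *master->ethtool_ops; /* keep original */
        example_wrapped_ops = dst->master_ethtool_ops;
        example_wrapped_ops.get_ethtool_stats = example_get_stats;
        master->ethtool_ops = &example_wrapped_ops;
}
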
index 5c98443..6835d22 100644 (file)
@@ -85,12 +85,11 @@ struct dst_entry {
 #endif
 
 #ifdef CONFIG_64BIT
-       struct lwtunnel_state   *lwtstate;
        /*
         * Align __refcnt to a 64 bytes alignment
         * (L1_CACHE_SIZE would be too much)
         */
-       long                    __pad_to_align_refcnt[1];
+       long                    __pad_to_align_refcnt[2];
 #endif
        /*
         * __refcnt wants to be on a different cache line from
@@ -99,9 +98,7 @@ struct dst_entry {
        atomic_t                __refcnt;       /* client references    */
        int                     __use;
        unsigned long           lastuse;
-#ifndef CONFIG_64BIT
        struct lwtunnel_state   *lwtstate;
-#endif
        union {
                struct dst_entry        *next;
                struct rtable __rcu     *rt_next;
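
The point of the churn above is to keep __refcnt, which remote CPUs dirty
constantly, at a 64-byte offset once lwtstate no longer sits among the
leading read-mostly fields; removing an 8-byte pointer before the pad means
the pad must grow by one long. The trick in isolation, as a standalone
sketch with made-up fields (offsets assume a 64-bit build):

#include <stddef.h>
#include <stdio.h>

struct example_entry {
        void *hot_fields[6];            /* 48 bytes of read-mostly data */
        long  __pad_to_align_refcnt[2]; /* 16 bytes: refcnt lands at 64 */
        int   refcnt;                   /* dirtied by concurrent users */
};

int main(void)
{
        printf("refcnt offset: %zu\n", offsetof(struct example_entry, refcnt));
        return 0;
}
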
diff --git a/include/net/fq.h b/include/net/fq.h
new file mode 100644 (file)
index 0000000..268b490
--- /dev/null
@@ -0,0 +1,95 @@
+/*
+ * Copyright (c) 2016 Qualcomm Atheros, Inc
+ *
+ * GPL v2
+ *
+ * Based on net/sched/sch_fq_codel.c
+ */
+#ifndef __NET_SCHED_FQ_H
+#define __NET_SCHED_FQ_H
+
+struct fq_tin;
+
+/**
+ * struct fq_flow - per traffic flow queue
+ *
+ * @tin: owner of this flow. Used to manage collisions, i.e. when a packet
+ *     hashes to an index whose flow is already owned by a tin different
+ *     from the one the packet is destined to. In that case the implementer
+ *     must provide a fallback flow
+ * @flowchain: can be linked to fq_tin's new_flows or old_flows. Used for
+ *     DRR++ (deficit round robin) scheduling, similar to the scheme
+ *     found in net/sched/sch_fq_codel.c
+ * @backlogchain: can be linked to other fq_flows and to the fq. Used to keep
+ *     track of fat flows for efficient head-dropping when the packet limit
+ *     is reached
+ * @queue: sk_buff queue to hold packets
+ * @backlog: number of bytes pending in the queue. The number of packets can be
+ *     found in @queue.qlen
+ * @deficit: used for DRR++
+ */
+struct fq_flow {
+       struct fq_tin *tin;
+       struct list_head flowchain;
+       struct list_head backlogchain;
+       struct sk_buff_head queue;
+       u32 backlog;
+       int deficit;
+};
+
+/**
+ * struct fq_tin - a logical container of fq_flows
+ *
+ * Used to group fq_flows into a logical aggregate. The DRR++ scheme is used
+ * to pull interleaved packets out of the associated flows.
+ *
+ * @new_flows: linked list of fq_flow
+ * @old_flows: linked list of fq_flow
+ */
+struct fq_tin {
+       struct list_head new_flows;
+       struct list_head old_flows;
+       u32 backlog_bytes;
+       u32 backlog_packets;
+       u32 overlimit;
+       u32 collisions;
+       u32 flows;
+       u32 tx_bytes;
+       u32 tx_packets;
+};
+
+/**
+ * struct fq - main container for fair queuing purposes
+ *
+ * @backlogs: linked to fq_flows. Used to maintain fat flows for efficient
+ *     head-dropping when @backlog reaches @limit
+ * @limit: max number of packets that can be queued across all flows
+ * @backlog: number of packets queued across all flows
+ */
+struct fq {
+       struct fq_flow *flows;
+       struct list_head backlogs;
+       spinlock_t lock;
+       u32 flows_cnt;
+       u32 perturbation;
+       u32 limit;
+       u32 quantum;
+       u32 backlog;
+       u32 overlimit;
+       u32 collisions;
+};
+
+typedef struct sk_buff *fq_tin_dequeue_t(struct fq *,
+                                        struct fq_tin *,
+                                        struct fq_flow *flow);
+
+typedef void fq_skb_free_t(struct fq *,
+                          struct fq_tin *,
+                          struct fq_flow *,
+                          struct sk_buff *);
+
+typedef struct fq_flow *fq_flow_get_default_t(struct fq *,
+                                             struct fq_tin *,
+                                             int idx,
+                                             struct sk_buff *);
+
+#endif
diff --git a/include/net/fq_impl.h b/include/net/fq_impl.h
new file mode 100644 (file)
index 0000000..163f3ed
--- /dev/null
@@ -0,0 +1,277 @@
+/*
+ * Copyright (c) 2016 Qualcomm Atheros, Inc
+ *
+ * GPL v2
+ *
+ * Based on net/sched/sch_fq_codel.c
+ */
+#ifndef __NET_SCHED_FQ_IMPL_H
+#define __NET_SCHED_FQ_IMPL_H
+
+#include <net/fq.h>
+
+/* functions that are embedded into the includer */
+
+static struct sk_buff *fq_flow_dequeue(struct fq *fq,
+                                      struct fq_flow *flow)
+{
+       struct fq_tin *tin = flow->tin;
+       struct fq_flow *i;
+       struct sk_buff *skb;
+
+       lockdep_assert_held(&fq->lock);
+
+       skb = __skb_dequeue(&flow->queue);
+       if (!skb)
+               return NULL;
+
+       tin->backlog_bytes -= skb->len;
+       tin->backlog_packets--;
+       flow->backlog -= skb->len;
+       fq->backlog--;
+
+       if (flow->backlog == 0) {
+               list_del_init(&flow->backlogchain);
+       } else {
+               i = flow;
+
+               list_for_each_entry_continue(i, &fq->backlogs, backlogchain)
+                       if (i->backlog < flow->backlog)
+                               break;
+
+               list_move_tail(&flow->backlogchain,
+                              &i->backlogchain);
+       }
+
+       return skb;
+}
+
+static struct sk_buff *fq_tin_dequeue(struct fq *fq,
+                                     struct fq_tin *tin,
+                                     fq_tin_dequeue_t dequeue_func)
+{
+       struct fq_flow *flow;
+       struct list_head *head;
+       struct sk_buff *skb;
+
+       lockdep_assert_held(&fq->lock);
+
+begin:
+       head = &tin->new_flows;
+       if (list_empty(head)) {
+               head = &tin->old_flows;
+               if (list_empty(head))
+                       return NULL;
+       }
+
+       flow = list_first_entry(head, struct fq_flow, flowchain);
+
+       if (flow->deficit <= 0) {
+               flow->deficit += fq->quantum;
+               list_move_tail(&flow->flowchain,
+                              &tin->old_flows);
+               goto begin;
+       }
+
+       skb = dequeue_func(fq, tin, flow);
+       if (!skb) {
+               /* force a pass through old_flows to prevent starvation */
+               if ((head == &tin->new_flows) &&
+                   !list_empty(&tin->old_flows)) {
+                       list_move_tail(&flow->flowchain, &tin->old_flows);
+               } else {
+                       list_del_init(&flow->flowchain);
+                       flow->tin = NULL;
+               }
+               goto begin;
+       }
+
+       flow->deficit -= skb->len;
+       tin->tx_bytes += skb->len;
+       tin->tx_packets++;
+
+       return skb;
+}
+
+static struct fq_flow *fq_flow_classify(struct fq *fq,
+                                       struct fq_tin *tin,
+                                       struct sk_buff *skb,
+                                       fq_flow_get_default_t get_default_func)
+{
+       struct fq_flow *flow;
+       u32 hash;
+       u32 idx;
+
+       lockdep_assert_held(&fq->lock);
+
+       hash = skb_get_hash_perturb(skb, fq->perturbation);
+       idx = reciprocal_scale(hash, fq->flows_cnt);
+       flow = &fq->flows[idx];
+
+       if (flow->tin && flow->tin != tin) {
+               flow = get_default_func(fq, tin, idx, skb);
+               tin->collisions++;
+               fq->collisions++;
+       }
+
+       if (!flow->tin)
+               tin->flows++;
+
+       return flow;
+}
+
+static void fq_recalc_backlog(struct fq *fq,
+                             struct fq_tin *tin,
+                             struct fq_flow *flow)
+{
+       struct fq_flow *i;
+
+       if (list_empty(&flow->backlogchain))
+               list_add_tail(&flow->backlogchain, &fq->backlogs);
+
+       i = flow;
+       list_for_each_entry_continue_reverse(i, &fq->backlogs,
+                                            backlogchain)
+               if (i->backlog > flow->backlog)
+                       break;
+
+       list_move(&flow->backlogchain, &i->backlogchain);
+}
+
+static void fq_tin_enqueue(struct fq *fq,
+                          struct fq_tin *tin,
+                          struct sk_buff *skb,
+                          fq_skb_free_t free_func,
+                          fq_flow_get_default_t get_default_func)
+{
+       struct fq_flow *flow;
+
+       lockdep_assert_held(&fq->lock);
+
+       flow = fq_flow_classify(fq, tin, skb, get_default_func);
+
+       flow->tin = tin;
+       flow->backlog += skb->len;
+       tin->backlog_bytes += skb->len;
+       tin->backlog_packets++;
+       fq->backlog++;
+
+       fq_recalc_backlog(fq, tin, flow);
+
+       if (list_empty(&flow->flowchain)) {
+               flow->deficit = fq->quantum;
+               list_add_tail(&flow->flowchain,
+                             &tin->new_flows);
+       }
+
+       __skb_queue_tail(&flow->queue, skb);
+
+       if (fq->backlog > fq->limit) {
+               flow = list_first_entry_or_null(&fq->backlogs,
+                                               struct fq_flow,
+                                               backlogchain);
+               if (!flow)
+                       return;
+
+               skb = fq_flow_dequeue(fq, flow);
+               if (!skb)
+                       return;
+
+               free_func(fq, flow->tin, flow, skb);
+
+               flow->tin->overlimit++;
+               fq->overlimit++;
+       }
+}
+
+static void fq_flow_reset(struct fq *fq,
+                         struct fq_flow *flow,
+                         fq_skb_free_t free_func)
+{
+       struct sk_buff *skb;
+
+       while ((skb = fq_flow_dequeue(fq, flow)))
+               free_func(fq, flow->tin, flow, skb);
+
+       if (!list_empty(&flow->flowchain))
+               list_del_init(&flow->flowchain);
+
+       if (!list_empty(&flow->backlogchain))
+               list_del_init(&flow->backlogchain);
+
+       flow->tin = NULL;
+
+       WARN_ON_ONCE(flow->backlog);
+}
+
+static void fq_tin_reset(struct fq *fq,
+                        struct fq_tin *tin,
+                        fq_skb_free_t free_func)
+{
+       struct list_head *head;
+       struct fq_flow *flow;
+
+       for (;;) {
+               head = &tin->new_flows;
+               if (list_empty(head)) {
+                       head = &tin->old_flows;
+                       if (list_empty(head))
+                               break;
+               }
+
+               flow = list_first_entry(head, struct fq_flow, flowchain);
+               fq_flow_reset(fq, flow, free_func);
+       }
+
+       WARN_ON_ONCE(tin->backlog_bytes);
+       WARN_ON_ONCE(tin->backlog_packets);
+}
+
+static void fq_flow_init(struct fq_flow *flow)
+{
+       INIT_LIST_HEAD(&flow->flowchain);
+       INIT_LIST_HEAD(&flow->backlogchain);
+       __skb_queue_head_init(&flow->queue);
+}
+
+static void fq_tin_init(struct fq_tin *tin)
+{
+       INIT_LIST_HEAD(&tin->new_flows);
+       INIT_LIST_HEAD(&tin->old_flows);
+}
+
+static int fq_init(struct fq *fq, int flows_cnt)
+{
+       int i;
+
+       memset(fq, 0, sizeof(fq[0]));
+       INIT_LIST_HEAD(&fq->backlogs);
+       spin_lock_init(&fq->lock);
+       fq->flows_cnt = max_t(u32, flows_cnt, 1);
+       fq->perturbation = prandom_u32();
+       fq->quantum = 300;
+       fq->limit = 8192;
+
+       fq->flows = kcalloc(fq->flows_cnt, sizeof(fq->flows[0]), GFP_KERNEL);
+       if (!fq->flows)
+               return -ENOMEM;
+
+       for (i = 0; i < fq->flows_cnt; i++)
+               fq_flow_init(&fq->flows[i]);
+
+       return 0;
+}
+
+static void fq_reset(struct fq *fq,
+                    fq_skb_free_t free_func)
+{
+       int i;
+
+       for (i = 0; i < fq->flows_cnt; i++)
+               fq_flow_reset(fq, &fq->flows[i], free_func);
+
+       kfree(fq->flows);
+       fq->flows = NULL;
+}
+
+#endif
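
Together, fq_tin_enqueue() and fq_tin_dequeue() implement DRR++: a flow may
transmit while its deficit is positive, and a flow that exhausts it is
refilled by one quantum and rotated to the old list. The standalone sketch
below (every name invented, the new/old list rotation elided) shows the
deficit accounting pacing a fat flow while a sparse flow drains quickly:

#include <stdio.h>

#define QUANTUM 300

struct demo_flow {
        const char *name;
        int deficit;
        int pkt_len[4];         /* pending packet sizes, 0 = empty slot */
        int head;
};

static int demo_dequeue(struct demo_flow *f)
{
        int len = f->pkt_len[f->head];

        if (len)
                f->pkt_len[f->head++] = 0;
        return len;
}

int main(void)
{
        struct demo_flow flows[2] = {
                { "bulk",   QUANTUM, { 1400, 1400, 1400, 0 }, 0 },
                { "sparse", QUANTUM, {  100,  100,    0, 0 }, 0 },
        };
        int remaining = 5;      /* total queued packets */

        while (remaining) {
                for (int i = 0; i < 2; i++) {
                        struct demo_flow *f = &flows[i];
                        int len;

                        if (f->deficit <= 0) {
                                f->deficit += QUANTUM; /* exhausted: refill */
                                continue;
                        }
                        len = demo_dequeue(f);
                        if (!len)
                                continue;
                        f->deficit -= len;
                        remaining--;
                        printf("tx %-6s len=%4d deficit=%d\n",
                               f->name, len, f->deficit);
                }
        }
        return 0;
}
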
index cbafa37..610cd39 100644 (file)
@@ -19,17 +19,19 @@ struct gnet_dump {
        /* Backward compatibility */
        int               compat_tc_stats;
        int               compat_xstats;
+       int               padattr;
        void *            xstats;
        int               xstats_len;
        struct tc_stats   tc_stats;
 };
 
 int gnet_stats_start_copy(struct sk_buff *skb, int type, spinlock_t *lock,
-                         struct gnet_dump *d);
+                         struct gnet_dump *d, int padattr);
 
 int gnet_stats_start_copy_compat(struct sk_buff *skb, int type,
                                 int tc_stats_type, int xstats_type,
-                                spinlock_t *lock, struct gnet_dump *d);
+                                spinlock_t *lock, struct gnet_dump *d,
+                                int padattr);
 
 int gnet_stats_copy_basic(struct gnet_dump *d,
                          struct gnet_stats_basic_cpu __percpu *cpu,
index 97eafdc..a14093c 100644 (file)
@@ -25,4 +25,108 @@ int gre_del_protocol(const struct gre_protocol *proto, u8 version);
 
 struct net_device *gretap_fb_dev_create(struct net *net, const char *name,
                                       u8 name_assign_type);
+int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+                    bool *csum_err);
+
+static inline int gre_calc_hlen(__be16 o_flags)
+{
+       int addend = 4;
+
+       if (o_flags & TUNNEL_CSUM)
+               addend += 4;
+       if (o_flags & TUNNEL_KEY)
+               addend += 4;
+       if (o_flags & TUNNEL_SEQ)
+               addend += 4;
+       return addend;
+}
+
+static inline __be16 gre_flags_to_tnl_flags(__be16 flags)
+{
+       __be16 tflags = 0;
+
+       if (flags & GRE_CSUM)
+               tflags |= TUNNEL_CSUM;
+       if (flags & GRE_ROUTING)
+               tflags |= TUNNEL_ROUTING;
+       if (flags & GRE_KEY)
+               tflags |= TUNNEL_KEY;
+       if (flags & GRE_SEQ)
+               tflags |= TUNNEL_SEQ;
+       if (flags & GRE_STRICT)
+               tflags |= TUNNEL_STRICT;
+       if (flags & GRE_REC)
+               tflags |= TUNNEL_REC;
+       if (flags & GRE_VERSION)
+               tflags |= TUNNEL_VERSION;
+
+       return tflags;
+}
+
+static inline __be16 gre_tnl_flags_to_gre_flags(__be16 tflags)
+{
+       __be16 flags = 0;
+
+       if (tflags & TUNNEL_CSUM)
+               flags |= GRE_CSUM;
+       if (tflags & TUNNEL_ROUTING)
+               flags |= GRE_ROUTING;
+       if (tflags & TUNNEL_KEY)
+               flags |= GRE_KEY;
+       if (tflags & TUNNEL_SEQ)
+               flags |= GRE_SEQ;
+       if (tflags & TUNNEL_STRICT)
+               flags |= GRE_STRICT;
+       if (tflags & TUNNEL_REC)
+               flags |= GRE_REC;
+       if (tflags & TUNNEL_VERSION)
+               flags |= GRE_VERSION;
+
+       return flags;
+}
+
+static inline __sum16 gre_checksum(struct sk_buff *skb)
+{
+       __wsum csum;
+
+       if (skb->ip_summed == CHECKSUM_PARTIAL)
+               csum = lco_csum(skb);
+       else
+               csum = skb_checksum(skb, 0, skb->len, 0);
+       return csum_fold(csum);
+}
+
+static inline void gre_build_header(struct sk_buff *skb, int hdr_len,
+                                   __be16 flags, __be16 proto,
+                                   __be32 key, __be32 seq)
+{
+       struct gre_base_hdr *greh;
+
+       skb_push(skb, hdr_len);
+
+       skb_reset_transport_header(skb);
+       greh = (struct gre_base_hdr *)skb->data;
+       greh->flags = gre_tnl_flags_to_gre_flags(flags);
+       greh->protocol = proto;
+
+       if (flags & (TUNNEL_KEY | TUNNEL_CSUM | TUNNEL_SEQ)) {
+               __be32 *ptr = (__be32 *)(((u8 *)greh) + hdr_len - 4);
+
+               if (flags & TUNNEL_SEQ) {
+                       *ptr = seq;
+                       ptr--;
+               }
+               if (flags & TUNNEL_KEY) {
+                       *ptr = key;
+                       ptr--;
+               }
+               if (flags & TUNNEL_CSUM &&
+                   !(skb_shinfo(skb)->gso_type &
+                     (SKB_GSO_GRE | SKB_GSO_GRE_CSUM))) {
+                       *ptr = 0;
+                       *(__sum16 *)ptr = gre_checksum(skb);
+               }
+       }
+}
+
 #endif
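
gre_build_header() fills the optional words from the tail of the header
backwards, which keeps each flag test independent while producing the RFC
2890 on-wire order: checksum, then key, then sequence, each taking the four
bytes gre_calc_hlen() accounted for. A standalone sketch of the same fill
order (flag values are illustrative; the kernel's TUNNEL_* constants are
big-endian bits):

#include <stdint.h>
#include <stdio.h>

#define TUNNEL_CSUM 0x01
#define TUNNEL_KEY  0x04
#define TUNNEL_SEQ  0x08

static int calc_hlen(int flags)
{
        int addend = 4;                 /* base header */

        if (flags & TUNNEL_CSUM)
                addend += 4;
        if (flags & TUNNEL_KEY)
                addend += 4;
        if (flags & TUNNEL_SEQ)
                addend += 4;
        return addend;
}

int main(void)
{
        int flags = TUNNEL_KEY | TUNNEL_SEQ;
        int hlen = calc_hlen(flags);            /* 4 + 4 + 4 = 12 */
        uint32_t hdr[4] = { 0 };
        uint32_t *ptr = &hdr[hlen / 4 - 1];     /* last optional word */

        if (flags & TUNNEL_SEQ)
                *ptr-- = 1000;                  /* sequence: last word */
        if (flags & TUNNEL_KEY)
                *ptr-- = 0xbeef;                /* key: just before it */
        /* a checksum word, when present, is where ptr now points */

        printf("hlen=%d base=%#x key=%#x seq=%#x\n", hlen,
               (unsigned)hdr[0], (unsigned)hdr[1], (unsigned)hdr[2]);
        return 0;
}
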
index 970028e..3ef2743 100644 (file)
@@ -30,9 +30,9 @@ struct icmp_err {
 
 extern const struct icmp_err icmp_err_convert[];
 #define ICMP_INC_STATS(net, field)     SNMP_INC_STATS((net)->mib.icmp_statistics, field)
-#define ICMP_INC_STATS_BH(net, field)  SNMP_INC_STATS_BH((net)->mib.icmp_statistics, field)
+#define __ICMP_INC_STATS(net, field)   __SNMP_INC_STATS((net)->mib.icmp_statistics, field)
 #define ICMPMSGOUT_INC_STATS(net, field)       SNMP_INC_STATS_ATOMIC_LONG((net)->mib.icmpmsg_statistics, field+256)
-#define ICMPMSGIN_INC_STATS_BH(net, field)     SNMP_INC_STATS_ATOMIC_LONG((net)->mib.icmpmsg_statistics, field)
+#define ICMPMSGIN_INC_STATS(net, field)                SNMP_INC_STATS_ATOMIC_LONG((net)->mib.icmpmsg_statistics, field)
 
 struct dst_entry;
 struct net_proto_family;
index 93725e5..247ac82 100644 (file)
@@ -187,17 +187,15 @@ void ip_send_unicast_reply(struct sock *sk, struct sk_buff *skb,
                           unsigned int len);
 
 #define IP_INC_STATS(net, field)       SNMP_INC_STATS64((net)->mib.ip_statistics, field)
-#define IP_INC_STATS_BH(net, field)    SNMP_INC_STATS64_BH((net)->mib.ip_statistics, field)
+#define __IP_INC_STATS(net, field)     __SNMP_INC_STATS64((net)->mib.ip_statistics, field)
 #define IP_ADD_STATS(net, field, val)  SNMP_ADD_STATS64((net)->mib.ip_statistics, field, val)
-#define IP_ADD_STATS_BH(net, field, val) SNMP_ADD_STATS64_BH((net)->mib.ip_statistics, field, val)
+#define __IP_ADD_STATS(net, field, val) __SNMP_ADD_STATS64((net)->mib.ip_statistics, field, val)
 #define IP_UPD_PO_STATS(net, field, val) SNMP_UPD_PO_STATS64((net)->mib.ip_statistics, field, val)
-#define IP_UPD_PO_STATS_BH(net, field, val) SNMP_UPD_PO_STATS64_BH((net)->mib.ip_statistics, field, val)
+#define __IP_UPD_PO_STATS(net, field, val) __SNMP_UPD_PO_STATS64((net)->mib.ip_statistics, field, val)
 #define NET_INC_STATS(net, field)      SNMP_INC_STATS((net)->mib.net_statistics, field)
-#define NET_INC_STATS_BH(net, field)   SNMP_INC_STATS_BH((net)->mib.net_statistics, field)
-#define NET_INC_STATS_USER(net, field)         SNMP_INC_STATS_USER((net)->mib.net_statistics, field)
+#define __NET_INC_STATS(net, field)    __SNMP_INC_STATS((net)->mib.net_statistics, field)
 #define NET_ADD_STATS(net, field, adnd)        SNMP_ADD_STATS((net)->mib.net_statistics, field, adnd)
-#define NET_ADD_STATS_BH(net, field, adnd) SNMP_ADD_STATS_BH((net)->mib.net_statistics, field, adnd)
-#define NET_ADD_STATS_USER(net, field, adnd) SNMP_ADD_STATS_USER((net)->mib.net_statistics, field, adnd)
+#define __NET_ADD_STATS(net, field, adnd) __SNMP_ADD_STATS((net)->mib.net_statistics, field, adnd)
 
 u64 snmp_get_cpu_field(void __percpu *mib, int cpu, int offct);
 unsigned long snmp_fold_field(void __percpu *mib, int offt);
index 499a707..fb9e015 100644 (file)
@@ -42,6 +42,7 @@ struct ip6_tnl {
        struct __ip6_tnl_parm parms;    /* tunnel configuration parameters */
        struct flowi fl;        /* flowi template for xmit */
        struct dst_cache dst_cache;     /* cached dst */
+       struct gro_cells gro_cells;
 
        int err_count;
        unsigned long err_time;
@@ -49,8 +50,10 @@ struct ip6_tnl {
        /* These fields used only by GRE */
        __u32 i_seqno;  /* The last seen seqno  */
        __u32 o_seqno;  /* The last output seqno */
-       int hlen;       /* Precalculated GRE header length */
+       int hlen;       /* tun_hlen + encap_hlen */
+       int tun_hlen;   /* Precalculated header length */
        int mlink;
+
 };
 
 /* Tunnel encapsulation limit destination sub-option */
@@ -63,13 +66,19 @@ struct ipv6_tlv_tnl_enc_lim {
 
 int ip6_tnl_rcv_ctl(struct ip6_tnl *t, const struct in6_addr *laddr,
                const struct in6_addr *raddr);
+int ip6_tnl_rcv(struct ip6_tnl *tunnel, struct sk_buff *skb,
+               const struct tnl_ptk_info *tpi, struct metadata_dst *tun_dst,
+               bool log_ecn_error);
 int ip6_tnl_xmit_ctl(struct ip6_tnl *t, const struct in6_addr *laddr,
                     const struct in6_addr *raddr);
+int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield,
+                struct flowi6 *fl6, int encap_limit, __u32 *pmtu, __u8 proto);
 __u16 ip6_tnl_parse_tlv_enc_lim(struct sk_buff *skb, __u8 *raw);
 __u32 ip6_tnl_get_cap(struct ip6_tnl *t, const struct in6_addr *laddr,
                             const struct in6_addr *raddr);
 struct net *ip6_tnl_get_link_net(const struct net_device *dev);
 int ip6_tnl_get_iflink(const struct net_device *dev);
+int ip6_tnl_change_mtu(struct net_device *dev, int new_mtu);
 
 #ifdef CONFIG_INET
 static inline void ip6tunnel_xmit(struct sock *sk, struct sk_buff *skb,
index 6d79091..d916b43 100644 (file)
@@ -160,6 +160,7 @@ struct tnl_ptk_info {
 
 #define PACKET_RCVD    0
 #define PACKET_REJECT  1
+#define PACKET_NEXT    2
 
 #define IP_TNL_HASH_BITS   7
 #define IP_TNL_HASH_SIZE   (1 << IP_TNL_HASH_BITS)
index e93e947..11a0452 100644 (file)
@@ -121,21 +121,21 @@ struct frag_hdr {
 extern int sysctl_mld_max_msf;
 extern int sysctl_mld_qrv;
 
-#define _DEVINC(net, statname, modifier, idev, field)                  \
+#define _DEVINC(net, statname, mod, idev, field)                       \
 ({                                                                     \
        struct inet6_dev *_idev = (idev);                               \
        if (likely(_idev != NULL))                                      \
-               SNMP_INC_STATS##modifier((_idev)->stats.statname, (field)); \
-       SNMP_INC_STATS##modifier((net)->mib.statname##_statistics, (field));\
+               mod##SNMP_INC_STATS64((_idev)->stats.statname, (field));\
+       mod##SNMP_INC_STATS64((net)->mib.statname##_statistics, (field));\
 })
 
 /* per device counters are atomic_long_t */
-#define _DEVINCATOMIC(net, statname, modifier, idev, field)            \
+#define _DEVINCATOMIC(net, statname, mod, idev, field)                 \
 ({                                                                     \
        struct inet6_dev *_idev = (idev);                               \
        if (likely(_idev != NULL))                                      \
                SNMP_INC_STATS_ATOMIC_LONG((_idev)->stats.statname##dev, (field)); \
-       SNMP_INC_STATS##modifier((net)->mib.statname##_statistics, (field));\
+       mod##SNMP_INC_STATS((net)->mib.statname##_statistics, (field));\
 })
 
 /* per device and per net counters are atomic_long_t */
@@ -147,46 +147,44 @@ extern int sysctl_mld_qrv;
        SNMP_INC_STATS_ATOMIC_LONG((net)->mib.statname##_statistics, (field));\
 })
 
-#define _DEVADD(net, statname, modifier, idev, field, val)             \
+#define _DEVADD(net, statname, mod, idev, field, val)                  \
 ({                                                                     \
        struct inet6_dev *_idev = (idev);                               \
        if (likely(_idev != NULL))                                      \
-               SNMP_ADD_STATS##modifier((_idev)->stats.statname, (field), (val)); \
-       SNMP_ADD_STATS##modifier((net)->mib.statname##_statistics, (field), (val));\
+               mod##SNMP_ADD_STATS((_idev)->stats.statname, (field), (val)); \
+       mod##SNMP_ADD_STATS((net)->mib.statname##_statistics, (field), (val));\
 })
 
-#define _DEVUPD(net, statname, modifier, idev, field, val)             \
+#define _DEVUPD(net, statname, mod, idev, field, val)                  \
 ({                                                                     \
        struct inet6_dev *_idev = (idev);                               \
        if (likely(_idev != NULL))                                      \
-               SNMP_UPD_PO_STATS##modifier((_idev)->stats.statname, field, (val)); \
-       SNMP_UPD_PO_STATS##modifier((net)->mib.statname##_statistics, field, (val));\
+               mod##SNMP_UPD_PO_STATS((_idev)->stats.statname, field, (val)); \
+       mod##SNMP_UPD_PO_STATS((net)->mib.statname##_statistics, field, (val));\
 })
 
 /* MIBs */
 
 #define IP6_INC_STATS(net, idev,field)         \
-               _DEVINC(net, ipv6, 64, idev, field)
-#define IP6_INC_STATS_BH(net, idev,field)      \
-               _DEVINC(net, ipv6, 64_BH, idev, field)
+               _DEVINC(net, ipv6, , idev, field)
+#define __IP6_INC_STATS(net, idev,field)       \
+               _DEVINC(net, ipv6, __, idev, field)
 #define IP6_ADD_STATS(net, idev,field,val)     \
-               _DEVADD(net, ipv6, 64, idev, field, val)
-#define IP6_ADD_STATS_BH(net, idev,field,val)  \
-               _DEVADD(net, ipv6, 64_BH, idev, field, val)
+               _DEVADD(net, ipv6, , idev, field, val)
+#define __IP6_ADD_STATS(net, idev,field,val)   \
+               _DEVADD(net, ipv6, __, idev, field, val)
 #define IP6_UPD_PO_STATS(net, idev,field,val)   \
-               _DEVUPD(net, ipv6, 64, idev, field, val)
-#define IP6_UPD_PO_STATS_BH(net, idev,field,val)   \
-               _DEVUPD(net, ipv6, 64_BH, idev, field, val)
+               _DEVUPD(net, ipv6, , idev, field, val)
+#define __IP6_UPD_PO_STATS(net, idev,field,val)   \
+               _DEVUPD(net, ipv6, __, idev, field, val)
 #define ICMP6_INC_STATS(net, idev, field)      \
                _DEVINCATOMIC(net, icmpv6, , idev, field)
-#define ICMP6_INC_STATS_BH(net, idev, field)   \
-               _DEVINCATOMIC(net, icmpv6, _BH, idev, field)
+#define __ICMP6_INC_STATS(net, idev, field)    \
+               _DEVINCATOMIC(net, icmpv6, __, idev, field)
 
 #define ICMP6MSGOUT_INC_STATS(net, idev, field)                \
        _DEVINC_ATOMIC_ATOMIC(net, icmpv6msg, idev, field +256)
-#define ICMP6MSGOUT_INC_STATS_BH(net, idev, field)     \
-       _DEVINC_ATOMIC_ATOMIC(net, icmpv6msg, idev, field +256)
-#define ICMP6MSGIN_INC_STATS_BH(net, idev, field)      \
+#define ICMP6MSGIN_INC_STATS(net, idev, field) \
        _DEVINC_ATOMIC_ATOMIC(net, icmpv6msg, idev, field)
 
 struct ip6_ra_chain {
@@ -253,6 +251,13 @@ struct ipv6_fl_socklist {
        struct rcu_head                 rcu;
 };
 
+struct ipcm6_cookie {
+       __s16 hlimit;
+       __s16 tclass;
+       __s8  dontfrag;
+       struct ipv6_txoptions *opt;
+};
+
 static inline struct ipv6_txoptions *txopt_get(const struct ipv6_pinfo *np)
 {
        struct ipv6_txoptions *opt;
@@ -865,9 +870,9 @@ int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr);
 int ip6_append_data(struct sock *sk,
                    int getfrag(void *from, char *to, int offset, int len,
                                int odd, struct sk_buff *skb),
-                   void *from, int length, int transhdrlen, int hlimit,
-                   int tclass, struct ipv6_txoptions *opt, struct flowi6 *fl6,
-                   struct rt6_info *rt, unsigned int flags, int dontfrag,
+                   void *from, int length, int transhdrlen,
+                   struct ipcm6_cookie *ipc6, struct flowi6 *fl6,
+                   struct rt6_info *rt, unsigned int flags,
                    const struct sockcm_cookie *sockc);
 
 int ip6_push_pending_frames(struct sock *sk);
@@ -883,9 +888,8 @@ struct sk_buff *ip6_make_skb(struct sock *sk,
                             int getfrag(void *from, char *to, int offset,
                                         int len, int odd, struct sk_buff *skb),
                             void *from, int length, int transhdrlen,
-                            int hlimit, int tclass, struct ipv6_txoptions *opt,
-                            struct flowi6 *fl6, struct rt6_info *rt,
-                            unsigned int flags, int dontfrag,
+                            struct ipcm6_cookie *ipc6, struct flowi6 *fl6,
+                            struct rt6_info *rt, unsigned int flags,
                             const struct sockcm_cookie *sockc);
 
 static inline struct sk_buff *ip6_finish_skb(struct sock *sk)
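
Callers now bundle the per-message hop limit, traffic class, dontfrag flag
and tx options into one struct ipcm6_cookie instead of passing three loose
ints alongside the options pointer. A hedged sketch of the new calling
convention (the function around the call is invented; the -1 "use the
socket/route default" initialisation mirrors existing callers, and
ip_generic_getfrag() is the stock copy helper):

static int example_send(struct sock *sk, struct msghdr *msg, size_t len,
                        struct flowi6 *fl6, struct rt6_info *rt,
                        struct sockcm_cookie *sockc)
{
        struct ipcm6_cookie ipc6;

        ipc6.hlimit = -1;       /* -1: use the socket/route default */
        ipc6.tclass = -1;
        ipc6.dontfrag = -1;
        ipc6.opt = NULL;

        return ip6_append_data(sk, ip_generic_getfrag, msg, len, 0,
                               &ipc6, fl6, rt, MSG_DONTWAIT, sockc);
}
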
index 6cd7a70..e465c85 100644 (file)
@@ -287,6 +287,16 @@ static inline void ieee802154_le16_to_be16(void *be16_dst, const void *le16_src)
        put_unaligned_be16(get_unaligned_le16(le16_src), be16_dst);
 }
 
+/**
+ * ieee802154_be16_to_le16 - copies and converts be16 to le16
+ * @le16_dst: le16 destination pointer
+ * @be16_src: be16 source pointer
+ */
+static inline void ieee802154_be16_to_le16(void *le16_dst, const void *be16_src)
+{
+       put_unaligned_le16(get_unaligned_be16(be16_src), le16_dst);
+}
+
 /**
  * ieee802154_alloc_hw - Allocate a new hardware device
  *
index 2f87c1b..006a7b8 100644 (file)
@@ -47,6 +47,9 @@ static inline int rtnl_msg_family(const struct nlmsghdr *nlh)
  *     @get_num_rx_queues: Function to determine number of receive queues
  *                         to create when creating a new device.
  *     @get_link_net: Function to get the i/o netns of the device
+ *     @get_linkxstats_size: Function to calculate the required room for
+ *                           dumping device-specific extended link stats
+ *     @fill_linkxstats: Function to dump device-specific extended link stats
  */
 struct rtnl_link_ops {
        struct list_head        list;
@@ -95,6 +98,10 @@ struct rtnl_link_ops {
                                                   const struct net_device *dev,
                                                   const struct net_device *slave_dev);
        struct net              *(*get_link_net)(const struct net_device *dev);
+       size_t                  (*get_linkxstats_size)(const struct net_device *dev);
+       int                     (*fill_linkxstats)(struct sk_buff *skb,
+                                                  const struct net_device *dev,
+                                                  int *prividx);
 };
 
 int __rtnl_link_register(struct rtnl_link_ops *ops);
index 3f1c0ff..b392ac8 100644 (file)
@@ -205,10 +205,9 @@ extern int sysctl_sctp_wmem[3];
  */
 
 /* SCTP SNMP MIB stats handlers */
-#define SCTP_INC_STATS(net, field)      SNMP_INC_STATS((net)->sctp.sctp_statistics, field)
-#define SCTP_INC_STATS_BH(net, field)   SNMP_INC_STATS_BH((net)->sctp.sctp_statistics, field)
-#define SCTP_INC_STATS_USER(net, field) SNMP_INC_STATS_USER((net)->sctp.sctp_statistics, field)
-#define SCTP_DEC_STATS(net, field)      SNMP_DEC_STATS((net)->sctp.sctp_statistics, field)
+#define SCTP_INC_STATS(net, field)     SNMP_INC_STATS((net)->sctp.sctp_statistics, field)
+#define __SCTP_INC_STATS(net, field)   __SNMP_INC_STATS((net)->sctp.sctp_statistics, field)
+#define SCTP_DEC_STATS(net, field)     SNMP_DEC_STATS((net)->sctp.sctp_statistics, field)
 
 /* sctp mib definitions */
 enum {
index 558bae3..16b013a 100644 (file)
@@ -218,7 +218,7 @@ struct sctp_sock {
                frag_interleave:1,
                recvrcvinfo:1,
                recvnxtinfo:1,
-               pending_data_ready:1;
+               data_ready_signalled:1;
 
        atomic_t pd_mode;
        /* Receive to here while partial delivery is in effect. */
index 35512ac..c9228ad 100644 (file)
@@ -123,12 +123,9 @@ struct linux_xfrm_mib {
 #define DECLARE_SNMP_STAT(type, name)  \
        extern __typeof__(type) __percpu *name
 
-#define SNMP_INC_STATS_BH(mib, field)  \
+#define __SNMP_INC_STATS(mib, field)   \
                        __this_cpu_inc(mib->mibs[field])
 
-#define SNMP_INC_STATS_USER(mib, field)        \
-                       this_cpu_inc(mib->mibs[field])
-
 #define SNMP_INC_STATS_ATOMIC_LONG(mib, field) \
                        atomic_long_inc(&mib->mibs[field])
 
@@ -138,12 +135,9 @@ struct linux_xfrm_mib {
 #define SNMP_DEC_STATS(mib, field)     \
                        this_cpu_dec(mib->mibs[field])
 
-#define SNMP_ADD_STATS_BH(mib, field, addend)  \
+#define __SNMP_ADD_STATS(mib, field, addend)   \
                        __this_cpu_add(mib->mibs[field], addend)
 
-#define SNMP_ADD_STATS_USER(mib, field, addend)        \
-                       this_cpu_add(mib->mibs[field], addend)
-
 #define SNMP_ADD_STATS(mib, field, addend)     \
                        this_cpu_add(mib->mibs[field], addend)
 #define SNMP_UPD_PO_STATS(mib, basefield, addend)      \
@@ -152,7 +146,7 @@ struct linux_xfrm_mib {
                this_cpu_inc(ptr[basefield##PKTS]);             \
                this_cpu_add(ptr[basefield##OCTETS], addend);   \
        } while (0)
-#define SNMP_UPD_PO_STATS_BH(mib, basefield, addend)   \
+#define __SNMP_UPD_PO_STATS(mib, basefield, addend)    \
        do { \
                __typeof__((mib->mibs) + 0) ptr = mib->mibs;    \
                __this_cpu_inc(ptr[basefield##PKTS]);           \
@@ -162,7 +156,7 @@ struct linux_xfrm_mib {
 
 #if BITS_PER_LONG==32
 
-#define SNMP_ADD_STATS64_BH(mib, field, addend)                        \
+#define __SNMP_ADD_STATS64(mib, field, addend)                                 \
        do {                                                            \
                __typeof__(*mib) *ptr = raw_cpu_ptr(mib);               \
                u64_stats_update_begin(&ptr->syncp);                    \
@@ -170,20 +164,16 @@ struct linux_xfrm_mib {
                u64_stats_update_end(&ptr->syncp);                      \
        } while (0)
 
-#define SNMP_ADD_STATS64_USER(mib, field, addend)                      \
+#define SNMP_ADD_STATS64(mib, field, addend)                           \
        do {                                                            \
                local_bh_disable();                                     \
-               SNMP_ADD_STATS64_BH(mib, field, addend);                \
-               local_bh_enable();                                      \
+               __SNMP_ADD_STATS64(mib, field, addend);                 \
+               local_bh_enable();                                      \
        } while (0)
 
-#define SNMP_ADD_STATS64(mib, field, addend)                           \
-               SNMP_ADD_STATS64_USER(mib, field, addend)
-
-#define SNMP_INC_STATS64_BH(mib, field) SNMP_ADD_STATS64_BH(mib, field, 1)
-#define SNMP_INC_STATS64_USER(mib, field) SNMP_ADD_STATS64_USER(mib, field, 1)
+#define __SNMP_INC_STATS64(mib, field) SNMP_ADD_STATS64(mib, field, 1)
 #define SNMP_INC_STATS64(mib, field) SNMP_ADD_STATS64(mib, field, 1)
-#define SNMP_UPD_PO_STATS64_BH(mib, basefield, addend)                 \
+#define __SNMP_UPD_PO_STATS64(mib, basefield, addend)                  \
        do {                                                            \
                __typeof__(*mib) *ptr;                          \
                ptr = raw_cpu_ptr((mib));                               \
@@ -195,19 +185,17 @@ struct linux_xfrm_mib {
 #define SNMP_UPD_PO_STATS64(mib, basefield, addend)                    \
        do {                                                            \
                local_bh_disable();                                     \
-               SNMP_UPD_PO_STATS64_BH(mib, basefield, addend);         \
-               local_bh_enable();                                      \
+               __SNMP_UPD_PO_STATS64(mib, basefield, addend);          \
+               local_bh_enable();                                      \
        } while (0)
 #else
-#define SNMP_INC_STATS64_BH(mib, field)                SNMP_INC_STATS_BH(mib, field)
-#define SNMP_INC_STATS64_USER(mib, field)      SNMP_INC_STATS_USER(mib, field)
+#define __SNMP_INC_STATS64(mib, field)         __SNMP_INC_STATS(mib, field)
 #define SNMP_INC_STATS64(mib, field)           SNMP_INC_STATS(mib, field)
 #define SNMP_DEC_STATS64(mib, field)           SNMP_DEC_STATS(mib, field)
-#define SNMP_ADD_STATS64_BH(mib, field, addend) SNMP_ADD_STATS_BH(mib, field, addend)
-#define SNMP_ADD_STATS64_USER(mib, field, addend) SNMP_ADD_STATS_USER(mib, field, addend)
+#define __SNMP_ADD_STATS64(mib, field, addend) __SNMP_ADD_STATS(mib, field, addend)
 #define SNMP_ADD_STATS64(mib, field, addend)   SNMP_ADD_STATS(mib, field, addend)
 #define SNMP_UPD_PO_STATS64(mib, basefield, addend) SNMP_UPD_PO_STATS(mib, basefield, addend)
-#define SNMP_UPD_PO_STATS64_BH(mib, basefield, addend) SNMP_UPD_PO_STATS_BH(mib, basefield, addend)
+#define __SNMP_UPD_PO_STATS64(mib, basefield, addend) __SNMP_UPD_PO_STATS(mib, basefield, addend)
 #endif
 
 #endif
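
The churn in this file boils down to one naming rule replacing the
_BH/_USER suffix pair: an unprefixed macro is callable from any context
because this_cpu_*() protects itself against preemption and interrupts,
while a double-underscore macro uses __this_cpu_*() and trusts the caller
to have preemption or BH disabled already. Schematically (macro names
invented):

/* callable anywhere: this_cpu_inc() is safe vs. preemption and IRQs */
#define FOO_INC_STATS(mib, field)      this_cpu_inc((mib)->mibs[field])

/* caller guarantees this CPU cannot be preempted (e.g. BH is off) */
#define __FOO_INC_STATS(mib, field)    __this_cpu_inc((mib)->mibs[field])
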
index 52448ba..c9c8b19 100644 (file)
@@ -630,7 +630,11 @@ static inline void sk_add_node(struct sock *sk, struct hlist_head *list)
 static inline void sk_add_node_rcu(struct sock *sk, struct hlist_head *list)
 {
        sock_hold(sk);
-       hlist_add_head_rcu(&sk->sk_node, list);
+       if (IS_ENABLED(CONFIG_IPV6) && sk->sk_reuseport &&
+           sk->sk_family == AF_INET6)
+               hlist_add_tail_rcu(&sk->sk_node, list);
+       else
+               hlist_add_head_rcu(&sk->sk_node, list);
 }
 
 static inline void __sk_nulls_add_node_rcu(struct sock *sk, struct hlist_nulls_head *list)
@@ -922,6 +926,17 @@ void sk_stream_kill_queues(struct sock *sk);
 void sk_set_memalloc(struct sock *sk);
 void sk_clear_memalloc(struct sock *sk);
 
+void __sk_flush_backlog(struct sock *sk);
+
+static inline bool sk_flush_backlog(struct sock *sk)
+{
+       if (unlikely(READ_ONCE(sk->sk_backlog.tail))) {
+               __sk_flush_backlog(sk);
+               return true;
+       }
+       return false;
+}
+
 int sk_wait_data(struct sock *sk, long *timeo, const struct sk_buff *skb);
 
 struct request_sock_ops;
@@ -1406,11 +1421,16 @@ static inline void unlock_sock_fast(struct sock *sk, bool slow)
  * accesses from user process context.
  */
 
-static inline bool sock_owned_by_user(const struct sock *sk)
+static inline void sock_owned_by_me(const struct sock *sk)
 {
 #ifdef CONFIG_LOCKDEP
-       WARN_ON(!lockdep_sock_is_held(sk));
+       WARN_ON_ONCE(!lockdep_sock_is_held(sk) && debug_locks);
 #endif
+}
+
+static inline bool sock_owned_by_user(const struct sock *sk)
+{
+       sock_owned_by_me(sk);
        return sk->sk_lock.owned;
 }
 
@@ -1430,6 +1450,7 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority);
 
 struct sk_buff *sock_wmalloc(struct sock *sk, unsigned long size, int force,
                             gfp_t priority);
+void __sock_wfree(struct sk_buff *skb);
 void sock_wfree(struct sk_buff *skb);
 void skb_orphan_partial(struct sk_buff *skb);
 void sock_rfree(struct sk_buff *skb);
@@ -1936,11 +1957,19 @@ static inline unsigned long sock_wspace(struct sock *sk)
  */
 static inline void sk_set_bit(int nr, struct sock *sk)
 {
+       if ((nr == SOCKWQ_ASYNC_NOSPACE || nr == SOCKWQ_ASYNC_WAITDATA) &&
+           !sock_flag(sk, SOCK_FASYNC))
+               return;
+
        set_bit(nr, &sk->sk_wq_raw->flags);
 }
 
 static inline void sk_clear_bit(int nr, struct sock *sk)
 {
+       if ((nr == SOCKWQ_ASYNC_NOSPACE || nr == SOCKWQ_ASYNC_WAITDATA) &&
+           !sock_flag(sk, SOCK_FASYNC))
+               return;
+
        clear_bit(nr, &sk->sk_wq_raw->flags);
 }
 
index d451122..51d77b2 100644 (file)
@@ -54,6 +54,8 @@ struct switchdev_attr {
        struct net_device *orig_dev;
        enum switchdev_attr_id id;
        u32 flags;
+       void *complete_priv;
+       void (*complete)(struct net_device *dev, int err, void *priv);
        union {
                struct netdev_phys_item_id ppid;        /* PORT_PARENT_ID */
                u8 stp_state;                           /* PORT_STP_STATE */
@@ -75,6 +77,8 @@ struct switchdev_obj {
        struct net_device *orig_dev;
        enum switchdev_obj_id id;
        u32 flags;
+       void *complete_priv;
+       void (*complete)(struct net_device *dev, int err, void *priv);
 };
 
 /* SWITCHDEV_OBJ_ID_PORT_VLAN */
index dae96ba..e891835 100644 (file)
@@ -2,6 +2,7 @@
 #define __NET_TC_MIR_H
 
 #include <net/act_api.h>
+#include <linux/tc_act/tc_mirred.h>
 
 struct tcf_mirred {
        struct tcf_common       common;
@@ -14,4 +15,18 @@ struct tcf_mirred {
 #define to_mirred(a) \
        container_of(a->priv, struct tcf_mirred, common)
 
+static inline bool is_tcf_mirred_redirect(const struct tc_action *a)
+{
+#ifdef CONFIG_NET_CLS_ACT
+       if (a->ops && a->ops->type == TCA_ACT_MIRRED)
+               return to_mirred(a)->tcfm_eaction == TCA_EGRESS_REDIR;
+#endif
+       return false;
+}
+
+static inline int tcf_mirred_ifindex(const struct tc_action *a)
+{
+       return to_mirred(a)->tcfm_ifindex;
+}
+
 #endif /* __NET_TC_MIR_H */
index 7f2553d..4775a1b 100644 (file)
@@ -332,9 +332,8 @@ bool tcp_check_oom(struct sock *sk, int shift);
 extern struct proto tcp_prot;
 
 #define TCP_INC_STATS(net, field)      SNMP_INC_STATS((net)->mib.tcp_statistics, field)
-#define TCP_INC_STATS_BH(net, field)   SNMP_INC_STATS_BH((net)->mib.tcp_statistics, field)
+#define __TCP_INC_STATS(net, field)    __SNMP_INC_STATS((net)->mib.tcp_statistics, field)
 #define TCP_DEC_STATS(net, field)      SNMP_DEC_STATS((net)->mib.tcp_statistics, field)
-#define TCP_ADD_STATS_USER(net, field, val) SNMP_ADD_STATS_USER((net)->mib.tcp_statistics, field, val)
 #define TCP_ADD_STATS(net, field, val) SNMP_ADD_STATS((net)->mib.tcp_statistics, field, val)
 
 void tcp_tasklet_init(void);
@@ -762,14 +761,20 @@ struct tcp_skb_cb {
 
        __u8            ip_dsfield;     /* IPv4 tos or IPv6 dsfield     */
        __u8            txstamp_ack:1,  /* Record TX timestamp for ack? */
-                       unused:7;
+                       eor:1,          /* Is skb MSG_EOR marked? */
+                       unused:6;
        __u32           ack_seq;        /* Sequence number ACK'd        */
        union {
-               struct inet_skb_parm    h4;
+               struct {
+                       /* There is space for up to 20 bytes */
+               } tx;   /* only used for outgoing skbs */
+               union {
+                       struct inet_skb_parm    h4;
 #if IS_ENABLED(CONFIG_IPV6)
-               struct inet6_skb_parm   h6;
+                       struct inet6_skb_parm   h6;
 #endif
-       } header;       /* For incoming frames          */
+               } header;       /* For incoming skbs */
+       };
 };
 
 #define TCP_SKB_CB(__skb)      ((struct tcp_skb_cb *)&((__skb)->cb[0]))
@@ -809,6 +814,11 @@ static inline int tcp_skb_mss(const struct sk_buff *skb)
        return TCP_SKB_CB(skb)->tcp_gso_size;
 }
 
+static inline bool tcp_skb_can_collapse_to(const struct sk_buff *skb)
+{
+       return likely(!TCP_SKB_CB(skb)->eor);
+}
+
 /* Events passed to congestion control interface */
 enum tcp_ca_event {
        CA_EVENT_TX_START,      /* first transmit when no packets in flight */
@@ -1298,10 +1308,10 @@ bool tcp_oow_rate_limited(struct net *net, const struct sk_buff *skb,
 static inline void tcp_mib_init(struct net *net)
 {
        /* See RFC 2012 */
-       TCP_ADD_STATS_USER(net, TCP_MIB_RTOALGORITHM, 1);
-       TCP_ADD_STATS_USER(net, TCP_MIB_RTOMIN, TCP_RTO_MIN*1000/HZ);
-       TCP_ADD_STATS_USER(net, TCP_MIB_RTOMAX, TCP_RTO_MAX*1000/HZ);
-       TCP_ADD_STATS_USER(net, TCP_MIB_MAXCONN, -1);
+       TCP_ADD_STATS(net, TCP_MIB_RTOALGORITHM, 1);
+       TCP_ADD_STATS(net, TCP_MIB_RTOMIN, TCP_RTO_MIN*1000/HZ);
+       TCP_ADD_STATS(net, TCP_MIB_RTOMAX, TCP_RTO_MAX*1000/HZ);
+       TCP_ADD_STATS(net, TCP_MIB_MAXCONN, -1);
 }
 
 /* from STCP */
@@ -1744,7 +1754,7 @@ static inline __u32 cookie_init_sequence(const struct tcp_request_sock_ops *ops,
                                         __u16 *mss)
 {
        tcp_synq_overflow(sk);
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SYNCOOKIESSENT);
+       __NET_INC_STATS(sock_net(sk), LINUX_MIB_SYNCOOKIESSENT);
        return ops->cookie_init_seq(skb, mss);
 }
 #else
@@ -1853,7 +1863,7 @@ static inline void tcp_segs_in(struct tcp_sock *tp, const struct sk_buff *skb)
 static inline void tcp_listendrop(const struct sock *sk)
 {
        atomic_inc(&((struct sock *)sk)->sk_drops);
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
+       __NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENDROPS);
 }
 
 #endif /* _TCP_H */
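
The new eor bit is driven by sendmsg()'s MSG_EOR flag; a minimal userspace sketch of marking a record boundary (plain socket API, nothing new required):

    #include <sys/socket.h>

    /* tcp_skb_can_collapse_to() above returns false for the skb that
     * carries the end of this write, so its tail is never coalesced
     * with later data across retransmits and collapses.
     */
    static ssize_t send_record(int fd, const void *rec, size_t len)
    {
            return send(fd, rec, len, MSG_EOR);
    }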
index 2b1c345..276f976 100644 (file)
@@ -41,8 +41,7 @@ void ip6_datagram_recv_specific_ctl(struct sock *sk, struct msghdr *msg,
                                    struct sk_buff *skb);
 
 int ip6_datagram_send_ctl(struct net *net, struct sock *sk, struct msghdr *msg,
-                         struct flowi6 *fl6, struct ipv6_txoptions *opt,
-                         int *hlimit, int *tclass, int *dontfrag,
+                         struct flowi6 *fl6, struct ipcm6_cookie *ipc6,
                          struct sockcm_cookie *sockc);
 
 void ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp,
index 3c5a65e..ae07f37 100644 (file)
@@ -289,32 +289,32 @@ struct sock *udp6_lib_lookup_skb(struct sk_buff *skb,
 /*
  *     SNMP statistics for UDP and UDP-Lite
  */
-#define UDP_INC_STATS_USER(net, field, is_udplite)           do { \
-       if (is_udplite) SNMP_INC_STATS_USER((net)->mib.udplite_statistics, field);       \
-       else            SNMP_INC_STATS_USER((net)->mib.udp_statistics, field);  }  while(0)
-#define UDP_INC_STATS_BH(net, field, is_udplite)             do { \
-       if (is_udplite) SNMP_INC_STATS_BH((net)->mib.udplite_statistics, field);         \
-       else            SNMP_INC_STATS_BH((net)->mib.udp_statistics, field);    }  while(0)
-
-#define UDP6_INC_STATS_BH(net, field, is_udplite)          do { \
-       if (is_udplite) SNMP_INC_STATS_BH((net)->mib.udplite_stats_in6, field);\
-       else            SNMP_INC_STATS_BH((net)->mib.udp_stats_in6, field);  \
+#define UDP_INC_STATS(net, field, is_udplite)                do { \
+       if (is_udplite) SNMP_INC_STATS((net)->mib.udplite_statistics, field);       \
+       else            SNMP_INC_STATS((net)->mib.udp_statistics, field);  }  while(0)
+#define __UDP_INC_STATS(net, field, is_udplite)              do { \
+       if (is_udplite) __SNMP_INC_STATS((net)->mib.udplite_statistics, field);         \
+       else            __SNMP_INC_STATS((net)->mib.udp_statistics, field);    }  while(0)
+
+#define __UDP6_INC_STATS(net, field, is_udplite)           do { \
+       if (is_udplite) __SNMP_INC_STATS((net)->mib.udplite_stats_in6, field);\
+       else            __SNMP_INC_STATS((net)->mib.udp_stats_in6, field);  \
 } while(0)
-#define UDP6_INC_STATS_USER(net, field, __lite)                    do { \
-       if (__lite) SNMP_INC_STATS_USER((net)->mib.udplite_stats_in6, field);  \
-       else        SNMP_INC_STATS_USER((net)->mib.udp_stats_in6, field);      \
+#define UDP6_INC_STATS(net, field, __lite)                 do { \
+       if (__lite) SNMP_INC_STATS((net)->mib.udplite_stats_in6, field);  \
+       else        SNMP_INC_STATS((net)->mib.udp_stats_in6, field);      \
 } while(0)
 
 #if IS_ENABLED(CONFIG_IPV6)
-#define UDPX_INC_STATS_BH(sk, field)                                   \
+#define __UDPX_INC_STATS(sk, field)                                    \
 do {                                                                   \
        if ((sk)->sk_family == AF_INET)                                 \
-               UDP_INC_STATS_BH(sock_net(sk), field, 0);               \
+               __UDP_INC_STATS(sock_net(sk), field, 0);                \
        else                                                            \
-               UDP6_INC_STATS_BH(sock_net(sk), field, 0);              \
+               __UDP6_INC_STATS(sock_net(sk), field, 0);               \
 } while (0)
 #else
-#define UDPX_INC_STATS_BH(sk, field) UDP_INC_STATS_BH(sock_net(sk), field, 0)
+#define __UDPX_INC_STATS(sk, field) __UDP_INC_STATS(sock_net(sk), field, 0)
 #endif
 
 /* /proc */
index 673e9f9..b880316 100644 (file)
@@ -317,7 +317,9 @@ static inline netdev_features_t vxlan_features_check(struct sk_buff *skb,
            (skb->inner_protocol_type != ENCAP_TYPE_ETHER ||
             skb->inner_protocol != htons(ETH_P_TEB) ||
             (skb_inner_mac_header(skb) - skb_transport_header(skb) !=
-             sizeof(struct udphdr) + sizeof(struct vxlanhdr))))
+             sizeof(struct udphdr) + sizeof(struct vxlanhdr)) ||
+            (skb->ip_summed != CHECKSUM_NONE &&
+             !can_checksum_protocol(features, inner_eth_hdr(skb)->h_proto))))
                return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
 
        return features;
index d6f6e50..adfebd6 100644 (file)
 
 #ifdef CONFIG_XFRM_STATISTICS
 #define XFRM_INC_STATS(net, field)     SNMP_INC_STATS((net)->mib.xfrm_statistics, field)
-#define XFRM_INC_STATS_BH(net, field)  SNMP_INC_STATS_BH((net)->mib.xfrm_statistics, field)
-#define XFRM_INC_STATS_USER(net, field)        SNMP_INC_STATS_USER((net)-mib.xfrm_statistics, field)
 #else
 #define XFRM_INC_STATS(net, field)     ((void)(net))
-#define XFRM_INC_STATS_BH(net, field)  ((void)(net))
-#define XFRM_INC_STATS_USER(net, field)        ((void)(net))
 #endif
 
 
index cf8f9e7..a6b9370 100644 (file)
@@ -34,6 +34,7 @@
 #define _RDMA_IB_H
 
 #include <linux/types.h>
+#include <linux/sched.h>
 
 struct ib_addr {
        union {
@@ -86,4 +87,19 @@ struct sockaddr_ib {
        __u64                   sib_scope_id;
 };
 
+/*
+ * The IB interfaces that use write() as bi-directional ioctl() are
+ * fundamentally unsafe, since there are lots of ways to trigger "write()"
+ * calls from various contexts with elevated privileges. That includes the
+ * traditional suid executable error message writes, but also various kernel
+ * interfaces that can write to file descriptors.
+ *
+ * This function provides protection for the legacy API by restricting the
+ * calling context.
+ */
+static inline bool ib_safe_file_access(struct file *filp)
+{
+       return filp->f_cred == current_cred() && segment_eq(get_fs(), USER_DS);
+}
+
 #endif /* _RDMA_IB_H */
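
A hedged sketch of a legacy write()-as-ioctl handler adopting the helper (the handler name and body are illustrative only):

    static ssize_t ib_my_write(struct file *filp, const char __user *buf,
                               size_t count, loff_t *pos)
    {
            /* reject writes from setuid/kernel contexts, per the comment above */
            if (!ib_safe_file_access(filp))
                    return -EACCES;

            /* ... parse and execute the command encoded in buf ... */
            return count;
    }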
index fa341fc..f5842bc 100644 (file)
@@ -9,7 +9,7 @@
 #ifdef CONFIG_SND_HDA_I915
 int snd_hdac_set_codec_wakeup(struct hdac_bus *bus, bool enable);
 int snd_hdac_display_power(struct hdac_bus *bus, bool enable);
-int snd_hdac_get_display_clk(struct hdac_bus *bus);
+void snd_hdac_i915_set_bclk(struct hdac_bus *bus);
 int snd_hdac_sync_audio_rate(struct hdac_bus *bus, hda_nid_t nid, int rate);
 int snd_hdac_acomp_get_eld(struct hdac_bus *bus, hda_nid_t nid,
                           bool *audio_enabled, char *buffer, int max_bytes);
@@ -25,9 +25,8 @@ static inline int snd_hdac_display_power(struct hdac_bus *bus, bool enable)
 {
        return 0;
 }
-static inline int snd_hdac_get_display_clk(struct hdac_bus *bus)
+static inline void snd_hdac_i915_set_bclk(struct hdac_bus *bus)
 {
-       return 0;
 }
 static inline int snd_hdac_sync_audio_rate(struct hdac_bus *bus, hda_nid_t nid,
                                           int rate)
index 2767c55..ca64f0f 100644 (file)
@@ -17,6 +17,8 @@ int snd_hdac_regmap_add_vendor_verb(struct hdac_device *codec,
                                    unsigned int verb);
 int snd_hdac_regmap_read_raw(struct hdac_device *codec, unsigned int reg,
                             unsigned int *val);
+int snd_hdac_regmap_read_raw_uncached(struct hdac_device *codec,
+                                     unsigned int reg, unsigned int *val);
 int snd_hdac_regmap_write_raw(struct hdac_device *codec, unsigned int reg,
                              unsigned int val);
 int snd_hdac_regmap_update_raw(struct hdac_device *codec, unsigned int reg,
index 2622b33..6e0f5f0 100644 (file)
@@ -717,9 +717,13 @@ __SYSCALL(__NR_membarrier, sys_membarrier)
 __SYSCALL(__NR_mlock2, sys_mlock2)
 #define __NR_copy_file_range 285
 __SYSCALL(__NR_copy_file_range, sys_copy_file_range)
+#define __NR_preadv2 286
+__SYSCALL(__NR_preadv2, sys_preadv2)
+#define __NR_pwritev2 287
+__SYSCALL(__NR_pwritev2, sys_pwritev2)
 
 #undef __NR_syscalls
-#define __NR_syscalls 286
+#define __NR_syscalls 288
 
 /*
  * All syscalls below here should go away really,
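
Until libc grows wrappers, the new syscalls are reachable via syscall(2); a hedged userspace sketch, assuming the kernel's usual low/high split of the file offset for vectored I/O:

    #include <stdint.h>
    #include <sys/uio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static ssize_t my_preadv2(int fd, const struct iovec *iov, int iovcnt,
                              off_t off, int flags)
    {
            /* the position is passed as two arguments: low then high 32 bits */
            return syscall(__NR_preadv2, fd, iov, iovcnt,
                           (unsigned long)((uint64_t)off & 0xffffffff),
                           (unsigned long)((uint64_t)off >> 32), flags);
    }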
index b7b0fb1..406459b 100644 (file)
@@ -370,6 +370,8 @@ struct __sk_buff {
        __u32 cb[5];
        __u32 hash;
        __u32 tc_classid;
+       __u32 data;
+       __u32 data_end;
 };
 
 struct bpf_tunnel_key {
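
The two new fields enable direct packet access from cls/act programs; a sketch in restricted C (loader and section conventions assumed; the verifier support is in the kernel/bpf/verifier.c changes below):

    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/pkt_cls.h>

    int cls_drop_short(struct __sk_buff *skb)
    {
            void *data = (void *)(long)skb->data;
            void *data_end = (void *)(long)skb->data_end;
            struct ethhdr *eth = data;

            /* the verifier insists on this bounds check before any load */
            if (data + sizeof(*eth) > data_end)
                    return TC_ACT_SHOT;     /* too short for an Ethernet header */

            return eth->h_proto ? TC_ACT_OK : TC_ACT_SHOT;
    }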
index 6487317..52deccc 100644 (file)
@@ -10,6 +10,7 @@ enum {
        TCA_STATS_QUEUE,
        TCA_STATS_APP,
        TCA_STATS_RATE_EST64,
+       TCA_STATS_PAD,
        __TCA_STATS_MAX,
 };
 #define TCA_STATS_MAX (__TCA_STATS_MAX - 1)
index 0536eef..397d503 100644 (file)
@@ -134,6 +134,16 @@ struct bridge_vlan_info {
        __u16 vid;
 };
 
+struct bridge_vlan_xstats {
+       __u64 rx_bytes;
+       __u64 rx_packets;
+       __u64 tx_bytes;
+       __u64 tx_packets;
+       __u16 vid;
+       __u16 pad1;
+       __u32 pad2;
+};
+
 /* Bridge multicast database attributes
  * [MDBA_MDB] = {
  *     [MDBA_MDB_ENTRY] = {
@@ -233,4 +243,12 @@ enum {
 };
 #define MDBA_SET_ENTRY_MAX (__MDBA_SET_ENTRY_MAX - 1)
 
+/* Embedded inside LINK_XSTATS_TYPE_BRIDGE */
+enum {
+       BRIDGE_XSTATS_UNSPEC,
+       BRIDGE_XSTATS_VLAN,
+       __BRIDGE_XSTATS_MAX
+};
+#define BRIDGE_XSTATS_MAX (__BRIDGE_XSTATS_MAX - 1)
+
 #endif /* _UAPI_LINUX_IF_BRIDGE_H */
index ba69d44..d2d7fd4 100644 (file)
@@ -271,6 +271,8 @@ enum {
        IFLA_BR_NF_CALL_IP6TABLES,
        IFLA_BR_NF_CALL_ARPTABLES,
        IFLA_BR_VLAN_DEFAULT_PVID,
+       IFLA_BR_PAD,
+       IFLA_BR_VLAN_STATS_ENABLED,
        __IFLA_BR_MAX,
 };
 
@@ -313,6 +315,7 @@ enum {
        IFLA_BRPORT_HOLD_TIMER,
        IFLA_BRPORT_FLUSH,
        IFLA_BRPORT_MULTICAST_ROUTER,
+       IFLA_BRPORT_PAD,
        __IFLA_BRPORT_MAX
 };
 #define IFLA_BRPORT_MAX (__IFLA_BRPORT_MAX - 1)
@@ -432,6 +435,7 @@ enum {
        IFLA_MACSEC_SCB,
        IFLA_MACSEC_REPLAY_PROTECT,
        IFLA_MACSEC_VALIDATION,
+       IFLA_MACSEC_PAD,
        __IFLA_MACSEC_MAX,
 };
 
@@ -517,6 +521,14 @@ enum {
 };
 #define IFLA_GENEVE_MAX        (__IFLA_GENEVE_MAX - 1)
 
+/* PPP section */
+enum {
+       IFLA_PPP_UNSPEC,
+       IFLA_PPP_DEV_FD,
+       __IFLA_PPP_MAX
+};
+#define IFLA_PPP_MAX (__IFLA_PPP_MAX - 1)
+
 /* Bonding section */
 
 enum {
@@ -666,6 +678,7 @@ enum {
        IFLA_VF_STATS_TX_BYTES,
        IFLA_VF_STATS_BROADCAST,
        IFLA_VF_STATS_MULTICAST,
+       IFLA_VF_STATS_PAD,
        __IFLA_VF_STATS_MAX,
 };
 
@@ -798,6 +811,7 @@ struct if_stats_msg {
 enum {
        IFLA_STATS_UNSPEC, /* also used as 64bit pad attribute */
        IFLA_STATS_LINK_64,
+       IFLA_STATS_LINK_XSTATS,
        __IFLA_STATS_MAX,
 };
 
@@ -805,4 +819,16 @@ enum {
 
 #define IFLA_STATS_FILTER_BIT(ATTR)    (1 << (ATTR - 1))
 
+/* These are embedded into IFLA_STATS_LINK_XSTATS:
+ * [IFLA_STATS_LINK_XSTATS]
+ * -> [LINK_XSTATS_TYPE_xxx]
+ *    -> [rtnl link type specific attributes]
+ */
+enum {
+       LINK_XSTATS_TYPE_UNSPEC,
+       LINK_XSTATS_TYPE_BRIDGE,
+       __LINK_XSTATS_TYPE_MAX
+};
+#define LINK_XSTATS_TYPE_MAX (__LINK_XSTATS_TYPE_MAX - 1)
+
 #endif /* _UAPI_LINUX_IF_LINK_H */
index 26b0d1e..897a949 100644 (file)
@@ -19,8 +19,8 @@
 
 #define MACSEC_MAX_KEY_LEN 128
 
-#define DEFAULT_CIPHER_ID   0x0080020001000001ULL
-#define DEFAULT_CIPHER_ALT  0x0080C20001000001ULL
+#define MACSEC_DEFAULT_CIPHER_ID   0x0080020001000001ULL
+#define MACSEC_DEFAULT_CIPHER_ALT  0x0080C20001000001ULL
 
 #define MACSEC_MIN_ICV_LEN 8
 #define MACSEC_MAX_ICV_LEN 32
@@ -55,6 +55,7 @@ enum macsec_secy_attrs {
        MACSEC_SECY_ATTR_INC_SCI,
        MACSEC_SECY_ATTR_ES,
        MACSEC_SECY_ATTR_SCB,
+       MACSEC_SECY_ATTR_PAD,
        __MACSEC_SECY_ATTR_END,
        NUM_MACSEC_SECY_ATTR = __MACSEC_SECY_ATTR_END,
        MACSEC_SECY_ATTR_MAX = __MACSEC_SECY_ATTR_END - 1,
@@ -66,6 +67,7 @@ enum macsec_rxsc_attrs {
        MACSEC_RXSC_ATTR_ACTIVE,  /* config/dump, u8 0..1 */
        MACSEC_RXSC_ATTR_SA_LIST, /* dump, nested */
        MACSEC_RXSC_ATTR_STATS,   /* dump, nested, macsec_rxsc_stats_attr */
+       MACSEC_RXSC_ATTR_PAD,
        __MACSEC_RXSC_ATTR_END,
        NUM_MACSEC_RXSC_ATTR = __MACSEC_RXSC_ATTR_END,
        MACSEC_RXSC_ATTR_MAX = __MACSEC_RXSC_ATTR_END - 1,
@@ -79,6 +81,7 @@ enum macsec_sa_attrs {
        MACSEC_SA_ATTR_KEY,    /* config, data */
        MACSEC_SA_ATTR_KEYID,  /* config/dump, u64 */
        MACSEC_SA_ATTR_STATS,  /* dump, nested, macsec_sa_stats_attr */
+       MACSEC_SA_ATTR_PAD,
        __MACSEC_SA_ATTR_END,
        NUM_MACSEC_SA_ATTR = __MACSEC_SA_ATTR_END,
        MACSEC_SA_ATTR_MAX = __MACSEC_SA_ATTR_END - 1,
@@ -110,6 +113,7 @@ enum macsec_rxsc_stats_attr {
        MACSEC_RXSC_STATS_ATTR_IN_PKTS_NOT_VALID,
        MACSEC_RXSC_STATS_ATTR_IN_PKTS_NOT_USING_SA,
        MACSEC_RXSC_STATS_ATTR_IN_PKTS_UNUSED_SA,
+       MACSEC_RXSC_STATS_ATTR_PAD,
        __MACSEC_RXSC_STATS_ATTR_END,
        NUM_MACSEC_RXSC_STATS_ATTR = __MACSEC_RXSC_STATS_ATTR_END,
        MACSEC_RXSC_STATS_ATTR_MAX = __MACSEC_RXSC_STATS_ATTR_END - 1,
@@ -137,6 +141,7 @@ enum macsec_txsc_stats_attr {
        MACSEC_TXSC_STATS_ATTR_OUT_PKTS_ENCRYPTED,
        MACSEC_TXSC_STATS_ATTR_OUT_OCTETS_PROTECTED,
        MACSEC_TXSC_STATS_ATTR_OUT_OCTETS_ENCRYPTED,
+       MACSEC_TXSC_STATS_ATTR_PAD,
        __MACSEC_TXSC_STATS_ATTR_END,
        NUM_MACSEC_TXSC_STATS_ATTR = __MACSEC_TXSC_STATS_ATTR_END,
        MACSEC_TXSC_STATS_ATTR_MAX = __MACSEC_TXSC_STATS_ATTR_END - 1,
@@ -153,6 +158,7 @@ enum macsec_secy_stats_attr {
        MACSEC_SECY_STATS_ATTR_IN_PKTS_UNKNOWN_SCI,
        MACSEC_SECY_STATS_ATTR_IN_PKTS_NO_SCI,
        MACSEC_SECY_STATS_ATTR_IN_PKTS_OVERRUN,
+       MACSEC_SECY_STATS_ATTR_PAD,
        __MACSEC_SECY_STATS_ATTR_END,
        NUM_MACSEC_SECY_STATS_ATTR = __MACSEC_SECY_STATS_ATTR_END,
        MACSEC_SECY_STATS_ATTR_MAX = __MACSEC_SECY_STATS_ATTR_END - 1,
index abde7bb..948c0a9 100644 (file)
@@ -14,6 +14,8 @@ enum {
        ILA_ATTR_LOCATOR_MATCH,                 /* u64 */
        ILA_ATTR_IFINDEX,                       /* s32 */
        ILA_ATTR_DIR,                           /* u32 */
+       ILA_ATTR_PAD,
+       ILA_ATTR_CSUM_MODE,                     /* u8 */
 
        __ILA_ATTR_MAX,
 };
@@ -34,4 +36,10 @@ enum {
 #define ILA_DIR_IN     (1 << 0)
 #define ILA_DIR_OUT    (1 << 1)
 
+enum {
+       ILA_CSUM_ADJUST_TRANSPORT,
+       ILA_CSUM_NEUTRAL_MAP,
+       ILA_CSUM_NO_ACTION,
+};
+
 #endif /* _UAPI_LINUX_ILA_H */
index f5f3629..a166437 100644 (file)
@@ -115,9 +115,11 @@ enum {
        INET_DIAG_SKV6ONLY,
        INET_DIAG_LOCALS,
        INET_DIAG_PEERS,
+       INET_DIAG_PAD,
+       __INET_DIAG_MAX,
 };
 
-#define INET_DIAG_MAX INET_DIAG_SKV6ONLY
+#define INET_DIAG_MAX (__INET_DIAG_MAX - 1)
 
 /* INET_DIAG_MEM */
 
index 391395c..22d6989 100644 (file)
@@ -435,6 +435,7 @@ enum {
        IPVS_STATS_ATTR_OUTPPS,         /* current out packet rate */
        IPVS_STATS_ATTR_INBPS,          /* current in byte rate */
        IPVS_STATS_ATTR_OUTBPS,         /* current out byte rate */
+       IPVS_STATS_ATTR_PAD,
        __IPVS_STATS_ATTR_MAX,
 };
 
index 3386a99..4bd27d0 100644 (file)
@@ -143,6 +143,7 @@ enum {
        L2TP_ATTR_RX_SEQ_DISCARDS,      /* u64 */
        L2TP_ATTR_RX_OOS_PACKETS,       /* u64 */
        L2TP_ATTR_RX_ERRORS,            /* u64 */
+       L2TP_ATTR_STATS_PAD,
        __L2TP_ATTR_STATS_MAX,
 };
 
index 1df655d..2c55dd1 100644 (file)
@@ -2197,6 +2197,8 @@ enum nl80211_attrs {
 
        NL80211_ATTR_STA_SUPPORT_P2P_PS,
 
+       NL80211_ATTR_PAD,
+
        /* add attributes here, update the policy in nl80211.c */
 
        __NL80211_ATTR_AFTER_LAST,
@@ -3023,6 +3025,7 @@ enum nl80211_survey_info {
        NL80211_SURVEY_INFO_TIME_RX,
        NL80211_SURVEY_INFO_TIME_TX,
        NL80211_SURVEY_INFO_TIME_SCAN,
+       NL80211_SURVEY_INFO_PAD,
 
        /* keep last */
        __NL80211_SURVEY_INFO_AFTER_LAST,
@@ -3468,6 +3471,7 @@ enum nl80211_bss {
        NL80211_BSS_BEACON_TSF,
        NL80211_BSS_PRESP_DATA,
        NL80211_BSS_LAST_SEEN_BOOTTIME,
+       NL80211_BSS_PAD,
 
        /* keep last */
        __NL80211_BSS_AFTER_LAST,
index 0358f94..bb0d515 100644 (file)
@@ -84,6 +84,7 @@ enum ovs_datapath_attr {
        OVS_DP_ATTR_STATS,              /* struct ovs_dp_stats */
        OVS_DP_ATTR_MEGAFLOW_STATS,     /* struct ovs_dp_megaflow_stats */
        OVS_DP_ATTR_USER_FEATURES,      /* OVS_DP_F_*  */
+       OVS_DP_ATTR_PAD,
        __OVS_DP_ATTR_MAX
 };
 
@@ -253,6 +254,7 @@ enum ovs_vport_attr {
        OVS_VPORT_ATTR_UPCALL_PID, /* array of u32 Netlink socket PIDs for */
                                /* receiving upcalls */
        OVS_VPORT_ATTR_STATS,   /* struct ovs_vport_stats */
+       OVS_VPORT_ATTR_PAD,
        __OVS_VPORT_ATTR_MAX
 };
 
@@ -519,6 +521,7 @@ enum ovs_flow_attr {
                                  * logging should be suppressed. */
        OVS_FLOW_ATTR_UFID,      /* Variable length unique flow identifier. */
        OVS_FLOW_ATTR_UFID_FLAGS,/* u32 of OVS_UFID_F_*. */
+       OVS_FLOW_ATTR_PAD,
        __OVS_FLOW_ATTR_MAX
 };
 
index c43c5f7..8466090 100644 (file)
@@ -66,6 +66,7 @@ enum {
        TCA_ACT_OPTIONS,
        TCA_ACT_INDEX,
        TCA_ACT_STATS,
+       TCA_ACT_PAD,
        __TCA_ACT_MAX
 };
 
@@ -173,6 +174,7 @@ enum {
        TCA_U32_PCNT,
        TCA_U32_MARK,
        TCA_U32_FLAGS,
+       TCA_U32_PAD,
        __TCA_U32_MAX
 };
 
index 8cb18b4..2382eed 100644 (file)
@@ -179,6 +179,7 @@ enum {
        TCA_TBF_PRATE64,
        TCA_TBF_BURST,
        TCA_TBF_PBURST,
+       TCA_TBF_PAD,
        __TCA_TBF_MAX,
 };
 
@@ -368,6 +369,7 @@ enum {
        TCA_HTB_DIRECT_QLEN,
        TCA_HTB_RATE64,
        TCA_HTB_CEIL64,
+       TCA_HTB_PAD,
        __TCA_HTB_MAX,
 };
 
@@ -531,6 +533,7 @@ enum {
        TCA_NETEM_RATE,
        TCA_NETEM_ECN,
        TCA_NETEM_RATE64,
+       TCA_NETEM_PAD,
        __TCA_NETEM_MAX,
 };
 
@@ -715,6 +718,8 @@ enum {
        TCA_FQ_CODEL_FLOWS,
        TCA_FQ_CODEL_QUANTUM,
        TCA_FQ_CODEL_CE_THRESHOLD,
+       TCA_FQ_CODEL_DROP_BATCH_SIZE,
+       TCA_FQ_CODEL_MEMORY_LIMIT,
        __TCA_FQ_CODEL_MAX
 };
 
@@ -739,6 +744,8 @@ struct tc_fq_codel_qd_stats {
        __u32   new_flows_len;  /* count of flows in new list */
        __u32   old_flows_len;  /* count of flows in old list */
        __u32   ce_mark;        /* packets above ce_threshold */
+       __u32   memory_usage;   /* in bytes */
+       __u32   drop_overmemory;
 };
 
 struct tc_fq_codel_cl_stats {
diff --git a/include/uapi/linux/qrtr.h b/include/uapi/linux/qrtr.h
new file mode 100644 (file)
index 0000000..66c0748
--- /dev/null
@@ -0,0 +1,12 @@
+#ifndef _LINUX_QRTR_H
+#define _LINUX_QRTR_H
+
+#include <linux/socket.h>
+
+struct sockaddr_qrtr {
+       __kernel_sa_family_t sq_family;
+       __u32 sq_node;
+       __u32 sq_port;
+};
+
+#endif /* _LINUX_QRTR_H */
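
A hypothetical userspace sketch of using the new address family (AF_QIPCRTR is assumed to be the matching socket(2) family constant):

    #include <stdint.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <linux/qrtr.h>

    static int qrtr_open(uint32_t node, uint32_t port)
    {
            struct sockaddr_qrtr sq = {
                    .sq_family = AF_QIPCRTR,
                    .sq_node   = node,
                    .sq_port   = port,
            };
            int fd = socket(AF_QIPCRTR, SOCK_DGRAM, 0);

            if (fd < 0)
                    return -1;
            if (bind(fd, (struct sockaddr *)&sq, sizeof(sq)) < 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }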
index 38baddb..4d2489e 100644 (file)
@@ -191,6 +191,7 @@ enum {
        QUOTA_NL_A_DEV_MAJOR,
        QUOTA_NL_A_DEV_MINOR,
        QUOTA_NL_A_CAUSED_ID,
+       QUOTA_NL_A_PAD,
        __QUOTA_NL_A_MAX,
 };
 #define QUOTA_NL_A_MAX (__QUOTA_NL_A_MAX - 1)
index a94e0b6..262f037 100644 (file)
@@ -542,6 +542,7 @@ enum {
        TCA_FCNT,
        TCA_STATS2,
        TCA_STAB,
+       TCA_PAD,
        __TCA_MAX
 };
 
index 07f17cc..063d9d4 100644 (file)
@@ -26,6 +26,7 @@ enum {
        TCA_ACT_BPF_OPS,
        TCA_ACT_BPF_FD,
        TCA_ACT_BPF_NAME,
+       TCA_ACT_BPF_PAD,
        __TCA_ACT_BPF_MAX,
 };
 #define TCA_ACT_BPF_MAX (__TCA_ACT_BPF_MAX - 1)
index 994b097..62a5e94 100644 (file)
@@ -15,6 +15,7 @@ enum {
        TCA_CONNMARK_UNSPEC,
        TCA_CONNMARK_PARMS,
        TCA_CONNMARK_TM,
+       TCA_CONNMARK_PAD,
        __TCA_CONNMARK_MAX
 };
 #define TCA_CONNMARK_MAX (__TCA_CONNMARK_MAX - 1)
index a047c49..8ac8041 100644 (file)
@@ -10,6 +10,7 @@ enum {
        TCA_CSUM_UNSPEC,
        TCA_CSUM_PARMS,
        TCA_CSUM_TM,
+       TCA_CSUM_PAD,
        __TCA_CSUM_MAX
 };
 #define TCA_CSUM_MAX (__TCA_CSUM_MAX - 1)
index 17dddb4..d2a3abb 100644 (file)
@@ -12,6 +12,7 @@ enum {
        TCA_DEF_TM,
        TCA_DEF_PARMS,
        TCA_DEF_DATA,
+       TCA_DEF_PAD,
        __TCA_DEF_MAX
 };
 #define TCA_DEF_MAX (__TCA_DEF_MAX - 1)
index f7bf94e..70b536a 100644 (file)
@@ -25,6 +25,7 @@ enum {
        TCA_GACT_TM,
        TCA_GACT_PARMS,
        TCA_GACT_PROB,
+       TCA_GACT_PAD,
        __TCA_GACT_MAX
 };
 #define TCA_GACT_MAX (__TCA_GACT_MAX - 1)
index d648ff6..4ece02a 100644 (file)
@@ -23,6 +23,7 @@ enum {
        TCA_IFE_SMAC,
        TCA_IFE_TYPE,
        TCA_IFE_METALST,
+       TCA_IFE_PAD,
        __TCA_IFE_MAX
 };
 #define TCA_IFE_MAX (__TCA_IFE_MAX - 1)
index 130aaad..7c6e155 100644 (file)
@@ -14,6 +14,7 @@ enum {
        TCA_IPT_CNT,
        TCA_IPT_TM,
        TCA_IPT_TARG,
+       TCA_IPT_PAD,
        __TCA_IPT_MAX
 };
 #define TCA_IPT_MAX (__TCA_IPT_MAX - 1)
index 7561750..3d7a2b3 100644 (file)
@@ -20,6 +20,7 @@ enum {
        TCA_MIRRED_UNSPEC,
        TCA_MIRRED_TM,
        TCA_MIRRED_PARMS,
+       TCA_MIRRED_PAD,
        __TCA_MIRRED_MAX
 };
 #define TCA_MIRRED_MAX (__TCA_MIRRED_MAX - 1)
index 6663aeb..923457c 100644 (file)
@@ -10,6 +10,7 @@ enum {
        TCA_NAT_UNSPEC,
        TCA_NAT_PARMS,
        TCA_NAT_TM,
+       TCA_NAT_PAD,
        __TCA_NAT_MAX
 };
 #define TCA_NAT_MAX (__TCA_NAT_MAX - 1)
index 716cfab..6389959 100644 (file)
@@ -10,6 +10,7 @@ enum {
        TCA_PEDIT_UNSPEC,
        TCA_PEDIT_TM,
        TCA_PEDIT_PARMS,
+       TCA_PEDIT_PAD,
        __TCA_PEDIT_MAX
 };
 #define TCA_PEDIT_MAX (__TCA_PEDIT_MAX - 1)
index 7a2e910..fecb5cc 100644 (file)
@@ -39,6 +39,7 @@ enum {
        TCA_SKBEDIT_PRIORITY,
        TCA_SKBEDIT_QUEUE_MAPPING,
        TCA_SKBEDIT_MARK,
+       TCA_SKBEDIT_PAD,
        __TCA_SKBEDIT_MAX
 };
 #define TCA_SKBEDIT_MAX (__TCA_SKBEDIT_MAX - 1)
index f7b8d44..31151ff 100644 (file)
@@ -28,6 +28,7 @@ enum {
        TCA_VLAN_PARMS,
        TCA_VLAN_PUSH_VLAN_ID,
        TCA_VLAN_PUSH_VLAN_PROTOCOL,
+       TCA_VLAN_PAD,
        __TCA_VLAN_MAX,
 };
 #define TCA_VLAN_MAX (__TCA_VLAN_MAX - 1)
index c039f1d..086168e 100644 (file)
 
 #define V4L2_DV_BT_CEA_3840X2160P24 { \
        .type = V4L2_DV_BT_656_1120, \
-       V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
+       V4L2_INIT_BT_TIMINGS(3840, 2160, 0, \
+               V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
                297000000, 1276, 88, 296, 8, 10, 72, 0, 0, 0, \
                V4L2_DV_BT_STD_CEA861, \
                V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
 
 #define V4L2_DV_BT_CEA_3840X2160P25 { \
        .type = V4L2_DV_BT_656_1120, \
-       V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
+       V4L2_INIT_BT_TIMINGS(3840, 2160, 0, \
+               V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
                297000000, 1056, 88, 296, 8, 10, 72, 0, 0, 0, \
                V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \
 }
 
 #define V4L2_DV_BT_CEA_3840X2160P30 { \
        .type = V4L2_DV_BT_656_1120, \
-       V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
+       V4L2_INIT_BT_TIMINGS(3840, 2160, 0, \
+               V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
                297000000, 176, 88, 296, 8, 10, 72, 0, 0, 0, \
                V4L2_DV_BT_STD_CEA861, \
                V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
 
 #define V4L2_DV_BT_CEA_3840X2160P50 { \
        .type = V4L2_DV_BT_656_1120, \
-       V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
+       V4L2_INIT_BT_TIMINGS(3840, 2160, 0, \
+               V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
                594000000, 1056, 88, 296, 8, 10, 72, 0, 0, 0, \
                V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \
 }
 
 #define V4L2_DV_BT_CEA_3840X2160P60 { \
        .type = V4L2_DV_BT_656_1120, \
-       V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
+       V4L2_INIT_BT_TIMINGS(3840, 2160, 0, \
+               V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
                594000000, 176, 88, 296, 8, 10, 72, 0, 0, 0, \
                V4L2_DV_BT_STD_CEA861, \
                V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
 
 #define V4L2_DV_BT_CEA_4096X2160P24 { \
        .type = V4L2_DV_BT_656_1120, \
-       V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
+       V4L2_INIT_BT_TIMINGS(4096, 2160, 0, \
+               V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
                297000000, 1020, 88, 296, 8, 10, 72, 0, 0, 0, \
                V4L2_DV_BT_STD_CEA861, \
                V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
 
 #define V4L2_DV_BT_CEA_4096X2160P25 { \
        .type = V4L2_DV_BT_656_1120, \
-       V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
+       V4L2_INIT_BT_TIMINGS(4096, 2160, 0, \
+               V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
                297000000, 968, 88, 128, 8, 10, 72, 0, 0, 0, \
                V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \
 }
 
 #define V4L2_DV_BT_CEA_4096X2160P30 { \
        .type = V4L2_DV_BT_656_1120, \
-       V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
+       V4L2_INIT_BT_TIMINGS(4096, 2160, 0, \
+               V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
                297000000, 88, 88, 128, 8, 10, 72, 0, 0, 0, \
                V4L2_DV_BT_STD_CEA861, \
                V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
 
 #define V4L2_DV_BT_CEA_4096X2160P50 { \
        .type = V4L2_DV_BT_656_1120, \
-       V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
+       V4L2_INIT_BT_TIMINGS(4096, 2160, 0, \
+               V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
                594000000, 968, 88, 128, 8, 10, 72, 0, 0, 0, \
                V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \
 }
 
 #define V4L2_DV_BT_CEA_4096X2160P60 { \
        .type = V4L2_DV_BT_656_1120, \
-       V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
+       V4L2_INIT_BT_TIMINGS(4096, 2160, 0, \
+               V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
                594000000, 88, 88, 128, 8, 10, 72, 0, 0, 0, \
                V4L2_DV_BT_STD_CEA861, \
                V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
index e4248fe..d781b07 100644 (file)
@@ -794,6 +794,11 @@ void __weak bpf_int_jit_compile(struct bpf_prog *prog)
 {
 }
 
+bool __weak bpf_helper_changes_skb_data(void *func)
+{
+       return false;
+}
+
 /* To execute LD_ABS/LD_IND instructions __bpf_prog_run() may call
  * skb_copy_bits(), so provide a weak definition of it for NET-less config.
  */
index f2ece3c..8f94ca1 100644 (file)
@@ -31,10 +31,10 @@ static void *bpf_any_get(void *raw, enum bpf_type type)
 {
        switch (type) {
        case BPF_TYPE_PROG:
-               atomic_inc(&((struct bpf_prog *)raw)->aux->refcnt);
+               raw = bpf_prog_inc(raw);
                break;
        case BPF_TYPE_MAP:
-               bpf_map_inc(raw, true);
+               raw = bpf_map_inc(raw, true);
                break;
        default:
                WARN_ON_ONCE(1);
@@ -297,7 +297,8 @@ static void *bpf_obj_do_get(const struct filename *pathname,
                goto out;
 
        raw = bpf_any_get(inode->i_private, *type);
-       touch_atime(&path);
+       if (!IS_ERR(raw))
+               touch_atime(&path);
 
        path_put(&path);
        return raw;
index adc5e4b..cf5e9f7 100644 (file)
@@ -218,11 +218,18 @@ struct bpf_map *__bpf_map_get(struct fd f)
        return f.file->private_data;
 }
 
-void bpf_map_inc(struct bpf_map *map, bool uref)
+/* prog's and map's refcnt limit */
+#define BPF_MAX_REFCNT 32768
+
+struct bpf_map *bpf_map_inc(struct bpf_map *map, bool uref)
 {
-       atomic_inc(&map->refcnt);
+       if (atomic_inc_return(&map->refcnt) > BPF_MAX_REFCNT) {
+               atomic_dec(&map->refcnt);
+               return ERR_PTR(-EBUSY);
+       }
        if (uref)
                atomic_inc(&map->usercnt);
+       return map;
 }
 
 struct bpf_map *bpf_map_get_with_uref(u32 ufd)
@@ -234,7 +241,7 @@ struct bpf_map *bpf_map_get_with_uref(u32 ufd)
        if (IS_ERR(map))
                return map;
 
-       bpf_map_inc(map, true);
+       map = bpf_map_inc(map, true);
        fdput(f);
 
        return map;
@@ -658,6 +665,15 @@ static struct bpf_prog *__bpf_prog_get(struct fd f)
        return f.file->private_data;
 }
 
+struct bpf_prog *bpf_prog_inc(struct bpf_prog *prog)
+{
+       if (atomic_inc_return(&prog->aux->refcnt) > BPF_MAX_REFCNT) {
+               atomic_dec(&prog->aux->refcnt);
+               return ERR_PTR(-EBUSY);
+       }
+       return prog;
+}
+
 /* called by sockets/tracing/seccomp before attaching program to an event
  * pairs with bpf_prog_put()
  */
@@ -670,7 +686,7 @@ struct bpf_prog *bpf_prog_get(u32 ufd)
        if (IS_ERR(prog))
                return prog;
 
-       atomic_inc(&prog->aux->refcnt);
+       prog = bpf_prog_inc(prog);
        fdput(f);
 
        return prog;
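
Both prog and map references now saturate at BPF_MAX_REFCNT instead of silently wrapping a 32-bit counter; the pattern in isolation, as a minimal sketch:

    /* fail the acquisition rather than let atomic_inc() overflow */
    static inline int my_ref_get(atomic_t *refcnt)
    {
            if (atomic_inc_return(refcnt) > 32768 /* BPF_MAX_REFCNT */) {
                    atomic_dec(refcnt);
                    return -EBUSY;
            }
            return 0;
    }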
index 6345623..84bff68 100644 (file)
@@ -1,4 +1,5 @@
 /* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
+ * Copyright (c) 2016 Facebook
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of version 2 of the GNU General Public
@@ -136,13 +137,32 @@ enum bpf_reg_type {
        FRAME_PTR,               /* reg == frame_pointer */
        PTR_TO_STACK,            /* reg == frame_pointer + imm */
        CONST_IMM,               /* constant integer value */
+
+       /* PTR_TO_PACKET represents:
+        * skb->data
+        * skb->data + imm
+        * skb->data + (u16) var
+        * skb->data + (u16) var + imm
+        * if (range > 0) then [ptr, ptr + range - off) is safe to access
+        * if (id > 0) means that some 'var' was added
+        * if (off > 0) means that 'imm' was added
+        */
+       PTR_TO_PACKET,
+       PTR_TO_PACKET_END,       /* skb->data + headlen */
 };
 
 struct reg_state {
        enum bpf_reg_type type;
        union {
-               /* valid when type == CONST_IMM | PTR_TO_STACK */
-               long imm;
+               /* valid when type == CONST_IMM | PTR_TO_STACK | UNKNOWN_VALUE */
+               s64 imm;
+
+               /* valid when type == PTR_TO_PACKET* */
+               struct {
+                       u32 id;
+                       u16 off;
+                       u16 range;
+               };
 
                /* valid when type == CONST_PTR_TO_MAP | PTR_TO_MAP_VALUE |
                 *   PTR_TO_MAP_VALUE_OR_NULL
@@ -247,40 +267,39 @@ static const char * const reg_type_str[] = {
        [FRAME_PTR]             = "fp",
        [PTR_TO_STACK]          = "fp",
        [CONST_IMM]             = "imm",
+       [PTR_TO_PACKET]         = "pkt",
+       [PTR_TO_PACKET_END]     = "pkt_end",
 };
 
-static const struct {
-       int map_type;
-       int func_id;
-} func_limit[] = {
-       {BPF_MAP_TYPE_PROG_ARRAY, BPF_FUNC_tail_call},
-       {BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_read},
-       {BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_FUNC_perf_event_output},
-       {BPF_MAP_TYPE_STACK_TRACE, BPF_FUNC_get_stackid},
-};
-
-static void print_verifier_state(struct verifier_env *env)
+static void print_verifier_state(struct verifier_state *state)
 {
+       struct reg_state *reg;
        enum bpf_reg_type t;
        int i;
 
        for (i = 0; i < MAX_BPF_REG; i++) {
-               t = env->cur_state.regs[i].type;
+               reg = &state->regs[i];
+               t = reg->type;
                if (t == NOT_INIT)
                        continue;
                verbose(" R%d=%s", i, reg_type_str[t]);
                if (t == CONST_IMM || t == PTR_TO_STACK)
-                       verbose("%ld", env->cur_state.regs[i].imm);
+                       verbose("%lld", reg->imm);
+               else if (t == PTR_TO_PACKET)
+                       verbose("(id=%d,off=%d,r=%d)",
+                               reg->id, reg->off, reg->range);
+               else if (t == UNKNOWN_VALUE && reg->imm)
+                       verbose("%lld", reg->imm);
                else if (t == CONST_PTR_TO_MAP || t == PTR_TO_MAP_VALUE ||
                         t == PTR_TO_MAP_VALUE_OR_NULL)
                        verbose("(ks=%d,vs=%d)",
-                               env->cur_state.regs[i].map_ptr->key_size,
-                               env->cur_state.regs[i].map_ptr->value_size);
+                               reg->map_ptr->key_size,
+                               reg->map_ptr->value_size);
        }
        for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) {
-               if (env->cur_state.stack_slot_type[i] == STACK_SPILL)
+               if (state->stack_slot_type[i] == STACK_SPILL)
                        verbose(" fp%d=%s", -MAX_BPF_STACK + i,
-                               reg_type_str[env->cur_state.spilled_regs[i / BPF_REG_SIZE].type]);
+                               reg_type_str[state->spilled_regs[i / BPF_REG_SIZE].type]);
        }
        verbose("\n");
 }
@@ -556,6 +575,8 @@ static bool is_spillable_regtype(enum bpf_reg_type type)
        case PTR_TO_MAP_VALUE_OR_NULL:
        case PTR_TO_STACK:
        case PTR_TO_CTX:
+       case PTR_TO_PACKET:
+       case PTR_TO_PACKET_END:
        case FRAME_PTR:
        case CONST_PTR_TO_MAP:
                return true;
@@ -655,6 +676,27 @@ static int check_map_access(struct verifier_env *env, u32 regno, int off,
        return 0;
 }
 
+#define MAX_PACKET_OFF 0xffff
+
+static int check_packet_access(struct verifier_env *env, u32 regno, int off,
+                              int size)
+{
+       struct reg_state *regs = env->cur_state.regs;
+       struct reg_state *reg = &regs[regno];
+       int linear_size = (int) reg->range - (int) reg->off;
+
+       if (linear_size < 0 || linear_size >= MAX_PACKET_OFF) {
+               verbose("verifier bug\n");
+               return -EFAULT;
+       }
+       if (off < 0 || off + size > linear_size) {
+               verbose("invalid access to packet, off=%d size=%d, allowed=%d\n",
+                       off, size, linear_size);
+               return -EACCES;
+       }
+       return 0;
+}
+
 /* check access to 'struct bpf_context' fields */
 static int check_ctx_access(struct verifier_env *env, int off, int size,
                            enum bpf_access_type t)
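
A worked instance of the bounds arithmetic in check_packet_access(), with illustrative register state:

    /* r3 = pkt(id=n, off=2, range=14)  ->  linear_size = 14 - 2 = 12
     * 4-byte load at off=8:   8 + 4 <= 12  ->  allowed
     * 4-byte load at off=10: 10 + 4 >  12  ->  rejected with -EACCES
     */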
@@ -685,6 +727,45 @@ static bool is_pointer_value(struct verifier_env *env, int regno)
        }
 }
 
+static int check_ptr_alignment(struct verifier_env *env, struct reg_state *reg,
+                              int off, int size)
+{
+       if (reg->type != PTR_TO_PACKET) {
+               if (off % size != 0) {
+                       verbose("misaligned access off %d size %d\n", off, size);
+                       return -EACCES;
+               } else {
+                       return 0;
+               }
+       }
+
+       switch (env->prog->type) {
+       case BPF_PROG_TYPE_SCHED_CLS:
+       case BPF_PROG_TYPE_SCHED_ACT:
+               break;
+       default:
+               verbose("verifier is misconfigured\n");
+               return -EACCES;
+       }
+
+       if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+               /* misaligned access to packet is ok on x86,arm,arm64 */
+               return 0;
+
+       if (reg->id && size != 1) {
+               verbose("Unknown packet alignment. Only byte-sized access allowed\n");
+               return -EACCES;
+       }
+
+       /* skb->data is NET_IP_ALIGN-ed */
+       if ((NET_IP_ALIGN + reg->off + off) % size != 0) {
+               verbose("misaligned packet access off %d+%d+%d size %d\n",
+                       NET_IP_ALIGN, reg->off, off, size);
+               return -EACCES;
+       }
+       return 0;
+}
+
 /* check whether memory at (regno + off) is accessible for t = (read | write)
  * if t==write, value_regno is a register which value is stored into memory
  * if t==read, value_regno is a register which will receive the value from memory
@@ -696,21 +777,21 @@ static int check_mem_access(struct verifier_env *env, u32 regno, int off,
                            int value_regno)
 {
        struct verifier_state *state = &env->cur_state;
+       struct reg_state *reg = &state->regs[regno];
        int size, err = 0;
 
-       if (state->regs[regno].type == PTR_TO_STACK)
-               off += state->regs[regno].imm;
+       if (reg->type == PTR_TO_STACK)
+               off += reg->imm;
 
        size = bpf_size_to_bytes(bpf_size);
        if (size < 0)
                return size;
 
-       if (off % size != 0) {
-               verbose("misaligned access off %d size %d\n", off, size);
-               return -EACCES;
-       }
+       err = check_ptr_alignment(env, reg, off, size);
+       if (err)
+               return err;
 
-       if (state->regs[regno].type == PTR_TO_MAP_VALUE) {
+       if (reg->type == PTR_TO_MAP_VALUE) {
                if (t == BPF_WRITE && value_regno >= 0 &&
                    is_pointer_value(env, value_regno)) {
                        verbose("R%d leaks addr into map\n", value_regno);
@@ -720,18 +801,25 @@ static int check_mem_access(struct verifier_env *env, u32 regno, int off,
                if (!err && t == BPF_READ && value_regno >= 0)
                        mark_reg_unknown_value(state->regs, value_regno);
 
-       } else if (state->regs[regno].type == PTR_TO_CTX) {
+       } else if (reg->type == PTR_TO_CTX) {
                if (t == BPF_WRITE && value_regno >= 0 &&
                    is_pointer_value(env, value_regno)) {
                        verbose("R%d leaks addr into ctx\n", value_regno);
                        return -EACCES;
                }
                err = check_ctx_access(env, off, size, t);
-               if (!err && t == BPF_READ && value_regno >= 0)
+               if (!err && t == BPF_READ && value_regno >= 0) {
                        mark_reg_unknown_value(state->regs, value_regno);
+                       if (off == offsetof(struct __sk_buff, data) &&
+                           env->allow_ptr_leaks)
+                               /* note that reg.[id|off|range] == 0 */
+                               state->regs[value_regno].type = PTR_TO_PACKET;
+                       else if (off == offsetof(struct __sk_buff, data_end) &&
+                                env->allow_ptr_leaks)
+                               state->regs[value_regno].type = PTR_TO_PACKET_END;
+               }
 
-       } else if (state->regs[regno].type == FRAME_PTR ||
-                  state->regs[regno].type == PTR_TO_STACK) {
+       } else if (reg->type == FRAME_PTR || reg->type == PTR_TO_STACK) {
                if (off >= 0 || off < -MAX_BPF_STACK) {
                        verbose("invalid stack off=%d size=%d\n", off, size);
                        return -EACCES;
@@ -747,11 +835,28 @@ static int check_mem_access(struct verifier_env *env, u32 regno, int off,
                } else {
                        err = check_stack_read(state, off, size, value_regno);
                }
+       } else if (state->regs[regno].type == PTR_TO_PACKET) {
+               if (t == BPF_WRITE) {
+                       verbose("cannot write into packet\n");
+                       return -EACCES;
+               }
+               err = check_packet_access(env, regno, off, size);
+               if (!err && t == BPF_READ && value_regno >= 0)
+                       mark_reg_unknown_value(state->regs, value_regno);
        } else {
                verbose("R%d invalid mem access '%s'\n",
-                       regno, reg_type_str[state->regs[regno].type]);
+                       regno, reg_type_str[reg->type]);
                return -EACCES;
        }
+
+       if (!err && size <= 2 && value_regno >= 0 && env->allow_ptr_leaks &&
+           state->regs[value_regno].type == UNKNOWN_VALUE) {
+               /* A 1 or 2 byte load zero-extends, so determine the number of
+                * zero upper bits. Not doing it for 4 byte loads, since
+                * such values cannot be added to ptr_to_packet anyway.
+                */
+               state->regs[value_regno].imm = 64 - size * 8;
+       }
        return err;
 }
 
@@ -943,27 +1048,52 @@ static int check_func_arg(struct verifier_env *env, u32 regno,
 
 static int check_map_func_compatibility(struct bpf_map *map, int func_id)
 {
-       bool bool_map, bool_func;
-       int i;
-
        if (!map)
                return 0;
 
-       for (i = 0; i < ARRAY_SIZE(func_limit); i++) {
-               bool_map = (map->map_type == func_limit[i].map_type);
-               bool_func = (func_id == func_limit[i].func_id);
-               /* only when map & func pair match it can continue.
-                * don't allow any other map type to be passed into
-                * the special func;
-                */
-               if (bool_func && bool_map != bool_func) {
-                       verbose("cannot pass map_type %d into func %d\n",
-                               map->map_type, func_id);
-                       return -EINVAL;
-               }
+       /* We need a two way check, first is from map perspective ... */
+       switch (map->map_type) {
+       case BPF_MAP_TYPE_PROG_ARRAY:
+               if (func_id != BPF_FUNC_tail_call)
+                       goto error;
+               break;
+       case BPF_MAP_TYPE_PERF_EVENT_ARRAY:
+               if (func_id != BPF_FUNC_perf_event_read &&
+                   func_id != BPF_FUNC_perf_event_output)
+                       goto error;
+               break;
+       case BPF_MAP_TYPE_STACK_TRACE:
+               if (func_id != BPF_FUNC_get_stackid)
+                       goto error;
+               break;
+       default:
+               break;
+       }
+
+       /* ... and second from the function itself. */
+       switch (func_id) {
+       case BPF_FUNC_tail_call:
+               if (map->map_type != BPF_MAP_TYPE_PROG_ARRAY)
+                       goto error;
+               break;
+       case BPF_FUNC_perf_event_read:
+       case BPF_FUNC_perf_event_output:
+               if (map->map_type != BPF_MAP_TYPE_PERF_EVENT_ARRAY)
+                       goto error;
+               break;
+       case BPF_FUNC_get_stackid:
+               if (map->map_type != BPF_MAP_TYPE_STACK_TRACE)
+                       goto error;
+               break;
+       default:
+               break;
        }
 
        return 0;
+error:
+       verbose("cannot pass map_type %d into func %d\n",
+               map->map_type, func_id);
+       return -EINVAL;
 }
 
 static int check_raw_mode(const struct bpf_func_proto *fn)
@@ -984,6 +1114,29 @@ static int check_raw_mode(const struct bpf_func_proto *fn)
        return count > 1 ? -EINVAL : 0;
 }
 
+static void clear_all_pkt_pointers(struct verifier_env *env)
+{
+       struct verifier_state *state = &env->cur_state;
+       struct reg_state *regs = state->regs, *reg;
+       int i;
+
+       for (i = 0; i < MAX_BPF_REG; i++)
+               if (regs[i].type == PTR_TO_PACKET ||
+                   regs[i].type == PTR_TO_PACKET_END)
+                       mark_reg_unknown_value(regs, i);
+
+       for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) {
+               if (state->stack_slot_type[i] != STACK_SPILL)
+                       continue;
+               reg = &state->spilled_regs[i / BPF_REG_SIZE];
+               if (reg->type != PTR_TO_PACKET &&
+                   reg->type != PTR_TO_PACKET_END)
+                       continue;
+               reg->type = UNKNOWN_VALUE;
+               reg->imm = 0;
+       }
+}
+
 static int check_call(struct verifier_env *env, int func_id)
 {
        struct verifier_state *state = &env->cur_state;
@@ -991,6 +1144,7 @@ static int check_call(struct verifier_env *env, int func_id)
        struct reg_state *regs = state->regs;
        struct reg_state *reg;
        struct bpf_call_arg_meta meta;
+       bool changes_data;
        int i, err;
 
        /* find function prototype */
@@ -1013,6 +1167,8 @@ static int check_call(struct verifier_env *env, int func_id)
                return -EINVAL;
        }
 
+       changes_data = bpf_helper_changes_skb_data(fn->func);
+
        memset(&meta, 0, sizeof(meta));
 
        /* We only support one arg being in raw mode at the moment, which
@@ -1083,13 +1239,196 @@ static int check_call(struct verifier_env *env, int func_id)
        if (err)
                return err;
 
+       if (changes_data)
+               clear_all_pkt_pointers(env);
+       return 0;
+}
+
+static int check_packet_ptr_add(struct verifier_env *env, struct bpf_insn *insn)
+{
+       struct reg_state *regs = env->cur_state.regs;
+       struct reg_state *dst_reg = &regs[insn->dst_reg];
+       struct reg_state *src_reg = &regs[insn->src_reg];
+       s32 imm;
+
+       if (BPF_SRC(insn->code) == BPF_K) {
+               /* pkt_ptr += imm */
+               imm = insn->imm;
+
+add_imm:
+               if (imm <= 0) {
+                       verbose("addition of negative constant to packet pointer is not allowed\n");
+                       return -EACCES;
+               }
+               if (imm >= MAX_PACKET_OFF ||
+                   imm + dst_reg->off >= MAX_PACKET_OFF) {
+                       verbose("constant %d is too large to add to packet pointer\n",
+                               imm);
+                       return -EACCES;
+               }
+               /* a constant was added to pkt_ptr.
+                * Remember it while keeping the same 'id'
+                */
+               dst_reg->off += imm;
+       } else {
+               if (src_reg->type == CONST_IMM) {
+                       /* pkt_ptr += reg where reg is known constant */
+                       imm = src_reg->imm;
+                       goto add_imm;
+               }
+               /* disallow pkt_ptr += reg
+                * if reg is not unknown_value with guaranteed zero upper bits
+                * otherwise pkt_ptr may overflow and addition will become
+                * subtraction which is not allowed
+                */
+               if (src_reg->type != UNKNOWN_VALUE) {
+                       verbose("cannot add '%s' to ptr_to_packet\n",
+                               reg_type_str[src_reg->type]);
+                       return -EACCES;
+               }
+               if (src_reg->imm < 48) {
+                       verbose("cannot add integer value with %lld upper zero bits to ptr_to_packet\n",
+                               src_reg->imm);
+                       return -EACCES;
+               }
+               /* dst_reg stays as pkt_ptr type and since some positive
+                * integer value was added to the pointer, increment its 'id'
+                */
+               dst_reg->id++;
+
+               /* something was added to pkt_ptr, set range and off to zero */
+               dst_reg->off = 0;
+               dst_reg->range = 0;
+       }
+       return 0;
+}
+
+static int evaluate_reg_alu(struct verifier_env *env, struct bpf_insn *insn)
+{
+       struct reg_state *regs = env->cur_state.regs;
+       struct reg_state *dst_reg = &regs[insn->dst_reg];
+       u8 opcode = BPF_OP(insn->code);
+       s64 imm_log2;
+
+       /* for type == UNKNOWN_VALUE:
+        * imm > 0 -> number of zero upper bits
+        * imm == 0 -> don't track which is the same as all bits can be non-zero
+        */
+
+       if (BPF_SRC(insn->code) == BPF_X) {
+               struct reg_state *src_reg = &regs[insn->src_reg];
+
+               if (src_reg->type == UNKNOWN_VALUE && src_reg->imm > 0 &&
+                   dst_reg->imm && opcode == BPF_ADD) {
+                       /* dreg += sreg
+                        * where both have zero upper bits. Adding them
+                        * can only result in making one more bit non-zero
+                        * in the larger value.
+                        * Ex. 0xffff (imm=48) + 1 (imm=63) = 0x10000 (imm=47)
+                        *     0xffff (imm=48) + 0xffff = 0x1fffe (imm=47)
+                        */
+                       dst_reg->imm = min(dst_reg->imm, src_reg->imm);
+                       dst_reg->imm--;
+                       return 0;
+               }
+               if (src_reg->type == CONST_IMM && src_reg->imm > 0 &&
+                   dst_reg->imm && opcode == BPF_ADD) {
+                       /* dreg += sreg
+                        * where dreg has zero upper bits and sreg is const.
+                        * Adding them can only result in making one more bit
+                        * non-zero in the larger value.
+                        */
+                       imm_log2 = __ilog2_u64((long long)src_reg->imm);
+                       dst_reg->imm = min(dst_reg->imm, 63 - imm_log2);
+                       dst_reg->imm--;
+                       return 0;
+               }
+               /* all other cases not supported yet, just mark dst_reg */
+               dst_reg->imm = 0;
+               return 0;
+       }
+
+       /* sign extend 32-bit imm into 64-bit to make sure that
+        * negative values occupy bit 63. Note ilog2() would have
+        * been incorrect, since sizeof(insn->imm) == 4
+        */
+       imm_log2 = __ilog2_u64((long long)insn->imm);
+
+       if (dst_reg->imm && opcode == BPF_LSH) {
+               /* reg <<= imm
+                * if reg was a result of 2 byte load, then its imm == 48
+                * which means that upper 48 bits are zero and shifting this reg
+                * left by 4 would mean that upper 44 bits are still zero
+                */
+               dst_reg->imm -= insn->imm;
+       } else if (dst_reg->imm && opcode == BPF_MUL) {
+               /* reg *= imm
+                * if multiplying by 14 subtract 4
+                * This is conservative calculation of upper zero bits.
+                * It's not trying to special case insn->imm == 1 or 0 cases
+                */
+               dst_reg->imm -= imm_log2 + 1;
+       } else if (opcode == BPF_AND) {
+               /* reg &= imm */
+               dst_reg->imm = 63 - imm_log2;
+       } else if (dst_reg->imm && opcode == BPF_ADD) {
+               /* reg += imm */
+               dst_reg->imm = min(dst_reg->imm, 63 - imm_log2);
+               dst_reg->imm--;
+       } else if (opcode == BPF_RSH) {
+               /* reg >>= imm
+                * which means that after right shift, upper bits will be zero
+                * note that verifier already checked that
+                * 0 <= imm < 64 for shift insn
+                */
+               dst_reg->imm += insn->imm;
+               if (unlikely(dst_reg->imm > 64))
+                       /* some dumb code did:
+                        * r2 = *(u32 *)mem;
+                        * r2 >>= 32;
+                        * and all bits are zero now */
+                       dst_reg->imm = 64;
+       } else {
+               /* all other alu ops, means that we don't know what will
+                * happen to the value, mark it with unknown number of zero bits
+                */
+               dst_reg->imm = 0;
+       }
+
+       if (dst_reg->imm < 0) {
+               /* all 64 bits of the register can contain non-zero bits
+                * and such value cannot be added to ptr_to_packet, since it
+                * may overflow, mark it as unknown to avoid further eval
+                */
+               dst_reg->imm = 0;
+       }
+       return 0;
+}
+
+static int evaluate_reg_imm_alu(struct verifier_env *env, struct bpf_insn *insn)
+{
+       struct reg_state *regs = env->cur_state.regs;
+       struct reg_state *dst_reg = &regs[insn->dst_reg];
+       struct reg_state *src_reg = &regs[insn->src_reg];
+       u8 opcode = BPF_OP(insn->code);
+
+       /* dst_reg->type == CONST_IMM here, simulate execution of 'add' insn.
+        * Don't care about overflow or negative values, just add them
+        */
+       if (opcode == BPF_ADD && BPF_SRC(insn->code) == BPF_K)
+               dst_reg->imm += insn->imm;
+       else if (opcode == BPF_ADD && BPF_SRC(insn->code) == BPF_X &&
+                src_reg->type == CONST_IMM)
+               dst_reg->imm += src_reg->imm;
+       else
+               mark_reg_unknown_value(regs, insn->dst_reg);
        return 0;
 }
 
 /* check validity of 32-bit and 64-bit arithmetic operations */
 static int check_alu_op(struct verifier_env *env, struct bpf_insn *insn)
 {
-       struct reg_state *regs = env->cur_state.regs;
+       struct reg_state *regs = env->cur_state.regs, *dst_reg;
        u8 opcode = BPF_OP(insn->code);
        int err;
 
@@ -1178,8 +1517,6 @@ static int check_alu_op(struct verifier_env *env, struct bpf_insn *insn)
 
        } else {        /* all other ALU ops: and, sub, xor, add, ... */
 
-               bool stack_relative = false;
-
                if (BPF_SRC(insn->code) == BPF_X) {
                        if (insn->imm != 0 || insn->off != 0) {
                                verbose("BPF_ALU uses reserved fields\n");
@@ -1217,11 +1554,34 @@ static int check_alu_op(struct verifier_env *env, struct bpf_insn *insn)
                        }
                }
 
+               /* check dest operand */
+               err = check_reg_arg(regs, insn->dst_reg, DST_OP_NO_MARK);
+               if (err)
+                       return err;
+
+               dst_reg = &regs[insn->dst_reg];
+
                /* pattern match 'bpf_add Rx, imm' instruction */
                if (opcode == BPF_ADD && BPF_CLASS(insn->code) == BPF_ALU64 &&
-                   regs[insn->dst_reg].type == FRAME_PTR &&
-                   BPF_SRC(insn->code) == BPF_K) {
-                       stack_relative = true;
+                   dst_reg->type == FRAME_PTR && BPF_SRC(insn->code) == BPF_K) {
+                       dst_reg->type = PTR_TO_STACK;
+                       dst_reg->imm = insn->imm;
+                       return 0;
+               } else if (opcode == BPF_ADD &&
+                          BPF_CLASS(insn->code) == BPF_ALU64 &&
+                          dst_reg->type == PTR_TO_PACKET) {
+                       /* ptr_to_packet += K|X */
+                       return check_packet_ptr_add(env, insn);
+               } else if (BPF_CLASS(insn->code) == BPF_ALU64 &&
+                          dst_reg->type == UNKNOWN_VALUE &&
+                          env->allow_ptr_leaks) {
+                       /* unknown += K|X */
+                       return evaluate_reg_alu(env, insn);
+               } else if (BPF_CLASS(insn->code) == BPF_ALU64 &&
+                          dst_reg->type == CONST_IMM &&
+                          env->allow_ptr_leaks) {
+                       /* reg_imm += K|X */
+                       return evaluate_reg_imm_alu(env, insn);
                } else if (is_pointer_value(env, insn->dst_reg)) {
                        verbose("R%d pointer arithmetic prohibited\n",
                                insn->dst_reg);
@@ -1233,24 +1593,45 @@ static int check_alu_op(struct verifier_env *env, struct bpf_insn *insn)
                        return -EACCES;
                }
 
-               /* check dest operand */
-               err = check_reg_arg(regs, insn->dst_reg, DST_OP);
-               if (err)
-                       return err;
-
-               if (stack_relative) {
-                       regs[insn->dst_reg].type = PTR_TO_STACK;
-                       regs[insn->dst_reg].imm = insn->imm;
-               }
+               /* mark dest operand */
+               mark_reg_unknown_value(regs, insn->dst_reg);
        }
 
        return 0;
 }
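
For orientation, the four fast paths above map onto instruction shapes like these (a sketch; register numbers are arbitrary):

	/* r1 = r10; r1 += -8;   FRAME_PTR + K       -> PTR_TO_STACK, imm = -8
	 * r2 += 14;             PTR_TO_PACKET + K|X -> check_packet_ptr_add()
	 * r3 += r4;             UNKNOWN_VALUE + K|X -> evaluate_reg_alu()
	 * r5 += 1;              CONST_IMM + K|X     -> evaluate_reg_imm_alu()
	 */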
 
+static void find_good_pkt_pointers(struct verifier_env *env,
+                                  struct reg_state *dst_reg)
+{
+       struct verifier_state *state = &env->cur_state;
+       struct reg_state *regs = state->regs, *reg;
+       int i;
+       /* r2 = r3;
+        * r2 += 8
+        * if (r2 > pkt_end) goto somewhere
+        * r2 == dst_reg, pkt_end == src_reg,
+        * r2=pkt(id=n,off=8,r=0)
+        * r3=pkt(id=n,off=0,r=0)
+        * find register r3 and mark its range as r3=pkt(id=n,off=0,r=8)
+        * so that range of bytes [r3, r3 + 8) is safe to access
+        */
+       for (i = 0; i < MAX_BPF_REG; i++)
+               if (regs[i].type == PTR_TO_PACKET && regs[i].id == dst_reg->id)
+                       regs[i].range = dst_reg->off;
+
+       for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) {
+               if (state->stack_slot_type[i] != STACK_SPILL)
+                       continue;
+               reg = &state->spilled_regs[i / BPF_REG_SIZE];
+               if (reg->type == PTR_TO_PACKET && reg->id == dst_reg->id)
+                       reg->range = dst_reg->off;
+       }
+}
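
As a concrete illustration, the pattern this function recognizes is the bounds check a restricted-C program emits when doing direct packet access (a minimal sketch; assumes a program type whose context exposes data/data_end, e.g. sched_cls):

	static int handle(struct __sk_buff *skb)
	{
		void *data = (void *)(long)skb->data;
		void *data_end = (void *)(long)skb->data_end;
		struct ethhdr *eth = data;            /* r2 = r3 */

		if ((void *)(eth + 1) > data_end)     /* r2 += 14; if (r2 > pkt_end) */
			return 0;
		/* bytes [data, data + sizeof(*eth)) are now provably safe */
		return eth->h_proto;
	}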
+
 static int check_cond_jmp_op(struct verifier_env *env,
                             struct bpf_insn *insn, int *insn_idx)
 {
-       struct reg_state *regs = env->cur_state.regs;
+       struct reg_state *regs = env->cur_state.regs, *dst_reg;
        struct verifier_state *other_branch;
        u8 opcode = BPF_OP(insn->code);
        int err;
@@ -1288,11 +1669,12 @@ static int check_cond_jmp_op(struct verifier_env *env,
        if (err)
                return err;
 
+       dst_reg = &regs[insn->dst_reg];
+
        /* detect if R == 0 where R was initialized to zero earlier */
        if (BPF_SRC(insn->code) == BPF_K &&
            (opcode == BPF_JEQ || opcode == BPF_JNE) &&
-           regs[insn->dst_reg].type == CONST_IMM &&
-           regs[insn->dst_reg].imm == insn->imm) {
+           dst_reg->type == CONST_IMM && dst_reg->imm == insn->imm) {
                if (opcode == BPF_JEQ) {
                        /* if (imm == imm) goto pc+off;
                         * only follow the goto, ignore fall-through
@@ -1314,44 +1696,30 @@ static int check_cond_jmp_op(struct verifier_env *env,
 
        /* detect if R == 0 where R is returned value from bpf_map_lookup_elem() */
        if (BPF_SRC(insn->code) == BPF_K &&
-           insn->imm == 0 && (opcode == BPF_JEQ ||
-                              opcode == BPF_JNE) &&
-           regs[insn->dst_reg].type == PTR_TO_MAP_VALUE_OR_NULL) {
+           insn->imm == 0 && (opcode == BPF_JEQ || opcode == BPF_JNE) &&
+           dst_reg->type == PTR_TO_MAP_VALUE_OR_NULL) {
                if (opcode == BPF_JEQ) {
                        /* next fallthrough insn can access memory via
                         * this register
                         */
                        regs[insn->dst_reg].type = PTR_TO_MAP_VALUE;
                        /* branch target cannot access it, since reg == 0 */
-                       other_branch->regs[insn->dst_reg].type = CONST_IMM;
-                       other_branch->regs[insn->dst_reg].imm = 0;
+                       mark_reg_unknown_value(other_branch->regs,
+                                              insn->dst_reg);
                } else {
                        other_branch->regs[insn->dst_reg].type = PTR_TO_MAP_VALUE;
-                       regs[insn->dst_reg].type = CONST_IMM;
-                       regs[insn->dst_reg].imm = 0;
+                       mark_reg_unknown_value(regs, insn->dst_reg);
                }
+       } else if (BPF_SRC(insn->code) == BPF_X && opcode == BPF_JGT &&
+                  dst_reg->type == PTR_TO_PACKET &&
+                  regs[insn->src_reg].type == PTR_TO_PACKET_END) {
+               find_good_pkt_pointers(env, dst_reg);
        } else if (is_pointer_value(env, insn->dst_reg)) {
                verbose("R%d pointer comparison prohibited\n", insn->dst_reg);
                return -EACCES;
-       } else if (BPF_SRC(insn->code) == BPF_K &&
-                  (opcode == BPF_JEQ || opcode == BPF_JNE)) {
-
-               if (opcode == BPF_JEQ) {
-                       /* detect if (R == imm) goto
-                        * and in the target state recognize that R = imm
-                        */
-                       other_branch->regs[insn->dst_reg].type = CONST_IMM;
-                       other_branch->regs[insn->dst_reg].imm = insn->imm;
-               } else {
-                       /* detect if (R != imm) goto
-                        * and in the fall-through state recognize that R = imm
-                        */
-                       regs[insn->dst_reg].type = CONST_IMM;
-                       regs[insn->dst_reg].imm = insn->imm;
-               }
        }
        if (log_level)
-               print_verifier_state(env);
+               print_verifier_state(&env->cur_state);
        return 0;
 }
 
@@ -1429,14 +1797,14 @@ static int check_ld_abs(struct verifier_env *env, struct bpf_insn *insn)
        int i, err;
 
        if (!may_access_skb(env->prog->type)) {
-               verbose("BPF_LD_ABS|IND instructions not allowed for this program type\n");
+               verbose("BPF_LD_[ABS|IND] instructions not allowed for this program type\n");
                return -EINVAL;
        }
 
        if (insn->dst_reg != BPF_REG_0 || insn->off != 0 ||
            BPF_SIZE(insn->code) == BPF_DW ||
            (mode == BPF_ABS && insn->src_reg != BPF_REG_0)) {
-               verbose("BPF_LD_ABS uses reserved fields\n");
+               verbose("BPF_LD_[ABS|IND] uses reserved fields\n");
                return -EINVAL;
        }
 
@@ -1669,6 +2037,58 @@ err_free:
        return ret;
 }
 
+/* the following conditions reduce the number of explored insns
+ * from ~140k to ~80k for ultra large programs that use a lot of ptr_to_packet
+ */
+static bool compare_ptrs_to_packet(struct reg_state *old, struct reg_state *cur)
+{
+       if (old->id != cur->id)
+               return false;
+
+       /* old ptr_to_packet is more conservative, since it allows smaller
+        * range. Ex:
+        * old(off=0,r=10) is equal to cur(off=0,r=20), because
+        * old(off=0,r=10) means that with range=10 the verifier proceeded
+        * further and found no issues with the program. Now we're in the same
+        * spot with cur(off=0,r=20), so we're safe too, since anything further
+        * will only access at most 10 bytes after this pointer.
+        */
+       if (old->off == cur->off && old->range < cur->range)
+               return true;
+
+       /* old(off=20,r=10) is equal to cur(off=22,r=22 or 5 or 0)
+        * since neither can be used for packet access, and the safe (old)
+        * pointer has the smaller off, which could be used for a further
+        * 'if (ptr > data_end)' check.
+        * Ex:
+        * old(off=20,r=10) and cur(off=22,r=22) and cur(off=22,r=0) mean
+        * that we cannot access the packet.
+        * The safe range is:
+        * [ptr, ptr + range - off)
+        * so whenever off >= range, there are no safe bytes from this pointer.
+        * The check old->off <= cur->off means that the older code went
+        * with a smaller offset, and that offset was later used to figure
+        * out the safe range after the 'if (ptr > data_end)' check.
+        * Say, 'old' state was explored like:
+        * ... R3(off=0, r=0)
+        * R4 = R3 + 20
+        * ... now R4(off=20,r=0)  <-- here
+        * if (R4 > data_end)
+        * ... R4(off=20,r=20), R3(off=0,r=20) and R3 can be used to access the packet.
+        * ... the code further went all the way to bpf_exit.
+        * Now the 'cur' state at the mark 'here' has R4(off=30,r=0).
+        * old_R4(off=20,r=0) is equal to cur_R4(off=30,r=0), since if the
+        * verifier goes further, such a cur_R4 will give a larger safe packet
+        * range after 'if (R4 > data_end)', and all further insns were already
+        * good with r=20, so they will be good with r=30 and we can prune the
+        * search.
+        */
+       if (old->off <= cur->off &&
+           old->off >= old->range && cur->off >= cur->range)
+               return true;
+
+       return false;
+}
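
To see the two rules in action, a standalone userspace sketch run on the examples from the comment (field names mirror the id/off/range members used above):

	struct pkt_reg { int id, off, range; };

	static int ptrs_equal(struct pkt_reg o, struct pkt_reg c)
	{
		if (o.id != c.id)
			return 0;
		if (o.off == c.off && o.range < c.range)
			return 1;	/* rule 1 */
		return o.off <= c.off && o.off >= o.range && c.off >= c.range;
	}

	/* ptrs_equal({1, 0, 10},  {1, 0, 20})  -> 1 by rule 1
	 * ptrs_equal({1, 20, 10}, {1, 22, 22}) -> 1 by rule 2
	 * ptrs_equal({1, 0, 10},  {1, 0, 5})   -> 0 (cur allows less)
	 */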
+
 /* compare two verifier states
  *
  * all states stored in state_list are known to be valid, since
@@ -1697,17 +2117,25 @@ err_free:
  */
 static bool states_equal(struct verifier_state *old, struct verifier_state *cur)
 {
+       struct reg_state *rold, *rcur;
        int i;
 
        for (i = 0; i < MAX_BPF_REG; i++) {
-               if (memcmp(&old->regs[i], &cur->regs[i],
-                          sizeof(old->regs[0])) != 0) {
-                       if (old->regs[i].type == NOT_INIT ||
-                           (old->regs[i].type == UNKNOWN_VALUE &&
-                            cur->regs[i].type != NOT_INIT))
-                               continue;
-                       return false;
-               }
+               rold = &old->regs[i];
+               rcur = &cur->regs[i];
+
+               if (memcmp(rold, rcur, sizeof(*rold)) == 0)
+                       continue;
+
+               if (rold->type == NOT_INIT ||
+                   (rold->type == UNKNOWN_VALUE && rcur->type != NOT_INIT))
+                       continue;
+
+               if (rold->type == PTR_TO_PACKET && rcur->type == PTR_TO_PACKET &&
+                   compare_ptrs_to_packet(rold, rcur))
+                       continue;
+
+               return false;
        }
 
        for (i = 0; i < MAX_BPF_STACK; i++) {
@@ -1829,7 +2257,7 @@ static int do_check(struct verifier_env *env)
 
                if (log_level && do_print_state) {
                        verbose("\nfrom %d to %d:", prev_insn_idx, insn_idx);
-                       print_verifier_state(env);
+                       print_verifier_state(&env->cur_state);
                        do_print_state = false;
                }
 
@@ -2041,6 +2469,7 @@ process_bpf_exit:
                insn_idx++;
        }
 
+       verbose("processed %d insns\n", insn_processed);
        return 0;
 }
 
@@ -2092,7 +2521,6 @@ static int replace_map_fd_with_map_ptr(struct verifier_env *env)
                        if (IS_ERR(map)) {
                                verbose("fd %d is not pointing to valid bpf_map\n",
                                        insn->imm);
-                               fdput(f);
                                return PTR_ERR(map);
                        }
 
@@ -2112,15 +2540,18 @@ static int replace_map_fd_with_map_ptr(struct verifier_env *env)
                                return -E2BIG;
                        }
 
-                       /* remember this map */
-                       env->used_maps[env->used_map_cnt++] = map;
-
                        /* hold the map. If the program is rejected by verifier,
                         * the map will be released by release_maps() or it
                         * will be used by the valid program until it's unloaded
                         * and all maps are released in free_bpf_prog_info()
                         */
-                       bpf_map_inc(map, false);
+                       map = bpf_map_inc(map, false);
+                       if (IS_ERR(map)) {
+                               fdput(f);
+                               return PTR_ERR(map);
+                       }
+                       env->used_maps[env->used_map_cnt++] = map;
+
                        fdput(f);
 next_insn:
                        insn++;
index 671dc05..909a7d3 100644 (file)
@@ -2825,9 +2825,10 @@ static ssize_t __cgroup_procs_write(struct kernfs_open_file *of, char *buf,
                                    size_t nbytes, loff_t off, bool threadgroup)
 {
        struct task_struct *tsk;
+       struct cgroup_subsys *ss;
        struct cgroup *cgrp;
        pid_t pid;
-       int ret;
+       int ssid, ret;
 
        if (kstrtoint(strstrip(buf), 0, &pid) || pid < 0)
                return -EINVAL;
@@ -2875,8 +2876,10 @@ out_unlock_rcu:
        rcu_read_unlock();
 out_unlock_threadgroup:
        percpu_up_write(&cgroup_threadgroup_rwsem);
+       for_each_subsys(ss, ssid)
+               if (ss->post_attach)
+                       ss->post_attach();
        cgroup_kn_unlock(of->kn);
-       cpuset_post_attach_flush();
        return ret ?: nbytes;
 }
 
index 6ea42e8..3e3f6e4 100644 (file)
@@ -36,6 +36,7 @@
  * @target:    The target state
  * @thread:    Pointer to the hotplug thread
  * @should_run:        Thread should execute
+ * @rollback:  Perform a rollback
  * @cb_stat:   The state for a single callback (install/uninstall)
  * @cb:                Single callback function (install/uninstall)
  * @result:    Result of the operation
@@ -47,6 +48,7 @@ struct cpuhp_cpu_state {
 #ifdef CONFIG_SMP
        struct task_struct      *thread;
        bool                    should_run;
+       bool                    rollback;
        enum cpuhp_state        cb_state;
        int                     (*cb)(unsigned int cpu);
        int                     result;
@@ -301,6 +303,11 @@ static int cpu_notify(unsigned long val, unsigned int cpu)
        return __cpu_notify(val, cpu, -1, NULL);
 }
 
+static void cpu_notify_nofail(unsigned long val, unsigned int cpu)
+{
+       BUG_ON(cpu_notify(val, cpu));
+}
+
 /* Notifier wrappers for transitioning to state machine */
 static int notify_prepare(unsigned int cpu)
 {
@@ -477,6 +484,16 @@ static void cpuhp_thread_fun(unsigned int cpu)
                } else {
                        ret = cpuhp_invoke_callback(cpu, st->cb_state, st->cb);
                }
+       } else if (st->rollback) {
+               BUG_ON(st->state < CPUHP_AP_ONLINE_IDLE);
+
+               undo_cpu_down(cpu, st, cpuhp_ap_states);
+               /*
+                * This is a momentary workaround to keep the notifier users
+                * happy. It will go away once we get rid of the notifiers.
+                */
+               cpu_notify_nofail(CPU_DOWN_FAILED, cpu);
+               st->rollback = false;
        } else {
                /* Cannot happen .... */
                BUG_ON(st->state < CPUHP_AP_ONLINE_IDLE);
@@ -636,11 +653,6 @@ static inline void check_for_tasks(int dead_cpu)
        read_unlock(&tasklist_lock);
 }
 
-static void cpu_notify_nofail(unsigned long val, unsigned int cpu)
-{
-       BUG_ON(cpu_notify(val, cpu));
-}
-
 static int notify_down_prepare(unsigned int cpu)
 {
        int err, nr_calls = 0;
@@ -721,9 +733,10 @@ static int takedown_cpu(unsigned int cpu)
         */
        err = stop_machine(take_cpu_down, NULL, cpumask_of(cpu));
        if (err) {
-               /* CPU didn't die: tell everyone.  Can't complain. */
-               cpu_notify_nofail(CPU_DOWN_FAILED, cpu);
+               /* CPU refused to die */
                irq_unlock_sparse();
+               /* Unpark the hotplug thread so we can rollback there */
+               kthread_unpark(per_cpu_ptr(&cpuhp_state, cpu)->thread);
                return err;
        }
        BUG_ON(cpu_online(cpu));
@@ -832,6 +845,11 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
         * to do the further cleanups.
         */
        ret = cpuhp_down_callbacks(cpu, st, cpuhp_bp_states, target);
+       if (ret && st->state > CPUHP_TEARDOWN_CPU && st->state < prev_state) {
+               st->target = prev_state;
+               st->rollback = true;
+               cpuhp_kick_ap_work(cpu);
+       }
 
        hasdied = prev_state != st->state && st->state == CPUHP_OFFLINE;
 out:
@@ -1249,6 +1267,7 @@ static struct cpuhp_step cpuhp_ap_states[] = {
                .name                   = "notify:online",
                .startup                = notify_online,
                .teardown               = notify_down_prepare,
+               .skip_onerr             = true,
        },
 #endif
        /*
index 00ab5c2..1902956 100644 (file)
@@ -58,7 +58,6 @@
 #include <asm/uaccess.h>
 #include <linux/atomic.h>
 #include <linux/mutex.h>
-#include <linux/workqueue.h>
 #include <linux/cgroup.h>
 #include <linux/wait.h>
 
@@ -1016,7 +1015,7 @@ static void cpuset_migrate_mm(struct mm_struct *mm, const nodemask_t *from,
        }
 }
 
-void cpuset_post_attach_flush(void)
+static void cpuset_post_attach(void)
 {
        flush_workqueue(cpuset_migrate_mm_wq);
 }
@@ -2087,6 +2086,7 @@ struct cgroup_subsys cpuset_cgrp_subsys = {
        .can_attach     = cpuset_can_attach,
        .cancel_attach  = cpuset_cancel_attach,
        .attach         = cpuset_attach,
+       .post_attach    = cpuset_post_attach,
        .bind           = cpuset_bind,
        .legacy_cftypes = files,
        .early_init     = true,
index 9eb23dc..0bdc6e7 100644 (file)
@@ -412,7 +412,8 @@ int perf_cpu_time_max_percent_handler(struct ctl_table *table, int write,
        if (ret || !write)
                return ret;
 
-       if (sysctl_perf_cpu_time_max_percent == 100) {
+       if (sysctl_perf_cpu_time_max_percent == 100 ||
+           sysctl_perf_cpu_time_max_percent == 0) {
                printk(KERN_WARNING
                       "perf: Dynamic interrupt throttling disabled, can hang your system!\n");
                WRITE_ONCE(perf_sample_allowed_ns, 0);
@@ -1105,6 +1106,7 @@ static void put_ctx(struct perf_event_context *ctx)
  * function.
  *
  * Lock order:
+ *    cred_guard_mutex
  *     task_struct::perf_event_mutex
  *       perf_event_context::mutex
  *         perf_event::child_mutex;
@@ -3420,7 +3422,6 @@ static struct task_struct *
 find_lively_task_by_vpid(pid_t vpid)
 {
        struct task_struct *task;
-       int err;
 
        rcu_read_lock();
        if (!vpid)
@@ -3434,16 +3435,7 @@ find_lively_task_by_vpid(pid_t vpid)
        if (!task)
                return ERR_PTR(-ESRCH);
 
-       /* Reuse ptrace permission checks for now. */
-       err = -EACCES;
-       if (!ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS))
-               goto errout;
-
        return task;
-errout:
-       put_task_struct(task);
-       return ERR_PTR(err);
-
 }
 
 /*
@@ -8446,6 +8438,24 @@ SYSCALL_DEFINE5(perf_event_open,
 
        get_online_cpus();
 
+       if (task) {
+               err = mutex_lock_interruptible(&task->signal->cred_guard_mutex);
+               if (err)
+                       goto err_cpus;
+
+               /*
+                * Reuse ptrace permission checks for now.
+                *
+                * We must hold cred_guard_mutex across this and any potential
+                * perf_install_in_context() call for this new event to
+                * serialize against exec() altering our credentials (and the
+                * perf_event_exit_task() that it could imply).
+                */
+               err = -EACCES;
+               if (!ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS))
+                       goto err_cred;
+       }
+
        if (flags & PERF_FLAG_PID_CGROUP)
                cgroup_fd = pid;
 
@@ -8453,7 +8463,7 @@ SYSCALL_DEFINE5(perf_event_open,
                                 NULL, NULL, cgroup_fd);
        if (IS_ERR(event)) {
                err = PTR_ERR(event);
-               goto err_cpus;
+               goto err_cred;
        }
 
        if (is_sampling_event(event)) {
@@ -8512,11 +8522,6 @@ SYSCALL_DEFINE5(perf_event_open,
                goto err_context;
        }
 
-       if (task) {
-               put_task_struct(task);
-               task = NULL;
-       }
-
        /*
         * Look up the group leader (we will attach this event to it):
         */
@@ -8614,6 +8619,11 @@ SYSCALL_DEFINE5(perf_event_open,
 
        WARN_ON_ONCE(ctx->parent_ctx);
 
+       /*
+        * This is the point of no return; we cannot fail hereafter. This is
+        * where we start modifying current state.
+        */
+
        if (move_group) {
                /*
                 * See perf_event_ctx_lock() for comments on the details
@@ -8685,6 +8695,11 @@ SYSCALL_DEFINE5(perf_event_open,
                mutex_unlock(&gctx->mutex);
        mutex_unlock(&ctx->mutex);
 
+       if (task) {
+               mutex_unlock(&task->signal->cred_guard_mutex);
+               put_task_struct(task);
+       }
+
        put_online_cpus();
 
        mutex_lock(&current->perf_event_mutex);
@@ -8717,6 +8732,9 @@ err_alloc:
         */
        if (!event_file)
                free_event(event);
+err_cred:
+       if (task)
+               mutex_unlock(&task->signal->cred_guard_mutex);
 err_cpus:
        put_online_cpus();
 err_task:
@@ -9001,6 +9019,9 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
 
 /*
  * When a child task exits, feed back event values to parent events.
+ *
+ * Can be called with cred_guard_mutex held, e.g. when invoked from
+ * install_exec_creds().
  */
 void perf_event_exit_task(struct task_struct *child)
 {
index a5d2e74..c20f06f 100644 (file)
@@ -1295,10 +1295,20 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this,
        if (unlikely(should_fail_futex(true)))
                ret = -EFAULT;
 
-       if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))
+       if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) {
                ret = -EFAULT;
-       else if (curval != uval)
-               ret = -EINVAL;
+       } else if (curval != uval) {
+               /*
+                * If an unconditional UNLOCK_PI operation (user space did not
+                * try the TID->0 transition) raced with a waiter setting the
+                * FUTEX_WAITERS flag between get_user() and locking the hash
+                * bucket lock, retry the operation.
+                */
+               if ((FUTEX_TID_MASK & curval) == uval)
+                       ret = -EAGAIN;
+               else
+                       ret = -EINVAL;
+       }
        if (ret) {
                raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
                return ret;
@@ -1525,8 +1535,8 @@ void requeue_futex(struct futex_q *q, struct futex_hash_bucket *hb1,
        if (likely(&hb1->chain != &hb2->chain)) {
                plist_del(&q->list, &hb1->chain);
                hb_waiters_dec(hb1);
-               plist_add(&q->list, &hb2->chain);
                hb_waiters_inc(hb2);
+               plist_add(&q->list, &hb2->chain);
                q->lock_ptr = &hb2->lock;
        }
        get_futex_key_refs(key2);
@@ -2622,6 +2632,15 @@ retry:
                 */
                if (ret == -EFAULT)
                        goto pi_faulted;
+               /*
+                * An unconditional UNLOCK_PI op raced against a waiter
+                * setting the FUTEX_WAITERS bit. Try again.
+                */
+               if (ret == -EAGAIN) {
+                       spin_unlock(&hb->lock);
+                       put_futex_key(&key);
+                       goto retry;
+               }
                /*
                 * wake_futex_pi has detected invalid state. Tell user
                 * space.
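
Concretely, the race the new -EAGAIN path handles looks like this (TID value hypothetical):

	u32 uval   = 1234;                 /* owner TID read via get_user()   */
	u32 curval = FUTEX_WAITERS | 1234; /* a waiter's cmpxchg won the race */

	/* curval != uval, so this used to return -EINVAL; but because
	 * (curval & FUTEX_TID_MASK) == uval the owner is unchanged, and the
	 * unlock can simply be retried, hence -EAGAIN.
	 */
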
index c37f34b..14777af 100644 (file)
@@ -94,6 +94,7 @@ unsigned int irq_reserve_ipi(struct irq_domain *domain,
                data = irq_get_irq_data(virq + i);
                cpumask_copy(data->common->affinity, dest);
                data->common->ipi_offset = offset;
+               irq_set_status_flags(virq + i, IRQ_NO_BALANCING);
        }
        return virq;
 
index 3efbee0..a02f2dd 100644 (file)
@@ -1,5 +1,6 @@
 #define pr_fmt(fmt) "kcov: " fmt
 
+#define DISABLE_BRANCH_PROFILING
 #include <linux/compiler.h>
 #include <linux/types.h>
 #include <linux/file.h>
@@ -43,7 +44,7 @@ struct kcov {
  * Entry point from instrumented code.
  * This is called once per basic-block/edge.
  */
-void __sanitizer_cov_trace_pc(void)
+void notrace __sanitizer_cov_trace_pc(void)
 {
        struct task_struct *t;
        enum kcov_mode mode;
index 8d34308..1391d3e 100644 (file)
@@ -1415,6 +1415,9 @@ static int __init crash_save_vmcoreinfo_init(void)
        VMCOREINFO_OFFSET(page, lru);
        VMCOREINFO_OFFSET(page, _mapcount);
        VMCOREINFO_OFFSET(page, private);
+       VMCOREINFO_OFFSET(page, compound_dtor);
+       VMCOREINFO_OFFSET(page, compound_order);
+       VMCOREINFO_OFFSET(page, compound_head);
        VMCOREINFO_OFFSET(pglist_data, node_zones);
        VMCOREINFO_OFFSET(pglist_data, nr_zones);
 #ifdef CONFIG_FLAT_NODE_MEM_MAP
@@ -1447,8 +1450,8 @@ static int __init crash_save_vmcoreinfo_init(void)
 #ifdef CONFIG_X86
        VMCOREINFO_NUMBER(KERNEL_IMAGE_SIZE);
 #endif
-#ifdef CONFIG_HUGETLBFS
-       VMCOREINFO_SYMBOL(free_huge_page);
+#ifdef CONFIG_HUGETLB_PAGE
+       VMCOREINFO_NUMBER(HUGETLB_PAGE_DTOR);
 #endif
 
        arch_crash_save_vmcoreinfo();
index ed94109..78c1c0e 100644 (file)
@@ -2176,15 +2176,37 @@ cache_hit:
        chain->irq_context = hlock->irq_context;
        i = get_first_held_lock(curr, hlock);
        chain->depth = curr->lockdep_depth + 1 - i;
+
+       BUILD_BUG_ON((1UL << 24) <= ARRAY_SIZE(chain_hlocks));
+       BUILD_BUG_ON((1UL << 6)  <= ARRAY_SIZE(curr->held_locks));
+       BUILD_BUG_ON((1UL << 8*sizeof(chain_hlocks[0])) <= ARRAY_SIZE(lock_classes));
+
        if (likely(nr_chain_hlocks + chain->depth <= MAX_LOCKDEP_CHAIN_HLOCKS)) {
                chain->base = nr_chain_hlocks;
-               nr_chain_hlocks += chain->depth;
                for (j = 0; j < chain->depth - 1; j++, i++) {
                        int lock_id = curr->held_locks[i].class_idx - 1;
                        chain_hlocks[chain->base + j] = lock_id;
                }
                chain_hlocks[chain->base + j] = class - lock_classes;
        }
+
+       if (nr_chain_hlocks < MAX_LOCKDEP_CHAIN_HLOCKS)
+               nr_chain_hlocks += chain->depth;
+
+#ifdef CONFIG_DEBUG_LOCKDEP
+       /*
+        * Important for check_no_collision().
+        */
+       if (unlikely(nr_chain_hlocks > MAX_LOCKDEP_CHAIN_HLOCKS)) {
+               if (debug_locks_off_graph_unlock())
+                       return 0;
+
+               print_lockdep_off("BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!");
+               dump_stack();
+               return 0;
+       }
+#endif
+
        hlist_add_head_rcu(&chain->entry, hash_head);
        debug_atomic_inc(chain_lookup_misses);
        inc_chains();
@@ -2932,6 +2954,11 @@ static int mark_irqflags(struct task_struct *curr, struct held_lock *hlock)
        return 1;
 }
 
+static inline unsigned int task_irq_context(struct task_struct *task)
+{
+       return 2 * !!task->hardirq_context + !!task->softirq_context;
+}
+
 static int separate_irq_context(struct task_struct *curr,
                struct held_lock *hlock)
 {
@@ -2940,8 +2967,6 @@ static int separate_irq_context(struct task_struct *curr,
        /*
         * Keep track of points where we cross into an interrupt context:
         */
-       hlock->irq_context = 2*(curr->hardirq_context ? 1 : 0) +
-                               curr->softirq_context;
        if (depth) {
                struct held_lock *prev_hlock;
 
@@ -2973,6 +2998,11 @@ static inline int mark_irqflags(struct task_struct *curr,
        return 1;
 }
 
+static inline unsigned int task_irq_context(struct task_struct *task)
+{
+       return 0;
+}
+
 static inline int separate_irq_context(struct task_struct *curr,
                struct held_lock *hlock)
 {
@@ -3241,6 +3271,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
        hlock->acquire_ip = ip;
        hlock->instance = lock;
        hlock->nest_lock = nest_lock;
+       hlock->irq_context = task_irq_context(curr);
        hlock->trylock = trylock;
        hlock->read = read;
        hlock->check = check;
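
The new task_irq_context() helper packs the two context flags into a two-bit value; the resulting encoding is:

	/* hardirq_context  softirq_context  ->  task_irq_context()
	 *        0                0         ->   0  (process context)
	 *        0                1         ->   1  (softirq)
	 *        1                0         ->   2  (hardirq)
	 *        1                1         ->   3  (hardirq over softirq)
	 */
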
index dbb61a3..a0f61ef 100644 (file)
@@ -141,6 +141,8 @@ static int lc_show(struct seq_file *m, void *v)
        int i;
 
        if (v == SEQ_START_TOKEN) {
+               if (nr_chain_hlocks > MAX_LOCKDEP_CHAIN_HLOCKS)
+                       seq_printf(m, "(buggered) ");
                seq_printf(m, "all lock chains:\n");
                return 0;
        }
index eb2a2c9..d734b75 100644 (file)
@@ -136,10 +136,12 @@ static ssize_t qstat_read(struct file *file, char __user *user_buf,
        }
 
        if (counter == qstat_pv_hash_hops) {
-               u64 frac;
+               u64 frac = 0;
 
-               frac = 100ULL * do_div(stat, kicks);
-               frac = DIV_ROUND_CLOSEST_ULL(frac, kicks);
+               if (kicks) {
+                       frac = 100ULL * do_div(stat, kicks);
+                       frac = DIV_ROUND_CLOSEST_ULL(frac, kicks);
+               }
 
                /*
                 * Return an X.XX decimal number
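
With the zero-kicks guard in place, the X.XX computation behaves as before for nonzero counts; a userspace sketch of the arithmetic (do_div() leaves the quotient in stat and returns the remainder):

	unsigned long long stat = 2573, kicks = 1000;
	unsigned long long rem  = stat % kicks;  /* what do_div() returns  */

	stat /= kicks;                           /* do_div()'s side effect */
	/* DIV_ROUND_CLOSEST_ULL(100 * rem, kicks) */
	unsigned long long frac = (100 * rem + kicks / 2) / kicks;
	printf("%llu.%02llu\n", stat, frac);     /* prints "2.57" */
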
index 2232ae3..3bfdff0 100644 (file)
@@ -666,6 +666,35 @@ static void set_work_pool_and_clear_pending(struct work_struct *work,
         */
        smp_wmb();
        set_work_data(work, (unsigned long)pool_id << WORK_OFFQ_POOL_SHIFT, 0);
+       /*
+        * The following mb guarantees that previous clear of a PENDING bit
+        * will not be reordered with any speculative LOADS or STORES from
+        * work->current_func, which is executed afterwards.  This possible
+        * reordering can lead to a missed execution on an attempt to queue
+        * the same @work.  E.g. consider this case:
+        *
+        *   CPU#0                         CPU#1
+        *   ----------------------------  --------------------------------
+        *
+        * 1  STORE event_indicated
+        * 2  queue_work_on() {
+        * 3    test_and_set_bit(PENDING)
+        * 4 }                             set_..._and_clear_pending() {
+        * 5                                 set_work_data() # clear bit
+        * 6                                 smp_mb()
+        * 7                               work->current_func() {
+        * 8                                  LOAD event_indicated
+        *                                 }
+        *
+        * Without an explicit full barrier, the speculative LOAD on line 8
+        * can be executed before CPU#0 does the STORE on line 1.  If that
+        * happens, CPU#0 observes that the PENDING bit is still set, so a
+        * new execution of @work is not queued, in the hope that CPU#1 will
+        * eventually finish the queued @work.  Meanwhile CPU#1 does not see
+        * that event_indicated is set, because the speculative LOAD was
+        * executed before the actual STORE.
+       smp_mb();
 }
 
 static void clear_work_data(struct work_struct *work)
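
The usage pattern the new barrier makes safe, as a sketch (event_indicated, my_work and handle_event() are hypothetical):

	static int event_indicated;
	static void my_work_fn(struct work_struct *work);
	static DECLARE_WORK(my_work, my_work_fn);

	static void my_work_fn(struct work_struct *work)
	{
		/* must observe stores made before the queue_work() that queued us */
		if (READ_ONCE(event_indicated))
			handle_event();          /* hypothetical */
	}

	/* producer side */
	WRITE_ONCE(event_indicated, 1);          /* line 1 in the table above */
	queue_work(system_wq, &my_work);         /* lines 2-4                 */
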
index 654c9d8..9e0b031 100644 (file)
@@ -210,10 +210,6 @@ depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
                goto fast_exit;
 
        hash = hash_stack(trace->entries, trace->nr_entries);
-       /* Bad luck, we won't store this stack. */
-       if (hash == 0)
-               goto exit;
-
        bucket = &stack_table[hash & STACK_HASH_MASK];
 
        /*
index 86f9f8b..df67b53 100644 (file)
@@ -232,7 +232,7 @@ retry:
        return READ_ONCE(huge_zero_page);
 }
 
-static void put_huge_zero_page(void)
+void put_huge_zero_page(void)
 {
        /*
         * Counter should never go to zero here. Only shrinker can put
@@ -1684,12 +1684,12 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
        if (vma_is_dax(vma)) {
                spin_unlock(ptl);
                if (is_huge_zero_pmd(orig_pmd))
-                       put_huge_zero_page();
+                       tlb_remove_page(tlb, pmd_page(orig_pmd));
        } else if (is_huge_zero_pmd(orig_pmd)) {
                pte_free(tlb->mm, pgtable_trans_huge_withdraw(tlb->mm, pmd));
                atomic_long_dec(&tlb->mm->nr_ptes);
                spin_unlock(ptl);
-               put_huge_zero_page();
+               tlb_remove_page(tlb, pmd_page(orig_pmd));
        } else {
                struct page *page = pmd_page(orig_pmd);
                page_remove_rmap(page, true);
@@ -1960,10 +1960,9 @@ int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
                 * page fault if needed.
                 */
                return 0;
-       if (vma->vm_ops)
+       if (vma->vm_ops || (vm_flags & VM_NO_THP))
                /* khugepaged not yet working on file or special mappings */
                return 0;
-       VM_BUG_ON_VMA(vm_flags & VM_NO_THP, vma);
        hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
        hend = vma->vm_end & HPAGE_PMD_MASK;
        if (hstart < hend)
@@ -2352,8 +2351,7 @@ static bool hugepage_vma_check(struct vm_area_struct *vma)
                return false;
        if (is_vma_temporary_stack(vma))
                return false;
-       VM_BUG_ON_VMA(vma->vm_flags & VM_NO_THP, vma);
-       return true;
+       return !(vma->vm_flags & VM_NO_THP);
 }
 
 static void collapse_huge_page(struct mm_struct *mm,
index 36db05f..fe787f5 100644 (file)
@@ -207,6 +207,7 @@ static void mem_cgroup_oom_notify(struct mem_cgroup *memcg);
 /* "mc" and its members are protected by cgroup_mutex */
 static struct move_charge_struct {
        spinlock_t        lock; /* for from, to */
+       struct mm_struct  *mm;
        struct mem_cgroup *from;
        struct mem_cgroup *to;
        unsigned long flags;
@@ -4667,6 +4668,8 @@ static void __mem_cgroup_clear_mc(void)
 
 static void mem_cgroup_clear_mc(void)
 {
+       struct mm_struct *mm = mc.mm;
+
        /*
         * we must clear moving_task before waking up waiters at the end of
         * task migration.
@@ -4676,7 +4679,10 @@ static void mem_cgroup_clear_mc(void)
        spin_lock(&mc.lock);
        mc.from = NULL;
        mc.to = NULL;
+       mc.mm = NULL;
        spin_unlock(&mc.lock);
+
+       mmput(mm);
 }
 
 static int mem_cgroup_can_attach(struct cgroup_taskset *tset)
@@ -4733,6 +4739,7 @@ static int mem_cgroup_can_attach(struct cgroup_taskset *tset)
                VM_BUG_ON(mc.moved_swap);
 
                spin_lock(&mc.lock);
+               mc.mm = mm;
                mc.from = from;
                mc.to = memcg;
                mc.flags = move_flags;
@@ -4742,8 +4749,9 @@ static int mem_cgroup_can_attach(struct cgroup_taskset *tset)
                ret = mem_cgroup_precharge_mc(mm);
                if (ret)
                        mem_cgroup_clear_mc();
+       } else {
+               mmput(mm);
        }
-       mmput(mm);
        return ret;
 }
 
@@ -4852,11 +4860,11 @@ put:                    /* get_mctgt_type() gets the page */
        return ret;
 }
 
-static void mem_cgroup_move_charge(struct mm_struct *mm)
+static void mem_cgroup_move_charge(void)
 {
        struct mm_walk mem_cgroup_move_charge_walk = {
                .pmd_entry = mem_cgroup_move_charge_pte_range,
-               .mm = mm,
+               .mm = mc.mm,
        };
 
        lru_add_drain_all();
@@ -4868,7 +4876,7 @@ static void mem_cgroup_move_charge(struct mm_struct *mm)
        atomic_inc(&mc.from->moving_account);
        synchronize_rcu();
 retry:
-       if (unlikely(!down_read_trylock(&mm->mmap_sem))) {
+       if (unlikely(!down_read_trylock(&mc.mm->mmap_sem))) {
                /*
                 * Someone who is holding the mmap_sem might be waiting in
                 * the waitq. So we cancel all extra charges, wake up all waiters,
@@ -4885,23 +4893,16 @@ retry:
         * additional charge, the page walk just aborts.
         */
        walk_page_range(0, ~0UL, &mem_cgroup_move_charge_walk);
-       up_read(&mm->mmap_sem);
+       up_read(&mc.mm->mmap_sem);
        atomic_dec(&mc.from->moving_account);
 }
 
-static void mem_cgroup_move_task(struct cgroup_taskset *tset)
+static void mem_cgroup_move_task(void)
 {
-       struct cgroup_subsys_state *css;
-       struct task_struct *p = cgroup_taskset_first(tset, &css);
-       struct mm_struct *mm = get_task_mm(p);
-
-       if (mm) {
-               if (mc.to)
-                       mem_cgroup_move_charge(mm);
-               mmput(mm);
-       }
-       if (mc.to)
+       if (mc.to) {
+               mem_cgroup_move_charge();
                mem_cgroup_clear_mc();
+       }
 }
 #else  /* !CONFIG_MMU */
 static int mem_cgroup_can_attach(struct cgroup_taskset *tset)
@@ -4911,7 +4912,7 @@ static int mem_cgroup_can_attach(struct cgroup_taskset *tset)
 static void mem_cgroup_cancel_attach(struct cgroup_taskset *tset)
 {
 }
-static void mem_cgroup_move_task(struct cgroup_taskset *tset)
+static void mem_cgroup_move_task(void)
 {
 }
 #endif
@@ -5195,7 +5196,7 @@ struct cgroup_subsys memory_cgrp_subsys = {
        .css_reset = mem_cgroup_css_reset,
        .can_attach = mem_cgroup_can_attach,
        .cancel_attach = mem_cgroup_cancel_attach,
-       .attach = mem_cgroup_move_task,
+       .post_attach = mem_cgroup_move_task,
        .bind = mem_cgroup_bind,
        .dfl_cftypes = memory_files,
        .legacy_cftypes = mem_cgroup_legacy_files,
index 78f5f26..ca5acee 100644 (file)
@@ -888,7 +888,15 @@ int get_hwpoison_page(struct page *page)
                }
        }
 
-       return get_page_unless_zero(head);
+       if (get_page_unless_zero(head)) {
+               if (head == compound_head(page))
+                       return 1;
+
+               pr_info("MCE: %#lx cannot catch tail\n", page_to_pfn(page));
+               put_page(head);
+       }
+
+       return 0;
 }
 EXPORT_SYMBOL_GPL(get_hwpoison_page);
 
index 93897f2..305537f 100644 (file)
@@ -789,6 +789,46 @@ out:
        return pfn_to_page(pfn);
 }
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
+                               pmd_t pmd)
+{
+       unsigned long pfn = pmd_pfn(pmd);
+
+       /*
+        * There is no pmd_special() but there may be special pmds, e.g.
+        * in a direct-access (dax) mapping, so let's just replicate the
+        * !HAVE_PTE_SPECIAL case from vm_normal_page() here.
+        */
+       if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
+               if (vma->vm_flags & VM_MIXEDMAP) {
+                       if (!pfn_valid(pfn))
+                               return NULL;
+                       goto out;
+               } else {
+                       unsigned long off;
+                       off = (addr - vma->vm_start) >> PAGE_SHIFT;
+                       if (pfn == vma->vm_pgoff + off)
+                               return NULL;
+                       if (!is_cow_mapping(vma->vm_flags))
+                               return NULL;
+               }
+       }
+
+       if (is_zero_pfn(pfn))
+               return NULL;
+       if (unlikely(pfn > highest_memmap_pfn))
+               return NULL;
+
+       /*
+        * NOTE! We still have PageReserved() pages in the page tables.
+        * eg. VDSO mappings can cause them to exist.
+        */
+out:
+       return pfn_to_page(pfn);
+}
+#endif
+
 /*
  * copy one vm_area from one task to the other. Assumes the page tables
  * already present in the new task to be cleared in the whole range
index 6c822a7..f9dfb18 100644 (file)
@@ -975,7 +975,13 @@ out:
                dec_zone_page_state(page, NR_ISOLATED_ANON +
                                page_is_file_cache(page));
                /* Soft-offlined page shouldn't go through lru cache list */
-               if (reason == MR_MEMORY_FAILURE) {
+               if (reason == MR_MEMORY_FAILURE && rc == MIGRATEPAGE_SUCCESS) {
+                       /*
+                        * With this release, we free the successfully migrated
+                        * page and intentionally set PG_HWPoison on the just
+                        * freed page. Although it's rather weird, it's how the
+                        * HWPoison flag works at the moment.
+                        */
                        put_page(page);
                        if (!test_set_page_hwpoison(page))
                                num_poisoned_pages_inc();
index cd92e3d..985f23c 100644 (file)
@@ -353,7 +353,11 @@ int swap_readpage(struct page *page)
 
        ret = bdev_read_page(sis->bdev, swap_page_sector(page), page);
        if (!ret) {
-               swap_slot_free_notify(page);
+               if (trylock_page(page)) {
+                       swap_slot_free_notify(page);
+                       unlock_page(page);
+               }
+
                count_vm_event(PSWPIN);
                return 0;
        }
index a0bc206..03aacbc 100644 (file)
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -728,6 +728,11 @@ void release_pages(struct page **pages, int nr, bool cold)
                        zone = NULL;
                }
 
+               if (is_huge_zero_page(page)) {
+                       put_huge_zero_page();
+                       continue;
+               }
+
                page = compound_head(page);
                if (!put_page_testzero(page))
                        continue;
index b934223..142cb61 100644 (file)
@@ -2553,7 +2553,7 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
                sc->gfp_mask |= __GFP_HIGHMEM;
 
        for_each_zone_zonelist_nodemask(zone, z, zonelist,
-                                       requested_highidx, sc->nodemask) {
+                                       gfp_zone(sc->gfp_mask), sc->nodemask) {
                enum zone_type classzone_idx;
 
                if (!populated_zone(zone))
@@ -3318,6 +3318,20 @@ static void kswapd_try_to_sleep(pg_data_t *pgdat, int order,
        /* Try to sleep for a short interval */
        if (prepare_kswapd_sleep(pgdat, order, remaining,
                                                balanced_classzone_idx)) {
+               /*
+                * Compaction records what page blocks it recently failed to
+                * isolate pages from and skips them in the future scanning.
+                * When kswapd is going to sleep, it is reasonable to assume
+                * that pages and compaction may succeed so reset the cache.
+                */
+               reset_isolation_suitable(pgdat);
+
+               /*
+                * We have freed the memory, now we should compact it to make
+                * allocation of the requested order possible.
+                */
+               wakeup_kcompactd(pgdat, order, classzone_idx);
+
                remaining = schedule_timeout(HZ/10);
                finish_wait(&pgdat->kswapd_wait, &wait);
                prepare_to_wait(&pgdat->kswapd_wait, &wait, TASK_INTERRUPTIBLE);
@@ -3341,20 +3355,6 @@ static void kswapd_try_to_sleep(pg_data_t *pgdat, int order,
                 */
                set_pgdat_percpu_threshold(pgdat, calculate_normal_threshold);
 
-               /*
-                * Compaction records what page blocks it recently failed to
-                * isolate pages from and skips them in the future scanning.
-                * When kswapd is going to sleep, it is reasonable to assume
-                * that pages and compaction may succeed so reset the cache.
-                */
-               reset_isolation_suitable(pgdat);
-
-               /*
-                * We have freed the memory, now we should compact it to make
-                * allocation of the requested order possible.
-                */
-               wakeup_kcompactd(pgdat, order, classzone_idx);
-
                if (!kthread_should_stop())
                        schedule();
 
index d16bb4b..97ecc27 100644 (file)
@@ -3,6 +3,15 @@
 
 #include <linux/netdevice.h>
 
+#include <net/6lowpan.h>
+
+/* caller needs to be sure that dev->type is ARPHRD_6LOWPAN */
+static inline bool lowpan_is_ll(const struct net_device *dev,
+                               enum lowpan_lltypes lltype)
+{
+       return lowpan_dev(dev)->lltype == lltype;
+}
+
 #ifdef CONFIG_6LOWPAN_DEBUGFS
 int lowpan_dev_debugfs_init(struct net_device *dev);
 void lowpan_dev_debugfs_exit(struct net_device *dev);
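
The new lowpan_is_ll() helper above lets callers branch on the link-layer type, e.g. (sketch; handler names hypothetical):

	if (lowpan_is_ll(dev, LOWPAN_LLTYPE_IEEE802154))
		ret = handle_802154(skb, dev);   /* 802.15.4-specific path */
	else
		ret = handle_generic(skb, dev);
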
index 34e44c0..7a240b3 100644 (file)
@@ -27,11 +27,11 @@ int lowpan_register_netdevice(struct net_device *dev,
        dev->mtu = IPV6_MIN_MTU;
        dev->priv_flags |= IFF_NO_QUEUE;
 
-       lowpan_priv(dev)->lltype = lltype;
+       lowpan_dev(dev)->lltype = lltype;
 
-       spin_lock_init(&lowpan_priv(dev)->ctx.lock);
+       spin_lock_init(&lowpan_dev(dev)->ctx.lock);
        for (i = 0; i < LOWPAN_IPHC_CTX_TABLE_SIZE; i++)
-               lowpan_priv(dev)->ctx.table[i].id = i;
+               lowpan_dev(dev)->ctx.table[i].id = i;
 
        ret = register_netdevice(dev);
        if (ret < 0)
@@ -85,7 +85,7 @@ static int lowpan_event(struct notifier_block *unused,
        case NETDEV_DOWN:
                for (i = 0; i < LOWPAN_IPHC_CTX_TABLE_SIZE; i++)
                        clear_bit(LOWPAN_IPHC_CTX_FLAG_ACTIVE,
-                                 &lowpan_priv(dev)->ctx.table[i].flags);
+                                 &lowpan_dev(dev)->ctx.table[i].flags);
                break;
        default:
                return NOTIFY_DONE;
index 0793a81..acbaa3d 100644 (file)
@@ -172,7 +172,7 @@ static const struct file_operations lowpan_ctx_pfx_fops = {
 static int lowpan_dev_debugfs_ctx_init(struct net_device *dev,
                                       struct dentry *ctx, u8 id)
 {
-       struct lowpan_priv *lpriv = lowpan_priv(dev);
+       struct lowpan_dev *ldev = lowpan_dev(dev);
        struct dentry *dentry, *root;
        char buf[32];
 
@@ -185,25 +185,25 @@ static int lowpan_dev_debugfs_ctx_init(struct net_device *dev,
                return -EINVAL;
 
        dentry = debugfs_create_file("active", 0644, root,
-                                    &lpriv->ctx.table[id],
+                                    &ldev->ctx.table[id],
                                     &lowpan_ctx_flag_active_fops);
        if (!dentry)
                return -EINVAL;
 
        dentry = debugfs_create_file("compression", 0644, root,
-                                    &lpriv->ctx.table[id],
+                                    &ldev->ctx.table[id],
                                     &lowpan_ctx_flag_c_fops);
        if (!dentry)
                return -EINVAL;
 
        dentry = debugfs_create_file("prefix", 0644, root,
-                                    &lpriv->ctx.table[id],
+                                    &ldev->ctx.table[id],
                                     &lowpan_ctx_pfx_fops);
        if (!dentry)
                return -EINVAL;
 
        dentry = debugfs_create_file("prefix_len", 0644, root,
-                                    &lpriv->ctx.table[id],
+                                    &ldev->ctx.table[id],
                                     &lowpan_ctx_plen_fops);
        if (!dentry)
                return -EINVAL;
@@ -247,21 +247,21 @@ static const struct file_operations lowpan_context_fops = {
 
 int lowpan_dev_debugfs_init(struct net_device *dev)
 {
-       struct lowpan_priv *lpriv = lowpan_priv(dev);
+       struct lowpan_dev *ldev = lowpan_dev(dev);
        struct dentry *contexts, *dentry;
        int ret, i;
 
        /* creating the root */
-       lpriv->iface_debugfs = debugfs_create_dir(dev->name, lowpan_debugfs);
-       if (!lpriv->iface_debugfs)
+       ldev->iface_debugfs = debugfs_create_dir(dev->name, lowpan_debugfs);
+       if (!ldev->iface_debugfs)
                goto fail;
 
-       contexts = debugfs_create_dir("contexts", lpriv->iface_debugfs);
+       contexts = debugfs_create_dir("contexts", ldev->iface_debugfs);
        if (!contexts)
                goto remove_root;
 
        dentry = debugfs_create_file("show", 0644, contexts,
-                                    &lowpan_priv(dev)->ctx,
+                                    &lowpan_dev(dev)->ctx,
                                     &lowpan_context_fops);
        if (!dentry)
                goto remove_root;
@@ -282,7 +282,7 @@ fail:
 
 void lowpan_dev_debugfs_exit(struct net_device *dev)
 {
-       debugfs_remove_recursive(lowpan_priv(dev)->iface_debugfs);
+       debugfs_remove_recursive(lowpan_dev(dev)->iface_debugfs);
 }
 
 int __init lowpan_debugfs_init(void)
index 68c80f3..8501dd5 100644 (file)
@@ -53,9 +53,6 @@
 #include <net/6lowpan.h>
 #include <net/ipv6.h>
 
-/* special link-layer handling */
-#include <net/mac802154.h>
-
 #include "6lowpan_i.h"
 #include "nhc.h"
 
 #define LOWPAN_IPHC_CID_DCI(cid)       (cid & 0x0f)
 #define LOWPAN_IPHC_CID_SCI(cid)       ((cid & 0xf0) >> 4)
 
-static inline void iphc_uncompress_eui64_lladdr(struct in6_addr *ipaddr,
-                                               const void *lladdr)
-{
-       /* fe:80::XXXX:XXXX:XXXX:XXXX
-        *        \_________________/
-        *              hwaddr
-        */
-       ipaddr->s6_addr[0] = 0xFE;
-       ipaddr->s6_addr[1] = 0x80;
-       memcpy(&ipaddr->s6_addr[8], lladdr, EUI64_ADDR_LEN);
-       /* second bit-flip (Universe/Local)
-        * is done according RFC2464
-        */
-       ipaddr->s6_addr[8] ^= 0x02;
-}
-
-static inline void iphc_uncompress_802154_lladdr(struct in6_addr *ipaddr,
-                                                const void *lladdr)
+static inline void
+lowpan_iphc_uncompress_802154_lladdr(struct in6_addr *ipaddr,
+                                    const void *lladdr)
 {
        const struct ieee802154_addr *addr = lladdr;
-       u8 eui64[EUI64_ADDR_LEN] = { };
+       u8 eui64[EUI64_ADDR_LEN];
 
        switch (addr->mode) {
        case IEEE802154_ADDR_LONG:
                ieee802154_le64_to_be64(eui64, &addr->extended_addr);
-               iphc_uncompress_eui64_lladdr(ipaddr, eui64);
+               lowpan_iphc_uncompress_eui64_lladdr(ipaddr, eui64);
                break;
        case IEEE802154_ADDR_SHORT:
                /* fe:80::ff:fe00:XXXX
@@ -207,7 +189,7 @@ static inline void iphc_uncompress_802154_lladdr(struct in6_addr *ipaddr,
 static struct lowpan_iphc_ctx *
 lowpan_iphc_ctx_get_by_id(const struct net_device *dev, u8 id)
 {
-       struct lowpan_iphc_ctx *ret = &lowpan_priv(dev)->ctx.table[id];
+       struct lowpan_iphc_ctx *ret = &lowpan_dev(dev)->ctx.table[id];
 
        if (!lowpan_iphc_ctx_is_active(ret))
                return NULL;
@@ -219,7 +201,7 @@ static struct lowpan_iphc_ctx *
 lowpan_iphc_ctx_get_by_addr(const struct net_device *dev,
                            const struct in6_addr *addr)
 {
-       struct lowpan_iphc_ctx *table = lowpan_priv(dev)->ctx.table;
+       struct lowpan_iphc_ctx *table = lowpan_dev(dev)->ctx.table;
        struct lowpan_iphc_ctx *ret = NULL;
        struct in6_addr addr_pfx;
        u8 addr_plen;
@@ -263,7 +245,7 @@ static struct lowpan_iphc_ctx *
 lowpan_iphc_ctx_get_by_mcast_addr(const struct net_device *dev,
                                  const struct in6_addr *addr)
 {
-       struct lowpan_iphc_ctx *table = lowpan_priv(dev)->ctx.table;
+       struct lowpan_iphc_ctx *table = lowpan_dev(dev)->ctx.table;
        struct lowpan_iphc_ctx *ret = NULL;
        struct in6_addr addr_mcast, network_pfx = {};
        int i;
@@ -301,9 +283,10 @@ lowpan_iphc_ctx_get_by_mcast_addr(const struct net_device *dev,
  *
  * address_mode is the masked value for sam or dam value
  */
-static int uncompress_addr(struct sk_buff *skb, const struct net_device *dev,
-                          struct in6_addr *ipaddr, u8 address_mode,
-                          const void *lladdr)
+static int lowpan_iphc_uncompress_addr(struct sk_buff *skb,
+                                      const struct net_device *dev,
+                                      struct in6_addr *ipaddr,
+                                      u8 address_mode, const void *lladdr)
 {
        bool fail;
 
@@ -332,12 +315,12 @@ static int uncompress_addr(struct sk_buff *skb, const struct net_device *dev,
        case LOWPAN_IPHC_SAM_11:
        case LOWPAN_IPHC_DAM_11:
                fail = false;
-               switch (lowpan_priv(dev)->lltype) {
+               switch (lowpan_dev(dev)->lltype) {
                case LOWPAN_LLTYPE_IEEE802154:
-                       iphc_uncompress_802154_lladdr(ipaddr, lladdr);
+                       lowpan_iphc_uncompress_802154_lladdr(ipaddr, lladdr);
                        break;
                default:
-                       iphc_uncompress_eui64_lladdr(ipaddr, lladdr);
+                       lowpan_iphc_uncompress_eui64_lladdr(ipaddr, lladdr);
                        break;
                }
                break;
@@ -360,11 +343,11 @@ static int uncompress_addr(struct sk_buff *skb, const struct net_device *dev,
 /* Uncompress address function for source context
  * based address(non-multicast).
  */
-static int uncompress_ctx_addr(struct sk_buff *skb,
-                              const struct net_device *dev,
-                              const struct lowpan_iphc_ctx *ctx,
-                              struct in6_addr *ipaddr, u8 address_mode,
-                              const void *lladdr)
+static int lowpan_iphc_uncompress_ctx_addr(struct sk_buff *skb,
+                                          const struct net_device *dev,
+                                          const struct lowpan_iphc_ctx *ctx,
+                                          struct in6_addr *ipaddr,
+                                          u8 address_mode, const void *lladdr)
 {
        bool fail;
 
@@ -393,12 +376,12 @@ static int uncompress_ctx_addr(struct sk_buff *skb,
        case LOWPAN_IPHC_SAM_11:
        case LOWPAN_IPHC_DAM_11:
                fail = false;
-               switch (lowpan_priv(dev)->lltype) {
+               switch (lowpan_dev(dev)->lltype) {
                case LOWPAN_LLTYPE_IEEE802154:
-                       iphc_uncompress_802154_lladdr(ipaddr, lladdr);
+                       lowpan_iphc_uncompress_802154_lladdr(ipaddr, lladdr);
                        break;
                default:
-                       iphc_uncompress_eui64_lladdr(ipaddr, lladdr);
+                       lowpan_iphc_uncompress_eui64_lladdr(ipaddr, lladdr);
                        break;
                }
                ipv6_addr_prefix_copy(ipaddr, &ctx->pfx, ctx->plen);
@@ -657,22 +640,24 @@ int lowpan_header_decompress(struct sk_buff *skb, const struct net_device *dev,
        }
 
        if (iphc1 & LOWPAN_IPHC_SAC) {
-               spin_lock_bh(&lowpan_priv(dev)->ctx.lock);
+               spin_lock_bh(&lowpan_dev(dev)->ctx.lock);
                ci = lowpan_iphc_ctx_get_by_id(dev, LOWPAN_IPHC_CID_SCI(cid));
                if (!ci) {
-                       spin_unlock_bh(&lowpan_priv(dev)->ctx.lock);
+                       spin_unlock_bh(&lowpan_dev(dev)->ctx.lock);
                        return -EINVAL;
                }
 
                pr_debug("SAC bit is set. Handle context based source address.\n");
-               err = uncompress_ctx_addr(skb, dev, ci, &hdr.saddr,
-                                         iphc1 & LOWPAN_IPHC_SAM_MASK, saddr);
-               spin_unlock_bh(&lowpan_priv(dev)->ctx.lock);
+               err = lowpan_iphc_uncompress_ctx_addr(skb, dev, ci, &hdr.saddr,
+                                                     iphc1 & LOWPAN_IPHC_SAM_MASK,
+                                                     saddr);
+               spin_unlock_bh(&lowpan_dev(dev)->ctx.lock);
        } else {
                /* Source address uncompression */
                pr_debug("source address stateless compression\n");
-               err = uncompress_addr(skb, dev, &hdr.saddr,
-                                     iphc1 & LOWPAN_IPHC_SAM_MASK, saddr);
+               err = lowpan_iphc_uncompress_addr(skb, dev, &hdr.saddr,
+                                                 iphc1 & LOWPAN_IPHC_SAM_MASK,
+                                                 saddr);
        }
 
        /* Check on error of previous branch */
@@ -681,10 +666,10 @@ int lowpan_header_decompress(struct sk_buff *skb, const struct net_device *dev,
 
        switch (iphc1 & (LOWPAN_IPHC_M | LOWPAN_IPHC_DAC)) {
        case LOWPAN_IPHC_M | LOWPAN_IPHC_DAC:
-               spin_lock_bh(&lowpan_priv(dev)->ctx.lock);
+               spin_lock_bh(&lowpan_dev(dev)->ctx.lock);
                ci = lowpan_iphc_ctx_get_by_id(dev, LOWPAN_IPHC_CID_DCI(cid));
                if (!ci) {
-                       spin_unlock_bh(&lowpan_priv(dev)->ctx.lock);
+                       spin_unlock_bh(&lowpan_dev(dev)->ctx.lock);
                        return -EINVAL;
                }
 
@@ -693,7 +678,7 @@ int lowpan_header_decompress(struct sk_buff *skb, const struct net_device *dev,
                err = lowpan_uncompress_multicast_ctx_daddr(skb, ci,
                                                            &hdr.daddr,
                                                            iphc1 & LOWPAN_IPHC_DAM_MASK);
-               spin_unlock_bh(&lowpan_priv(dev)->ctx.lock);
+               spin_unlock_bh(&lowpan_dev(dev)->ctx.lock);
                break;
        case LOWPAN_IPHC_M:
                /* multicast */
@@ -701,22 +686,24 @@ int lowpan_header_decompress(struct sk_buff *skb, const struct net_device *dev,
                                                        iphc1 & LOWPAN_IPHC_DAM_MASK);
                break;
        case LOWPAN_IPHC_DAC:
-               spin_lock_bh(&lowpan_priv(dev)->ctx.lock);
+               spin_lock_bh(&lowpan_dev(dev)->ctx.lock);
                ci = lowpan_iphc_ctx_get_by_id(dev, LOWPAN_IPHC_CID_DCI(cid));
                if (!ci) {
-                       spin_unlock_bh(&lowpan_priv(dev)->ctx.lock);
+                       spin_unlock_bh(&lowpan_dev(dev)->ctx.lock);
                        return -EINVAL;
                }
 
                /* Destination address context based uncompression */
                pr_debug("DAC bit is set. Handle context based destination address.\n");
-               err = uncompress_ctx_addr(skb, dev, ci, &hdr.daddr,
-                                         iphc1 & LOWPAN_IPHC_DAM_MASK, daddr);
-               spin_unlock_bh(&lowpan_priv(dev)->ctx.lock);
+               err = lowpan_iphc_uncompress_ctx_addr(skb, dev, ci, &hdr.daddr,
+                                                     iphc1 & LOWPAN_IPHC_DAM_MASK,
+                                                     daddr);
+               spin_unlock_bh(&lowpan_dev(dev)->ctx.lock);
                break;
        default:
-               err = uncompress_addr(skb, dev, &hdr.daddr,
-                                     iphc1 & LOWPAN_IPHC_DAM_MASK, daddr);
+               err = lowpan_iphc_uncompress_addr(skb, dev, &hdr.daddr,
+                                                 iphc1 & LOWPAN_IPHC_DAM_MASK,
+                                                 daddr);
                pr_debug("dest: stateless compression mode %d dest %pI6c\n",
                         iphc1 & LOWPAN_IPHC_DAM_MASK, &hdr.daddr);
                break;
@@ -736,7 +723,7 @@ int lowpan_header_decompress(struct sk_buff *skb, const struct net_device *dev,
                        return err;
        }
 
-       switch (lowpan_priv(dev)->lltype) {
+       switch (lowpan_dev(dev)->lltype) {
        case LOWPAN_LLTYPE_IEEE802154:
                if (lowpan_802154_cb(skb)->d_size)
                        hdr.payload_len = htons(lowpan_802154_cb(skb)->d_size -
@@ -1033,7 +1020,7 @@ int lowpan_header_compress(struct sk_buff *skb, const struct net_device *dev,
                       skb->data, skb->len);
 
        ipv6_daddr_type = ipv6_addr_type(&hdr->daddr);
-       spin_lock_bh(&lowpan_priv(dev)->ctx.lock);
+       spin_lock_bh(&lowpan_dev(dev)->ctx.lock);
        if (ipv6_daddr_type & IPV6_ADDR_MULTICAST)
                dci = lowpan_iphc_ctx_get_by_mcast_addr(dev, &hdr->daddr);
        else
@@ -1042,15 +1029,15 @@ int lowpan_header_compress(struct sk_buff *skb, const struct net_device *dev,
                memcpy(&dci_entry, dci, sizeof(*dci));
                cid |= dci->id;
        }
-       spin_unlock_bh(&lowpan_priv(dev)->ctx.lock);
+       spin_unlock_bh(&lowpan_dev(dev)->ctx.lock);
 
-       spin_lock_bh(&lowpan_priv(dev)->ctx.lock);
+       spin_lock_bh(&lowpan_dev(dev)->ctx.lock);
        sci = lowpan_iphc_ctx_get_by_addr(dev, &hdr->saddr);
        if (sci) {
                memcpy(&sci_entry, sci, sizeof(*sci));
                cid |= (sci->id << 4);
        }
-       spin_unlock_bh(&lowpan_priv(dev)->ctx.lock);
+       spin_unlock_bh(&lowpan_dev(dev)->ctx.lock);
 
        /* if cid is zero it will be compressed */
        if (cid) {
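
The compression path above takes the context lock twice, once for the destination and once for the source context, and folds both ids into a single CID byte: SCI in the high nibble (`sci->id << 4`), DCI in the low nibble, per RFC 6282. A minimal standalone sketch of that packing and of the unpacking that LOWPAN_IPHC_CID_SCI()/LOWPAN_IPHC_CID_DCI() have to perform on the decompression side (illustrative helper names, not the 6lowpan API):

    #include <stdint.h>
    #include <stdio.h>

    /* RFC 6282 CID extension byte: SCI in bits 7-4, DCI in bits 3-0 */
    static uint8_t cid_pack(uint8_t sci, uint8_t dci)
    {
            return (uint8_t)((sci << 4) | (dci & 0x0f));
    }

    static uint8_t cid_sci(uint8_t cid) { return cid >> 4; }
    static uint8_t cid_dci(uint8_t cid) { return cid & 0x0f; }

    int main(void)
    {
            uint8_t cid = cid_pack(2, 7);

            printf("cid=0x%02x sci=%u dci=%u\n", cid, cid_sci(cid), cid_dci(cid));
            return 0;
    }
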
index 69537a2..225d919 100644
@@ -91,7 +91,7 @@ static int udp_uncompress(struct sk_buff *skb, size_t needed)
         * here, we obtain the hint from the remaining size of the
         * frame
         */
-       switch (lowpan_priv(skb->dev)->lltype) {
+       switch (lowpan_dev(skb->dev)->lltype) {
        case LOWPAN_LLTYPE_IEEE802154:
                if (lowpan_802154_cb(skb)->d_size)
                        uh.len = htons(lowpan_802154_cb(skb)->d_size -
index a8934d8..b841c42 100644
@@ -236,6 +236,7 @@ source "net/mpls/Kconfig"
 source "net/hsr/Kconfig"
 source "net/switchdev/Kconfig"
 source "net/l3mdev/Kconfig"
+source "net/qrtr/Kconfig"
 
 config RPS
        bool
index 81d1411..bdd1455 100644
@@ -78,3 +78,4 @@ endif
 ifneq ($(CONFIG_NET_L3_MASTER_DEV),)
 obj-y                          += l3mdev/
 endif
+obj-$(CONFIG_QRTR)             += qrtr/
index cd3b379..e574a7e 100644
@@ -194,7 +194,7 @@ lec_send(struct atm_vcc *vcc, struct sk_buff *skb)
 static void lec_tx_timeout(struct net_device *dev)
 {
        pr_info("%s\n", dev->name);
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        netif_wake_queue(dev);
 }
 
@@ -324,7 +324,7 @@ static netdev_tx_t lec_start_xmit(struct sk_buff *skb,
 out:
        if (entry)
                lec_arp_put(entry);
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        return NETDEV_TX_OK;
 }
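
The `dev->trans_start = jiffies` stores in lec_tx_timeout() and lec_start_xmit() are converted to the new netif_trans_update() helper, a change repeated across this series; routing the store through one accessor gives the core a single place to relocate or instrument the watchdog timestamp. A minimal userspace sketch of the accessor pattern (the struct and jiffies here are stand-ins, not the kernel's):

    #include <stdio.h>

    static unsigned long jiffies = 1000;    /* stand-in for the kernel tick */

    struct net_device {
            unsigned long trans_start;      /* watchdog timestamp */
    };

    /* one helper instead of open-coded stores: the field can later move
     * or gain checks without touching every driver */
    static inline void netif_trans_update(struct net_device *dev)
    {
            dev->trans_start = jiffies;
    }

    int main(void)
    {
            struct net_device dev = { 0 };

            netif_trans_update(&dev);
            printf("trans_start=%lu\n", dev.trans_start);
            return 0;
    }
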
 
index cb2d1b9..8c1710b 100644
@@ -32,6 +32,7 @@
 #include <linux/jiffies.h>
 #include <linux/list.h>
 #include <linux/kref.h>
+#include <linux/lockdep.h>
 #include <linux/netdevice.h>
 #include <linux/pkt_sched.h>
 #include <linux/printk.h>
@@ -175,71 +176,107 @@ unlock:
 }
 
 /**
- * batadv_iv_ogm_orig_del_if - change the private structures of the orig_node to
- *  exclude the removed interface
+ * batadv_iv_ogm_drop_bcast_own_entry - drop section of bcast_own
  * @orig_node: the orig_node that has to be changed
  * @max_if_num: the current amount of interfaces
  * @del_if_num: the index of the interface being removed
- *
- * Return: 0 on success, a negative error code otherwise.
  */
-static int batadv_iv_ogm_orig_del_if(struct batadv_orig_node *orig_node,
-                                    int max_if_num, int del_if_num)
+static void
+batadv_iv_ogm_drop_bcast_own_entry(struct batadv_orig_node *orig_node,
+                                  int max_if_num, int del_if_num)
 {
-       int ret = -ENOMEM;
-       size_t chunk_size, if_offset;
-       void *data_ptr = NULL;
-
-       spin_lock_bh(&orig_node->bat_iv.ogm_cnt_lock);
+       size_t chunk_size;
+       size_t if_offset;
+       void *data_ptr;
 
-       /* last interface was removed */
-       if (max_if_num == 0)
-               goto free_bcast_own;
+       lockdep_assert_held(&orig_node->bat_iv.ogm_cnt_lock);
 
        chunk_size = sizeof(unsigned long) * BATADV_NUM_WORDS;
        data_ptr = kmalloc_array(max_if_num, chunk_size, GFP_ATOMIC);
        if (!data_ptr)
-               goto unlock;
+               /* use old buffer when new one could not be allocated */
+               data_ptr = orig_node->bat_iv.bcast_own;
 
        /* copy first part */
-       memcpy(data_ptr, orig_node->bat_iv.bcast_own, del_if_num * chunk_size);
+       memmove(data_ptr, orig_node->bat_iv.bcast_own, del_if_num * chunk_size);
 
        /* copy second part */
        if_offset = (del_if_num + 1) * chunk_size;
-       memcpy((char *)data_ptr + del_if_num * chunk_size,
-              (uint8_t *)orig_node->bat_iv.bcast_own + if_offset,
-              (max_if_num - del_if_num) * chunk_size);
+       memmove((char *)data_ptr + del_if_num * chunk_size,
+               (uint8_t *)orig_node->bat_iv.bcast_own + if_offset,
+               (max_if_num - del_if_num) * chunk_size);
 
-free_bcast_own:
-       kfree(orig_node->bat_iv.bcast_own);
-       orig_node->bat_iv.bcast_own = data_ptr;
+       /* bcast_own was shrunk down into the new buffer; free the old one */
+       if (orig_node->bat_iv.bcast_own != data_ptr) {
+               kfree(orig_node->bat_iv.bcast_own);
+               orig_node->bat_iv.bcast_own = data_ptr;
+       }
+}
+
+/**
+ * batadv_iv_ogm_drop_bcast_own_sum_entry - drop section of bcast_own_sum
+ * @orig_node: the orig_node that has to be changed
+ * @max_if_num: the current amount of interfaces
+ * @del_if_num: the index of the interface being removed
+ */
+static void
+batadv_iv_ogm_drop_bcast_own_sum_entry(struct batadv_orig_node *orig_node,
+                                      int max_if_num, int del_if_num)
+{
+       size_t if_offset;
+       void *data_ptr;
 
-       if (max_if_num == 0)
-               goto free_own_sum;
+       lockdep_assert_held(&orig_node->bat_iv.ogm_cnt_lock);
 
        data_ptr = kmalloc_array(max_if_num, sizeof(u8), GFP_ATOMIC);
-       if (!data_ptr) {
-               kfree(orig_node->bat_iv.bcast_own);
-               goto unlock;
-       }
+       if (!data_ptr)
+               /* use old buffer when new one could not be allocated */
+               data_ptr = orig_node->bat_iv.bcast_own_sum;
 
-       memcpy(data_ptr, orig_node->bat_iv.bcast_own_sum,
-              del_if_num * sizeof(u8));
+       memmove(data_ptr, orig_node->bat_iv.bcast_own_sum,
+               del_if_num * sizeof(u8));
 
        if_offset = (del_if_num + 1) * sizeof(u8);
-       memcpy((char *)data_ptr + del_if_num * sizeof(u8),
-              orig_node->bat_iv.bcast_own_sum + if_offset,
-              (max_if_num - del_if_num) * sizeof(u8));
+       memmove((char *)data_ptr + del_if_num * sizeof(u8),
+               orig_node->bat_iv.bcast_own_sum + if_offset,
+               (max_if_num - del_if_num) * sizeof(u8));
+
+       /* bcast_own_sum was shrunk down into the new buffer; free the old one */
+       if (orig_node->bat_iv.bcast_own_sum != data_ptr) {
+               kfree(orig_node->bat_iv.bcast_own_sum);
+               orig_node->bat_iv.bcast_own_sum = data_ptr;
+       }
+}
 
-free_own_sum:
-       kfree(orig_node->bat_iv.bcast_own_sum);
-       orig_node->bat_iv.bcast_own_sum = data_ptr;
+/**
+ * batadv_iv_ogm_orig_del_if - change the private structures of the orig_node to
+ *  exclude the removed interface
+ * @orig_node: the orig_node that has to be changed
+ * @max_if_num: the current amount of interfaces
+ * @del_if_num: the index of the interface being removed
+ *
+ * Return: 0 on success, a negative error code otherwise.
+ */
+static int batadv_iv_ogm_orig_del_if(struct batadv_orig_node *orig_node,
+                                    int max_if_num, int del_if_num)
+{
+       spin_lock_bh(&orig_node->bat_iv.ogm_cnt_lock);
+
+       if (max_if_num == 0) {
+               kfree(orig_node->bat_iv.bcast_own);
+               kfree(orig_node->bat_iv.bcast_own_sum);
+               orig_node->bat_iv.bcast_own = NULL;
+               orig_node->bat_iv.bcast_own_sum = NULL;
+       } else {
+               batadv_iv_ogm_drop_bcast_own_entry(orig_node, max_if_num,
+                                                  del_if_num);
+               batadv_iv_ogm_drop_bcast_own_sum_entry(orig_node, max_if_num,
+                                                      del_if_num);
+       }
 
-       ret = 0;
-unlock:
        spin_unlock_bh(&orig_node->bat_iv.ogm_cnt_lock);
 
-       return ret;
+       return 0;
 }
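
The memcpy()-to-memmove() switch in the two helpers above is what makes the new allocation-failure fallback safe: when kmalloc_array() fails, data_ptr aliases the old buffer, so source and destination overlap and memcpy() would be undefined behaviour. A standalone sketch of the same in-place "drop one entry" shrink, with illustrative names rather than the batman-adv code:

    #include <stdio.h>
    #include <string.h>

    /* drop element del from an array of nmemb chunks, in place: the tail
     * slides over the dropped chunk, so src and dst overlap -> memmove() */
    static void drop_entry(unsigned char *buf, size_t chunk, size_t nmemb,
                           size_t del)
    {
            memmove(buf + del * chunk,
                    buf + (del + 1) * chunk,
                    (nmemb - 1 - del) * chunk);
    }

    int main(void)
    {
            int own[5] = { 10, 11, 12, 13, 14 };
            size_t i;

            drop_entry((unsigned char *)own, sizeof(own[0]), 5, 2);
            for (i = 0; i < 4; i++)         /* prints: 10 11 13 14 */
                    printf("%d ", own[i]);
            printf("\n");
            return 0;
    }
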
 
 /**
@@ -1829,9 +1866,8 @@ static void batadv_iv_ogm_orig_print(struct batadv_priv *bat_priv,
        int batman_count = 0;
        u32 i;
 
-       seq_printf(seq, "  %-15s %s (%s/%i) %17s [%10s]: %20s ...\n",
-                  "Originator", "last-seen", "#", BATADV_TQ_MAX_VALUE,
-                  "Nexthop", "outgoingIF", "Potential nexthops");
+       seq_puts(seq,
+                "  Originator      last-seen (#/255)           Nexthop [outgoingIF]:   Potential nexthops ...\n");
 
        for (i = 0; i < hash->size; i++) {
                head = &hash->table[i];
@@ -1911,8 +1947,7 @@ static void batadv_iv_neigh_print(struct batadv_priv *bat_priv,
        struct batadv_hard_iface *hard_iface;
        int batman_count = 0;
 
-       seq_printf(seq, "   %10s        %-13s %s\n",
-                  "IF", "Neighbor", "last-seen");
+       seq_puts(seq, "           IF        Neighbor      last-seen\n");
 
        rcu_read_lock();
        list_for_each_entry_rcu(hard_iface, &batadv_hardif_list, list) {
index 3315b9a..3ff8bd1 100644
 
 #include "bat_v_elp.h"
 #include "bat_v_ogm.h"
+#include "hard-interface.h"
 #include "hash.h"
 #include "originator.h"
 #include "packet.h"
 
+static void batadv_v_iface_activate(struct batadv_hard_iface *hard_iface)
+{
+       /* B.A.T.M.A.N. V does not use any queuing mechanism, therefore it can
+        * set the interface as ACTIVE right away, without any risk of a race
+        * condition
+        */
+       if (hard_iface->if_status == BATADV_IF_TO_BE_ACTIVATED)
+               hard_iface->if_status = BATADV_IF_ACTIVE;
+}
+
 static int batadv_v_iface_enable(struct batadv_hard_iface *hard_iface)
 {
        int ret;
@@ -151,8 +162,8 @@ static void batadv_v_neigh_print(struct batadv_priv *bat_priv,
        struct batadv_hard_iface *hard_iface;
        int batman_count = 0;
 
-       seq_printf(seq, "  %-15s %s (%11s) [%10s]\n", "Neighbor",
-                  "last-seen", "throughput", "IF");
+       seq_puts(seq,
+                "  Neighbor        last-seen ( throughput) [        IF]\n");
 
        rcu_read_lock();
        list_for_each_entry_rcu(hard_iface, &batadv_hardif_list, list) {
@@ -191,9 +202,8 @@ static void batadv_v_orig_print(struct batadv_priv *bat_priv,
        int batman_count = 0;
        u32 i;
 
-       seq_printf(seq, "  %-15s %s (%11s) %17s [%10s]: %20s ...\n",
-                  "Originator", "last-seen", "throughput", "Nexthop",
-                  "outgoingIF", "Potential nexthops");
+       seq_puts(seq,
+                "  Originator      last-seen ( throughput)           Nexthop [outgoingIF]:   Potential nexthops ...\n");
 
        for (i = 0; i < hash->size; i++) {
                head = &hash->table[i];
@@ -274,6 +284,7 @@ static bool batadv_v_neigh_is_sob(struct batadv_neigh_node *neigh1,
 
 static struct batadv_algo_ops batadv_batman_v __read_mostly = {
        .name = "BATMAN_V",
+       .bat_iface_activate = batadv_v_iface_activate,
        .bat_iface_enable = batadv_v_iface_enable,
        .bat_iface_disable = batadv_v_iface_disable,
        .bat_iface_update_mac = batadv_v_iface_update_mac,
index d9bcbe6..4155fa5 100644
@@ -233,73 +233,6 @@ void batadv_v_ogm_primary_iface_set(struct batadv_hard_iface *primary_iface)
        ether_addr_copy(ogm_packet->orig, primary_iface->net_dev->dev_addr);
 }
 
-/**
- * batadv_v_ogm_orig_update - update the originator status based on the received
- *  OGM
- * @bat_priv: the bat priv with all the soft interface information
- * @orig_node: the originator to update
- * @neigh_node: the neighbour the OGM has been received from (to update)
- * @ogm2: the received OGM
- * @if_outgoing: the interface where this OGM is going to be forwarded through
- */
-static void
-batadv_v_ogm_orig_update(struct batadv_priv *bat_priv,
-                        struct batadv_orig_node *orig_node,
-                        struct batadv_neigh_node *neigh_node,
-                        const struct batadv_ogm2_packet *ogm2,
-                        struct batadv_hard_iface *if_outgoing)
-{
-       struct batadv_neigh_ifinfo *router_ifinfo = NULL, *neigh_ifinfo = NULL;
-       struct batadv_neigh_node *router = NULL;
-       s32 neigh_seq_diff;
-       u32 neigh_last_seqno;
-       u32 router_last_seqno;
-       u32 router_throughput, neigh_throughput;
-
-       batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
-                  "Searching and updating originator entry of received packet\n");
-
-       /* if this neighbor already is our next hop there is nothing
-        * to change
-        */
-       router = batadv_orig_router_get(orig_node, if_outgoing);
-       if (router == neigh_node)
-               goto out;
-
-       /* don't consider neighbours with worse throughput.
-        * also switch route if this seqno is BATADV_V_MAX_ORIGDIFF newer than
-        * the last received seqno from our best next hop.
-        */
-       if (router) {
-               router_ifinfo = batadv_neigh_ifinfo_get(router, if_outgoing);
-               neigh_ifinfo = batadv_neigh_ifinfo_get(neigh_node, if_outgoing);
-
-               /* if these are not allocated, something is wrong. */
-               if (!router_ifinfo || !neigh_ifinfo)
-                       goto out;
-
-               neigh_last_seqno = neigh_ifinfo->bat_v.last_seqno;
-               router_last_seqno = router_ifinfo->bat_v.last_seqno;
-               neigh_seq_diff = neigh_last_seqno - router_last_seqno;
-               router_throughput = router_ifinfo->bat_v.throughput;
-               neigh_throughput = neigh_ifinfo->bat_v.throughput;
-
-               if ((neigh_seq_diff < BATADV_OGM_MAX_ORIGDIFF) &&
-                   (router_throughput >= neigh_throughput))
-                       goto out;
-       }
-
-       batadv_update_route(bat_priv, orig_node, if_outgoing, neigh_node);
-
-out:
-       if (router_ifinfo)
-               batadv_neigh_ifinfo_put(router_ifinfo);
-       if (neigh_ifinfo)
-               batadv_neigh_ifinfo_put(neigh_ifinfo);
-       if (router)
-               batadv_neigh_node_put(router);
-}
-
 /**
  * batadv_v_forward_penalty - apply a penalty to the throughput metric forwarded
  *  with B.A.T.M.A.N. V OGMs
@@ -347,10 +280,12 @@ static u32 batadv_v_forward_penalty(struct batadv_priv *bat_priv,
 }
 
 /**
- * batadv_v_ogm_forward - forward an OGM to the given outgoing interface
+ * batadv_v_ogm_forward - check conditions and forward an OGM to the given
+ *  outgoing interface
  * @bat_priv: the bat priv with all the soft interface information
  * @ogm_received: previously received OGM to be forwarded
- * @throughput: throughput to announce, may vary per outgoing interface
+ * @orig_node: the originator which has been updated
+ * @neigh_node: the neigh_node through which the OGM has been received
  * @if_incoming: the interface on which this OGM was received
  * @if_outgoing: the interface to which the OGM has to be forwarded
  *
@@ -359,28 +294,57 @@ static u32 batadv_v_forward_penalty(struct batadv_priv *bat_priv,
  */
 static void batadv_v_ogm_forward(struct batadv_priv *bat_priv,
                                 const struct batadv_ogm2_packet *ogm_received,
-                                u32 throughput,
+                                struct batadv_orig_node *orig_node,
+                                struct batadv_neigh_node *neigh_node,
                                 struct batadv_hard_iface *if_incoming,
                                 struct batadv_hard_iface *if_outgoing)
 {
+       struct batadv_neigh_ifinfo *neigh_ifinfo = NULL;
+       struct batadv_orig_ifinfo *orig_ifinfo = NULL;
+       struct batadv_neigh_node *router = NULL;
        struct batadv_ogm2_packet *ogm_forward;
        unsigned char *skb_buff;
        struct sk_buff *skb;
        size_t packet_len;
        u16 tvlv_len;
 
+       /* only forward for specific interfaces, not for the default one. */
+       if (if_outgoing == BATADV_IF_DEFAULT)
+               goto out;
+
+       orig_ifinfo = batadv_orig_ifinfo_new(orig_node, if_outgoing);
+       if (!orig_ifinfo)
+               goto out;
+
+       /* acquire possibly updated router */
+       router = batadv_orig_router_get(orig_node, if_outgoing);
+
+       /* strict rule: forward packets coming from the best next hop only */
+       if (neigh_node != router)
+               goto out;
+
+       /* don't forward the same seqno twice on one interface */
+       if (orig_ifinfo->last_seqno_forwarded == ntohl(ogm_received->seqno))
+               goto out;
+
+       orig_ifinfo->last_seqno_forwarded = ntohl(ogm_received->seqno);
+
        if (ogm_received->ttl <= 1) {
                batadv_dbg(BATADV_DBG_BATMAN, bat_priv, "ttl exceeded\n");
-               return;
+               goto out;
        }
 
+       neigh_ifinfo = batadv_neigh_ifinfo_get(neigh_node, if_outgoing);
+       if (!neigh_ifinfo)
+               goto out;
+
        tvlv_len = ntohs(ogm_received->tvlv_len);
 
        packet_len = BATADV_OGM2_HLEN + tvlv_len;
        skb = netdev_alloc_skb_ip_align(if_outgoing->net_dev,
                                        ETH_HLEN + packet_len);
        if (!skb)
-               return;
+               goto out;
 
        skb_reserve(skb, ETH_HLEN);
        skb_buff = skb_put(skb, packet_len);
@@ -388,15 +352,23 @@ static void batadv_v_ogm_forward(struct batadv_priv *bat_priv,
 
        /* apply forward penalty */
        ogm_forward = (struct batadv_ogm2_packet *)skb_buff;
-       ogm_forward->throughput = htonl(throughput);
+       ogm_forward->throughput = htonl(neigh_ifinfo->bat_v.throughput);
        ogm_forward->ttl--;
 
        batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
                   "Forwarding OGM2 packet on %s: throughput %u, ttl %u, received via %s\n",
-                  if_outgoing->net_dev->name, throughput, ogm_forward->ttl,
-                  if_incoming->net_dev->name);
+                  if_outgoing->net_dev->name, ntohl(ogm_forward->throughput),
+                  ogm_forward->ttl, if_incoming->net_dev->name);
 
        batadv_v_ogm_send_to_if(skb, if_outgoing);
+
+out:
+       if (orig_ifinfo)
+               batadv_orig_ifinfo_put(orig_ifinfo);
+       if (router)
+               batadv_neigh_node_put(router);
+       if (neigh_ifinfo)
+               batadv_neigh_ifinfo_put(neigh_ifinfo);
 }
 
 /**
@@ -493,8 +465,10 @@ out:
  * @neigh_node: the neigh_node through which the OGM has been received
  * @if_incoming: the interface where this packet was received
  * @if_outgoing: the interface for which the packet should be considered
+ *
+ * Return: true if the packet should be forwarded, false otherwise
  */
-static void batadv_v_ogm_route_update(struct batadv_priv *bat_priv,
+static bool batadv_v_ogm_route_update(struct batadv_priv *bat_priv,
                                      const struct ethhdr *ethhdr,
                                      const struct batadv_ogm2_packet *ogm2,
                                      struct batadv_orig_node *orig_node,
@@ -503,14 +477,14 @@ static void batadv_v_ogm_route_update(struct batadv_priv *bat_priv,
                                      struct batadv_hard_iface *if_outgoing)
 {
        struct batadv_neigh_node *router = NULL;
-       struct batadv_neigh_ifinfo *neigh_ifinfo = NULL;
        struct batadv_orig_node *orig_neigh_node = NULL;
-       struct batadv_orig_ifinfo *orig_ifinfo = NULL;
        struct batadv_neigh_node *orig_neigh_router = NULL;
-
-       neigh_ifinfo = batadv_neigh_ifinfo_get(neigh_node, if_outgoing);
-       if (!neigh_ifinfo)
-               goto out;
+       struct batadv_neigh_ifinfo *router_ifinfo = NULL, *neigh_ifinfo = NULL;
+       u32 router_throughput, neigh_throughput;
+       u32 router_last_seqno;
+       u32 neigh_last_seqno;
+       s32 neigh_seq_diff;
+       bool forward = false;
 
        orig_neigh_node = batadv_v_ogm_orig_get(bat_priv, ethhdr->h_source);
        if (!orig_neigh_node)
@@ -529,47 +503,57 @@ static void batadv_v_ogm_route_update(struct batadv_priv *bat_priv,
                goto out;
        }
 
-       if (router)
-               batadv_neigh_node_put(router);
+       /* Mark the OGM to be considered for forwarding, and update routes
+        * if needed.
+        */
+       forward = true;
 
-       /* Update routes, and check if the OGM is from the best next hop */
-       batadv_v_ogm_orig_update(bat_priv, orig_node, neigh_node, ogm2,
-                                if_outgoing);
+       batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
+                  "Searching and updating originator entry of received packet\n");
 
-       orig_ifinfo = batadv_orig_ifinfo_new(orig_node, if_outgoing);
-       if (!orig_ifinfo)
+       /* if this neighbor already is our next hop there is nothing
+        * to change
+        */
+       if (router == neigh_node)
                goto out;
 
-       /* don't forward the same seqno twice on one interface */
-       if (orig_ifinfo->last_seqno_forwarded == ntohl(ogm2->seqno))
-               goto out;
+       /* don't consider neighbours with worse throughput.
+        * also switch route if this seqno is BATADV_V_MAX_ORIGDIFF newer than
+        * the last received seqno from our best next hop.
+        */
+       if (router) {
+               router_ifinfo = batadv_neigh_ifinfo_get(router, if_outgoing);
+               neigh_ifinfo = batadv_neigh_ifinfo_get(neigh_node, if_outgoing);
 
-       /* acquire possibly updated router */
-       router = batadv_orig_router_get(orig_node, if_outgoing);
+               /* if these are not allocated, something is wrong. */
+               if (!router_ifinfo || !neigh_ifinfo)
+                       goto out;
 
-       /* strict rule: forward packets coming from the best next hop only */
-       if (neigh_node != router)
-               goto out;
+               neigh_last_seqno = neigh_ifinfo->bat_v.last_seqno;
+               router_last_seqno = router_ifinfo->bat_v.last_seqno;
+               neigh_seq_diff = neigh_last_seqno - router_last_seqno;
+               router_throughput = router_ifinfo->bat_v.throughput;
+               neigh_throughput = neigh_ifinfo->bat_v.throughput;
 
-       /* only forward for specific interface, not for the default one. */
-       if (if_outgoing != BATADV_IF_DEFAULT) {
-               orig_ifinfo->last_seqno_forwarded = ntohl(ogm2->seqno);
-               batadv_v_ogm_forward(bat_priv, ogm2,
-                                    neigh_ifinfo->bat_v.throughput,
-                                    if_incoming, if_outgoing);
+               if ((neigh_seq_diff < BATADV_OGM_MAX_ORIGDIFF) &&
+                   (router_throughput >= neigh_throughput))
+                       goto out;
        }
 
+       batadv_update_route(bat_priv, orig_node, if_outgoing, neigh_node);
 out:
-       if (orig_ifinfo)
-               batadv_orig_ifinfo_put(orig_ifinfo);
        if (router)
                batadv_neigh_node_put(router);
        if (orig_neigh_router)
                batadv_neigh_node_put(orig_neigh_router);
        if (orig_neigh_node)
                batadv_orig_node_put(orig_neigh_node);
+       if (router_ifinfo)
+               batadv_neigh_ifinfo_put(router_ifinfo);
        if (neigh_ifinfo)
                batadv_neigh_ifinfo_put(neigh_ifinfo);
+
+       return forward;
 }
 
 /**
@@ -592,6 +576,7 @@ batadv_v_ogm_process_per_outif(struct batadv_priv *bat_priv,
                               struct batadv_hard_iface *if_outgoing)
 {
        int seqno_age;
+       bool forward;
 
        /* first, update the metric with according sanity checks */
        seqno_age = batadv_v_ogm_metric_update(bat_priv, ogm2, orig_node,
@@ -610,8 +595,14 @@ batadv_v_ogm_process_per_outif(struct batadv_priv *bat_priv,
                                               ntohs(ogm2->tvlv_len));
 
        /* if the metric update went through, update routes if needed */
-       batadv_v_ogm_route_update(bat_priv, ethhdr, ogm2, orig_node,
-                                 neigh_node, if_incoming, if_outgoing);
+       forward = batadv_v_ogm_route_update(bat_priv, ethhdr, ogm2, orig_node,
+                                           neigh_node, if_incoming,
+                                           if_outgoing);
+
+       /* if the routes have been processed correctly, check and forward */
+       if (forward)
+               batadv_v_ogm_forward(bat_priv, ogm2, orig_node, neigh_node,
+                                    if_incoming, if_outgoing);
 }
 
 /**
index 0a6c8b8..2c9aa67 100644
@@ -120,7 +120,7 @@ static int batadv_compare_backbone_gw(const struct hlist_node *node,
 }
 
 /**
- * batadv_compare_backbone_gw - compare address and vid of two claims
+ * batadv_compare_claim - compare address and vid of two claims
  * @node: list node of the first entry to compare
  * @data2: pointer to the second claim
  *
@@ -200,9 +200,9 @@ static void batadv_claim_put(struct batadv_bla_claim *claim)
  *
  * Return: claim if found or NULL otherwise.
  */
-static struct batadv_bla_claim
-*batadv_claim_hash_find(struct batadv_priv *bat_priv,
-                       struct batadv_bla_claim *data)
+static struct batadv_bla_claim *
+batadv_claim_hash_find(struct batadv_priv *bat_priv,
+                      struct batadv_bla_claim *data)
 {
        struct batadv_hashtable *hash = bat_priv->bla.claim_hash;
        struct hlist_head *head;
@@ -1303,7 +1303,7 @@ static void batadv_bla_periodic_work(struct work_struct *work)
        struct batadv_hard_iface *primary_if;
        int i;
 
-       delayed_work = container_of(work, struct delayed_work, work);
+       delayed_work = to_delayed_work(work);
        priv_bla = container_of(delayed_work, struct batadv_priv_bla, work);
        bat_priv = container_of(priv_bla, struct batadv_priv, bla);
        primary_if = batadv_primary_if_get_selected(bat_priv);
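
to_delayed_work() replaces the open-coded container_of() in every delayed-work handler touched by this series; it is the same pointer arithmetic, just named. A compilable sketch of how the chain climbs from the work pointer back to the containing private struct (simplified types, not the kernel's):

    #include <stddef.h>
    #include <stdio.h>

    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct work_struct { int pending; };
    struct delayed_work { struct work_struct work; };

    /* same shape as the kernel helper: work_struct -> delayed_work */
    static struct delayed_work *to_delayed_work(struct work_struct *work)
    {
            return container_of(work, struct delayed_work, work);
    }

    struct priv_bla {                       /* stand-in for batadv_priv_bla */
            int crc;
            struct delayed_work work;
    };

    int main(void)
    {
            struct priv_bla bla = { .crc = 42 };
            struct work_struct *w = &bla.work.work;

            /* climb back out: work -> delayed_work -> private struct */
            struct delayed_work *dw = to_delayed_work(w);
            struct priv_bla *p = container_of(dw, struct priv_bla, work);

            printf("crc=%d\n", p->crc);
            return 0;
    }
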
@@ -1575,7 +1575,7 @@ int batadv_bla_is_backbone_gw(struct sk_buff *skb,
 }
 
 /**
- * batadv_bla_init - free all bla structures
+ * batadv_bla_free - free all bla structures
  * @bat_priv: the bat priv with all the soft interface information
  *
  * for soft interface free or module unload
@@ -1815,8 +1815,8 @@ int batadv_bla_claim_table_seq_print_text(struct seq_file *seq, void *offset)
                   "Claims announced for the mesh %s (orig %pM, group id %#.4x)\n",
                   net_dev->name, primary_addr,
                   ntohs(bat_priv->bla.claim_dest.group));
-       seq_printf(seq, "   %-17s    %-5s    %-17s [o] (%-6s)\n",
-                  "Client", "VID", "Originator", "CRC");
+       seq_puts(seq,
+                "   Client               VID      Originator        [o] (CRC   )\n");
        for (i = 0; i < hash->size; i++) {
                head = &hash->table[i];
 
@@ -1873,8 +1873,7 @@ int batadv_bla_backbone_table_seq_print_text(struct seq_file *seq, void *offset)
                   "Backbones announced for the mesh %s (orig %pM, group id %#.4x)\n",
                   net_dev->name, primary_addr,
                   ntohs(bat_priv->bla.claim_dest.group));
-       seq_printf(seq, "   %-17s    %-5s %-9s (%-6s)\n",
-                  "Originator", "VID", "last seen", "CRC");
+       seq_puts(seq, "   Originator           VID   last seen (CRC   )\n");
        for (i = 0; i < hash->size; i++) {
                head = &hash->table[i];
 
index 48253cf..aa315da 100644
@@ -365,14 +365,17 @@ static int batadv_nc_nodes_open(struct inode *inode, struct file *file)
 
 #define BATADV_DEBUGINFO(_name, _mode, _open)          \
 struct batadv_debuginfo batadv_debuginfo_##_name = {   \
-       .attr = { .name = __stringify(_name),           \
-                 .mode = _mode, },                     \
-       .fops = { .owner = THIS_MODULE,                 \
-                 .open = _open,                        \
-                 .read = seq_read,                     \
-                 .llseek = seq_lseek,                  \
-                 .release = single_release,            \
-               }                                       \
+       .attr = {                                       \
+               .name = __stringify(_name),             \
+               .mode = _mode,                          \
+       },                                              \
+       .fops = {                                       \
+               .owner = THIS_MODULE,                   \
+               .open = _open,                          \
+               .read   = seq_read,                     \
+               .llseek = seq_lseek,                    \
+               .release = single_release,              \
+       },                                              \
 }
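
The BATADV_DEBUGINFO reflow is purely cosmetic: one designated initializer per line instead of the packed form. A small standalone macro in the same style, with STR() standing in for the kernel's __stringify():

    #include <stdio.h>

    #define STR_(x) #x
    #define STR(x) STR_(x)

    struct debuginfo {
            const char *name;
            unsigned int mode;
            int (*open)(void);
    };

    /* one designated initializer per line, as in the reflowed macro above */
    #define DEBUGINFO(_name, _mode, _open)          \
    struct debuginfo debuginfo_##_name = {          \
            .name = STR(_name),                     \
            .mode = (_mode),                        \
            .open = (_open),                        \
    }

    static int dummy_open(void) { return 0; }

    static DEBUGINFO(originators, 0444, dummy_open);

    int main(void)
    {
            printf("%s %o\n", debuginfo_originators.name,
                   debuginfo_originators.mode);
            return 0;
    }
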
 
 /* the following attributes are general and therefore they will be directly
index e96d7c7..67f44f5 100644
@@ -152,7 +152,7 @@ static void batadv_dat_purge(struct work_struct *work)
        struct batadv_priv_dat *priv_dat;
        struct batadv_priv *bat_priv;
 
-       delayed_work = container_of(work, struct delayed_work, work);
+       delayed_work = to_delayed_work(work);
        priv_dat = container_of(delayed_work, struct batadv_priv_dat, work);
        bat_priv = container_of(priv_dat, struct batadv_priv, dat);
 
@@ -568,6 +568,7 @@ static void batadv_choose_next_candidate(struct batadv_priv *bat_priv,
  * be sent to
  * @bat_priv: the bat priv with all the soft interface information
  * @ip_dst: ipv4 to look up in the DHT
+ * @vid: VLAN identifier
  *
  * An originator O is selected if and only if its DHT_ID value is one of the
  * three values closest (from the LEFT, with wrap-around if needed) to the hash
@@ -576,7 +577,8 @@ static void batadv_choose_next_candidate(struct batadv_priv *bat_priv,
  * Return: the candidate array of size BATADV_DAT_CANDIDATE_NUM.
  */
 static struct batadv_dat_candidate *
-batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst)
+batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst,
+                            unsigned short vid)
 {
        int select;
        batadv_dat_addr_t last_max = BATADV_DAT_ADDR_MAX, ip_key;
@@ -592,7 +594,7 @@ batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst)
                return NULL;
 
        dat.ip = ip_dst;
-       dat.vid = 0;
+       dat.vid = vid;
        ip_key = (batadv_dat_addr_t)batadv_hash_dat(&dat,
                                                    BATADV_DAT_ADDR_MAX);
 
@@ -612,6 +614,7 @@ batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst)
  * @bat_priv: the bat priv with all the soft interface information
  * @skb: payload to send
  * @ip: the DHT key
+ * @vid: VLAN identifier
  * @packet_subtype: unicast4addr packet subtype to use
  *
  * This function copies the skb with pskb_copy() and sends it as a unicast packet
@@ -622,7 +625,7 @@ batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst)
  */
 static bool batadv_dat_send_data(struct batadv_priv *bat_priv,
                                 struct sk_buff *skb, __be32 ip,
-                                int packet_subtype)
+                                unsigned short vid, int packet_subtype)
 {
        int i;
        bool ret = false;
@@ -631,7 +634,7 @@ static bool batadv_dat_send_data(struct batadv_priv *bat_priv,
        struct sk_buff *tmp_skb;
        struct batadv_dat_candidate *cand;
 
-       cand = batadv_dat_select_candidates(bat_priv, ip);
+       cand = batadv_dat_select_candidates(bat_priv, ip, vid);
        if (!cand)
                goto out;
 
@@ -717,7 +720,7 @@ void batadv_dat_status_update(struct net_device *net_dev)
 }
 
 /**
- * batadv_gw_tvlv_ogm_handler_v1 - process incoming dat tvlv container
+ * batadv_dat_tvlv_ogm_handler_v1 - process incoming dat tvlv container
  * @bat_priv: the bat priv with all the soft interface information
  * @orig: the orig_node of the ogm
  * @flags: flags indicating the tvlv state (see batadv_tvlv_handler_flags)
@@ -814,8 +817,8 @@ int batadv_dat_cache_seq_print_text(struct seq_file *seq, void *offset)
                goto out;
 
        seq_printf(seq, "Distributed ARP Table (%s):\n", net_dev->name);
-       seq_printf(seq, "          %-7s          %-9s %4s %11s\n", "IPv4",
-                  "MAC", "VID", "last-seen");
+       seq_puts(seq,
+                "          IPv4             MAC        VID   last-seen\n");
 
        for (i = 0; i < hash->size; i++) {
                head = &hash->table[i];
@@ -1022,7 +1025,7 @@ bool batadv_dat_snoop_outgoing_arp_request(struct batadv_priv *bat_priv,
                ret = true;
        } else {
                /* Send the request to the DHT */
-               ret = batadv_dat_send_data(bat_priv, skb, ip_dst,
+               ret = batadv_dat_send_data(bat_priv, skb, ip_dst, vid,
                                           BATADV_P_DAT_DHT_GET);
        }
 out:
@@ -1150,8 +1153,8 @@ void batadv_dat_snoop_outgoing_arp_reply(struct batadv_priv *bat_priv,
        /* Send the ARP reply to the candidates for both the IP addresses that
         * the node obtained from the ARP reply
         */
-       batadv_dat_send_data(bat_priv, skb, ip_src, BATADV_P_DAT_DHT_PUT);
-       batadv_dat_send_data(bat_priv, skb, ip_dst, BATADV_P_DAT_DHT_PUT);
+       batadv_dat_send_data(bat_priv, skb, ip_src, vid, BATADV_P_DAT_DHT_PUT);
+       batadv_dat_send_data(bat_priv, skb, ip_dst, vid, BATADV_P_DAT_DHT_PUT);
 }
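
Passing vid down to batadv_dat_send_data() means the VLAN id now perturbs the DHT key instead of being hashed as a constant 0 (the old `dat.vid = 0`), so the same IPv4 address on two VLANs can land on different candidate sets. A toy illustration of that effect; FNV-1a here is only a stand-in for batadv_hash_dat():

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t fnv1a(const void *data, size_t len)
    {
            const uint8_t *p = data;
            uint32_t h = 2166136261u;

            while (len--) {
                    h ^= *p++;
                    h *= 16777619u;
            }
            return h;
    }

    /* key = hash(ip, vid): same ip, different vid -> different DHT slot */
    static uint32_t dat_key(uint32_t ip, uint16_t vid)
    {
            uint8_t buf[6];

            memcpy(buf, &ip, sizeof(ip));
            memcpy(buf + sizeof(ip), &vid, sizeof(vid));
            return fnv1a(buf, sizeof(buf));
    }

    int main(void)
    {
            printf("vid 0: %u\nvid 5: %u\n",
                   dat_key(0x0a000001, 0), dat_key(0x0a000001, 5));
            return 0;
    }
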
 
 /**
index e6956d0..65536db 100644
@@ -407,8 +407,8 @@ static struct sk_buff *batadv_frag_create(struct sk_buff *skb,
                                          unsigned int mtu)
 {
        struct sk_buff *skb_fragment;
-       unsigned header_size = sizeof(*frag_head);
-       unsigned fragment_size = mtu - header_size;
+       unsigned int header_size = sizeof(*frag_head);
+       unsigned int fragment_size = mtu - header_size;
 
        skb_fragment = netdev_alloc_skb(NULL, mtu + ETH_HLEN);
        if (!skb_fragment)
@@ -444,15 +444,15 @@ bool batadv_frag_send_packet(struct sk_buff *skb,
        struct batadv_hard_iface *primary_if = NULL;
        struct batadv_frag_packet frag_header;
        struct sk_buff *skb_fragment;
-       unsigned mtu = neigh_node->if_incoming->net_dev->mtu;
-       unsigned header_size = sizeof(frag_header);
-       unsigned max_fragment_size, max_packet_size;
+       unsigned int mtu = neigh_node->if_incoming->net_dev->mtu;
+       unsigned int header_size = sizeof(frag_header);
+       unsigned int max_fragment_size, max_packet_size;
        bool ret = false;
 
        /* To avoid merge and refragmentation at next-hops we never send
         * fragments larger than BATADV_FRAG_MAX_FRAG_SIZE
         */
-       mtu = min_t(unsigned, mtu, BATADV_FRAG_MAX_FRAG_SIZE);
+       mtu = min_t(unsigned int, mtu, BATADV_FRAG_MAX_FRAG_SIZE);
        max_fragment_size = mtu - header_size;
        max_packet_size = max_fragment_size * BATADV_FRAG_MAX_FRAGMENTS;
 
index b22b277..0a7deaf 100644
@@ -407,6 +407,9 @@ batadv_hardif_activate_interface(struct batadv_hard_iface *hard_iface)
 
        batadv_update_min_mtu(hard_iface->soft_iface);
 
+       if (bat_priv->bat_algo_ops->bat_iface_activate)
+               bat_priv->bat_algo_ops->bat_iface_activate(hard_iface);
+
 out:
        if (primary_if)
                batadv_hardif_put(primary_if);
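
The new bat_iface_activate hook is optional: the caller NULL-checks it, so only algorithms that need it (B.A.T.M.A.N. V, earlier in this series) provide one. A minimal sketch of the optional-callback ops-table pattern, standalone and with illustrative names:

    #include <stdio.h>

    struct algo_ops {
            const char *name;
            void (*iface_activate)(const char *iface);  /* may be NULL */
    };

    static void v_activate(const char *iface)
    {
            printf("%s: ACTIVE\n", iface);
    }

    static const struct algo_ops batman_iv = { .name = "BATMAN_IV" };
    static const struct algo_ops batman_v  = { .name = "BATMAN_V",
                                               .iface_activate = v_activate };

    static void activate(const struct algo_ops *ops, const char *iface)
    {
            if (ops->iface_activate)        /* optional hook */
                    ops->iface_activate(iface);
    }

    int main(void)
    {
            activate(&batman_iv, "eth0");   /* no-op */
            activate(&batman_v, "eth0");    /* prints */
            return 0;
    }
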
@@ -572,8 +575,7 @@ void batadv_hardif_disable_interface(struct batadv_hard_iface *hard_iface,
        struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
        struct batadv_hard_iface *primary_if = NULL;
 
-       if (hard_iface->if_status == BATADV_IF_ACTIVE)
-               batadv_hardif_deactivate_interface(hard_iface);
+       batadv_hardif_deactivate_interface(hard_iface);
 
        if (hard_iface->if_status != BATADV_IF_INACTIVE)
                goto out;
index 14d0013..777aea1 100644
@@ -104,25 +104,21 @@ static int batadv_socket_open(struct inode *inode, struct file *file)
 
 static int batadv_socket_release(struct inode *inode, struct file *file)
 {
-       struct batadv_socket_client *socket_client = file->private_data;
-       struct batadv_socket_packet *socket_packet;
-       struct list_head *list_pos, *list_pos_tmp;
+       struct batadv_socket_client *client = file->private_data;
+       struct batadv_socket_packet *packet, *tmp;
 
-       spin_lock_bh(&socket_client->lock);
+       spin_lock_bh(&client->lock);
 
        /* for all packets in the queue ... */
-       list_for_each_safe(list_pos, list_pos_tmp, &socket_client->queue_list) {
-               socket_packet = list_entry(list_pos,
-                                          struct batadv_socket_packet, list);
-
-               list_del(list_pos);
-               kfree(socket_packet);
+       list_for_each_entry_safe(packet, tmp, &client->queue_list, list) {
+               list_del(&packet->list);
+               kfree(packet);
        }
 
-       batadv_socket_client_hash[socket_client->index] = NULL;
-       spin_unlock_bh(&socket_client->lock);
+       batadv_socket_client_hash[client->index] = NULL;
+       spin_unlock_bh(&client->lock);
 
-       kfree(socket_client);
+       kfree(client);
        module_put(THIS_MODULE);
 
        return 0;
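
The rewritten batadv_socket_release() drops the list_pos/list_entry() boilerplate for list_for_each_entry_safe(), which keeps a lookahead pointer so the current entry may be freed mid-walk. A self-contained userspace re-creation of that destroy loop, with a minimal list implementation standing in for <linux/list.h>:

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct list_head { struct list_head *next, *prev; };

    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))
    #define list_entry(ptr, type, member) container_of(ptr, type, member)

    /* safe variant: 'n' remembers the next node before 'pos' is freed */
    #define list_for_each_entry_safe(pos, n, head, member)                     \
            for (pos = list_entry((head)->next, __typeof__(*pos), member),     \
                 n = list_entry(pos->member.next, __typeof__(*pos), member);   \
                 &pos->member != (head);                                       \
                 pos = n, n = list_entry(n->member.next, __typeof__(*n), member))

    static void list_add_tail(struct list_head *new, struct list_head *head)
    {
            new->prev = head->prev;
            new->next = head;
            head->prev->next = new;
            head->prev = new;
    }

    static void list_del(struct list_head *e)
    {
            e->prev->next = e->next;
            e->next->prev = e->prev;
    }

    struct packet {
            int id;
            struct list_head list;
    };

    int main(void)
    {
            struct list_head queue = { &queue, &queue };
            struct packet *pkt, *tmp;
            int i;

            for (i = 0; i < 3; i++) {
                    pkt = malloc(sizeof(*pkt));
                    pkt->id = i;
                    list_add_tail(&pkt->list, &queue);
            }

            /* unlink, then free, never touching the freed node again */
            list_for_each_entry_safe(pkt, tmp, &queue, list) {
                    list_del(&pkt->list);
                    printf("freeing packet %d\n", pkt->id);
                    free(pkt);
            }
            return 0;
    }
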
@@ -337,7 +333,7 @@ err:
 }
 
 /**
- * batadv_socket_receive_packet - schedule an icmp packet to be sent to
+ * batadv_socket_add_packet - schedule an icmp packet to be sent to
  *  userspace on an icmp socket.
  * @socket_client: the socket this packet belongs to
  * @icmph: pointer to the header of the icmp packet
index d64ddb9..78c05a9 100644
@@ -663,8 +663,8 @@ static void batadv_tvlv_handler_put(struct batadv_tvlv_handler *tvlv_handler)
  *
  * Return: tvlv handler if found or NULL otherwise.
  */
-static struct batadv_tvlv_handler
-*batadv_tvlv_handler_get(struct batadv_priv *bat_priv, u8 type, u8 version)
+static struct batadv_tvlv_handler *
+batadv_tvlv_handler_get(struct batadv_priv *bat_priv, u8 type, u8 version)
 {
        struct batadv_tvlv_handler *tvlv_handler_tmp, *tvlv_handler = NULL;
 
@@ -722,8 +722,8 @@ static void batadv_tvlv_container_put(struct batadv_tvlv_container *tvlv)
  *
  * Return: tvlv container if found or NULL otherwise.
  */
-static struct batadv_tvlv_container
-*batadv_tvlv_container_get(struct batadv_priv *bat_priv, u8 type, u8 version)
+static struct batadv_tvlv_container *
+batadv_tvlv_container_get(struct batadv_priv *bat_priv, u8 type, u8 version)
 {
        struct batadv_tvlv_container *tvlv_tmp, *tvlv = NULL;
 
index db45336..07a6042 100644
@@ -24,7 +24,7 @@
 #define BATADV_DRIVER_DEVICE "batman-adv"
 
 #ifndef BATADV_SOURCE_VERSION
-#define BATADV_SOURCE_VERSION "2016.1"
+#define BATADV_SOURCE_VERSION "2016.2"
 #endif
 
 /* B.A.T.M.A.N. parameters */
@@ -296,7 +296,8 @@ static inline bool batadv_compare_eth(const void *data1, const void *data2)
 }
 
 /**
- * has_timed_out - compares current time (jiffies) and timestamp + timeout
+ * batadv_has_timed_out - compares current time (jiffies) and timestamp +
+ *  timeout
  * @timestamp:         base value to compare with (in jiffies)
  * @timeout:           added to base value before comparing (in milliseconds)
  *
index 8caa2c7..c32f24f 100644
@@ -394,7 +394,8 @@ static int batadv_mcast_forw_mode_check(struct batadv_priv *bat_priv,
 }
 
 /**
- * batadv_mcast_want_all_ip_count - count nodes with unspecific mcast interest
+ * batadv_mcast_forw_want_all_ip_count - count nodes with unspecific mcast
+ *  interest
  * @bat_priv: the bat priv with all the soft interface information
  * @ethhdr: ethernet header of a packet
  *
@@ -433,7 +434,7 @@ batadv_mcast_forw_tt_node_get(struct batadv_priv *bat_priv,
 }
 
 /**
- * batadv_mcast_want_forw_ipv4_node_get - get a node with an ipv4 flag
+ * batadv_mcast_forw_ipv4_node_get - get a node with an ipv4 flag
  * @bat_priv: the bat priv with all the soft interface information
  *
  * Return: an orig_node which has the BATADV_MCAST_WANT_ALL_IPV4 flag set and
@@ -460,7 +461,7 @@ batadv_mcast_forw_ipv4_node_get(struct batadv_priv *bat_priv)
 }
 
 /**
- * batadv_mcast_want_forw_ipv6_node_get - get a node with an ipv6 flag
+ * batadv_mcast_forw_ipv6_node_get - get a node with an ipv6 flag
  * @bat_priv: the bat priv with all the soft interface information
  *
  * Return: an orig_node which has the BATADV_MCAST_WANT_ALL_IPV6 flag set
@@ -487,7 +488,7 @@ batadv_mcast_forw_ipv6_node_get(struct batadv_priv *bat_priv)
 }
 
 /**
- * batadv_mcast_want_forw_ip_node_get - get a node with an ipv4/ipv6 flag
+ * batadv_mcast_forw_ip_node_get - get a node with an ipv4/ipv6 flag
  * @bat_priv: the bat priv with all the soft interface information
  * @ethhdr: an ethernet header to determine the protocol family from
  *
@@ -511,7 +512,7 @@ batadv_mcast_forw_ip_node_get(struct batadv_priv *bat_priv,
 }
 
 /**
- * batadv_mcast_want_forw_unsnoop_node_get - get a node with an unsnoopable flag
+ * batadv_mcast_forw_unsnoop_node_get - get a node with an unsnoopable flag
  * @bat_priv: the bat priv with all the soft interface information
  *
  * Return: an orig_node which has the BATADV_MCAST_WANT_ALL_UNSNOOPABLES flag
index b41719b..1da8e0e 100644
@@ -714,7 +714,7 @@ static void batadv_nc_worker(struct work_struct *work)
        struct batadv_priv *bat_priv;
        unsigned long timeout;
 
-       delayed_work = container_of(work, struct delayed_work, work);
+       delayed_work = to_delayed_work(work);
        priv_nc = container_of(delayed_work, struct batadv_priv_nc, work);
        bat_priv = container_of(priv_nc, struct batadv_priv, nc);
 
@@ -793,10 +793,10 @@ static bool batadv_can_nc_with_orig(struct batadv_priv *bat_priv,
  *
  * Return: the nc_node if found, NULL otherwise.
  */
-static struct batadv_nc_node
-*batadv_nc_find_nc_node(struct batadv_orig_node *orig_node,
-                       struct batadv_orig_node *orig_neigh_node,
-                       bool in_coding)
+static struct batadv_nc_node *
+batadv_nc_find_nc_node(struct batadv_orig_node *orig_node,
+                      struct batadv_orig_node *orig_neigh_node,
+                      bool in_coding)
 {
        struct batadv_nc_node *nc_node, *nc_node_out = NULL;
        struct list_head *list;
@@ -835,11 +835,11 @@ static struct batadv_nc_node
  *
  * Return: the nc_node if found or created, NULL in case of an error.
  */
-static struct batadv_nc_node
-*batadv_nc_get_nc_node(struct batadv_priv *bat_priv,
-                      struct batadv_orig_node *orig_node,
-                      struct batadv_orig_node *orig_neigh_node,
-                      bool in_coding)
+static struct batadv_nc_node *
+batadv_nc_get_nc_node(struct batadv_priv *bat_priv,
+                     struct batadv_orig_node *orig_node,
+                     struct batadv_orig_node *orig_neigh_node,
+                     bool in_coding)
 {
        struct batadv_nc_node *nc_node;
        spinlock_t *lock; /* Used to lock list selected by "int in_coding" */
index e4cbb07..f885a41 100644
@@ -250,7 +250,6 @@ static void batadv_neigh_node_release(struct kref *ref)
 {
        struct hlist_node *node_tmp;
        struct batadv_neigh_node *neigh_node;
-       struct batadv_hardif_neigh_node *hardif_neigh;
        struct batadv_neigh_ifinfo *neigh_ifinfo;
        struct batadv_algo_ops *bao;
 
@@ -262,13 +261,7 @@ static void batadv_neigh_node_release(struct kref *ref)
                batadv_neigh_ifinfo_put(neigh_ifinfo);
        }
 
-       hardif_neigh = batadv_hardif_neigh_get(neigh_node->if_incoming,
-                                              neigh_node->addr);
-       if (hardif_neigh) {
-               /* batadv_hardif_neigh_get() increases refcount too */
-               batadv_hardif_neigh_put(hardif_neigh);
-               batadv_hardif_neigh_put(hardif_neigh);
-       }
+       batadv_hardif_neigh_put(neigh_node->hardif_neigh);
 
        if (bao->bat_neigh_free)
                bao->bat_neigh_free(neigh_node);
@@ -289,7 +282,7 @@ void batadv_neigh_node_put(struct batadv_neigh_node *neigh_node)
 }
 
 /**
- * batadv_orig_node_get_router - router to the originator depending on iface
+ * batadv_orig_router_get - router to the originator depending on iface
  * @orig_node: the orig node for the router
  * @if_outgoing: the interface where the payload packet has been received or
  *  the OGM should be sent to
@@ -663,6 +656,11 @@ batadv_neigh_node_new(struct batadv_orig_node *orig_node,
        ether_addr_copy(neigh_node->addr, neigh_addr);
        neigh_node->if_incoming = hard_iface;
        neigh_node->orig_node = orig_node;
+       neigh_node->last_seen = jiffies;
+
+       /* increment unique neighbor refcount */
+       kref_get(&hardif_neigh->refcount);
+       neigh_node->hardif_neigh = hardif_neigh;
 
        /* extra reference for return */
        kref_init(&neigh_node->refcount);
@@ -672,9 +670,6 @@ batadv_neigh_node_new(struct batadv_orig_node *orig_node,
        hlist_add_head_rcu(&neigh_node->list, &orig_node->neigh_list);
        spin_unlock_bh(&orig_node->neigh_list_lock);
 
-       /* increment unique neighbor refcount */
-       kref_get(&hardif_neigh->refcount);
-
        batadv_dbg(BATADV_DBG_BATMAN, orig_node->bat_priv,
                   "Creating new neighbor %pM for orig_node %pM on interface %s\n",
                   neigh_addr, orig_node->orig, hard_iface->net_dev->name);
@@ -1222,7 +1217,7 @@ static void batadv_purge_orig(struct work_struct *work)
        struct delayed_work *delayed_work;
        struct batadv_priv *bat_priv;
 
-       delayed_work = container_of(work, struct delayed_work, work);
+       delayed_work = to_delayed_work(work);
        bat_priv = container_of(delayed_work, struct batadv_priv, orig_work);
        _batadv_purge_orig(bat_priv);
        queue_delayed_work(batadv_event_workqueue,
index 8a8d7ca..0796dfd 100644
@@ -501,7 +501,7 @@ struct batadv_coded_packet {
 #pragma pack()
 
 /**
- * struct batadv_unicast_tvlv - generic unicast packet with tvlv payload
+ * struct batadv_unicast_tvlv_packet - generic unicast packet with tvlv payload
  * @packet_type: batman-adv packet type, part of the general header
  * @version: batman-adv protocol version, part of the general header
  * @ttl: time to live for this packet, part of the general header
index 4dd646a..b781bf7 100644
@@ -105,6 +105,15 @@ static void _batadv_update_route(struct batadv_priv *bat_priv,
                neigh_node = NULL;
 
        spin_lock_bh(&orig_node->neigh_list_lock);
+       /* curr_router used earlier may not be the current orig_ifinfo->router
+        * anymore because it was dereferenced outside of the neigh_list_lock
+        * protected region. After the new best neighbor has replaced the current
+        * best neighbor, the reference counter needs to decrease. Consequently,
+        * the code needs to ensure the curr_router variable contains a pointer
+        * to the replaced best neighbor.
+        */
+       curr_router = rcu_dereference_protected(orig_ifinfo->router, true);
+
        rcu_assign_pointer(orig_ifinfo->router, neigh_node);
        spin_unlock_bh(&orig_node->neigh_list_lock);
        batadv_orig_ifinfo_put(orig_ifinfo);
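
The added rcu_dereference_protected() re-read is the actual fix here: the curr_router value fetched before taking neigh_list_lock may be stale, so the reference has to be dropped on whatever pointer is published at swap time. A simplified, single-threaded sketch of the swap-and-put pattern, with a plain refcount standing in for RCU/kref:

    #include <stdio.h>
    #include <stdlib.h>

    struct neigh {
            int refcount;
            const char *name;
    };

    static struct neigh *neigh_get(struct neigh *n)
    {
            if (n)
                    n->refcount++;
            return n;
    }

    static void neigh_put(struct neigh *n)
    {
            if (n && --n->refcount == 0) {
                    printf("freeing %s\n", n->name);
                    free(n);
            }
    }

    /* publish a new router and release the reference the slot held on the
     * old one; *slot is re-read here, "inside the lock", instead of
     * trusting a value read earlier -- the bug the hunk above fixes */
    static void update_router(struct neigh **slot, struct neigh *new)
    {
            struct neigh *curr;

            /* spin_lock_bh(&neigh_list_lock); */
            curr = *slot;
            *slot = neigh_get(new);
            /* spin_unlock_bh(&neigh_list_lock); */

            neigh_put(curr);
    }

    int main(void)
    {
            struct neigh *a = malloc(sizeof(*a)), *b = malloc(sizeof(*b));
            struct neigh *router = NULL;

            a->refcount = 1; a->name = "router-a";
            b->refcount = 1; b->name = "router-b";

            update_router(&router, a);      /* slot now holds a ref on a */
            update_router(&router, b);      /* drops the slot's ref on a */
            neigh_put(a);                   /* caller's ref gone: frees a */
            neigh_put(b);
            neigh_put(router);              /* slot's ref on b: frees b */
            return 0;
    }
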
index 3ce06e0..99ea900 100644
@@ -552,7 +552,7 @@ static void batadv_send_outstanding_bcast_packet(struct work_struct *work)
        struct net_device *soft_iface;
        struct batadv_priv *bat_priv;
 
-       delayed_work = container_of(work, struct delayed_work, work);
+       delayed_work = to_delayed_work(work);
        forw_packet = container_of(delayed_work, struct batadv_forw_packet,
                                   delayed_work);
        soft_iface = forw_packet->if_incoming->soft_iface;
@@ -604,7 +604,7 @@ void batadv_send_outstanding_bat_ogm_packet(struct work_struct *work)
        struct batadv_forw_packet *forw_packet;
        struct batadv_priv *bat_priv;
 
-       delayed_work = container_of(work, struct delayed_work, work);
+       delayed_work = to_delayed_work(work);
        forw_packet = container_of(delayed_work, struct batadv_forw_packet,
                                   delayed_work);
        bat_priv = netdev_priv(forw_packet->if_incoming->soft_iface);
@@ -675,6 +675,9 @@ batadv_purge_outstanding_packets(struct batadv_priv *bat_priv,
 
                if (pending) {
                        hlist_del(&forw_packet->list);
+                       if (!forw_packet->own)
+                               atomic_inc(&bat_priv->bcast_queue_left);
+
                        batadv_forw_packet_free(forw_packet);
                }
        }
@@ -702,6 +705,9 @@ batadv_purge_outstanding_packets(struct batadv_priv *bat_priv,
 
                if (pending) {
                        hlist_del(&forw_packet->list);
+                       if (!forw_packet->own)
+                               atomic_inc(&bat_priv->batman_queue_left);
+
                        batadv_forw_packet_free(forw_packet);
                }
        }
index 0710379..dfb4d56 100644
@@ -208,7 +208,7 @@ static int batadv_interface_tx(struct sk_buff *skb,
        if (atomic_read(&bat_priv->mesh_state) != BATADV_MESH_ACTIVE)
                goto dropped;
 
-       soft_iface->trans_start = jiffies;
+       netif_trans_update(soft_iface);
        vid = batadv_get_vid(skb, 0);
        ethhdr = eth_hdr(skb);
 
@@ -381,6 +381,24 @@ end:
        return NETDEV_TX_OK;
 }
 
+/**
+ * batadv_interface_rx - receive ethernet frame on local batman-adv interface
+ * @soft_iface: local interface which will receive the ethernet frame
+ * @skb: ethernet frame for @soft_iface
+ * @recv_if: interface on which the batman-adv packet was received
+ * @hdr_size: size of already parsed batman-adv header
+ * @orig_node: originator from which the batman-adv packet was sent
+ *
+ * Sends an ethernet frame to the receive path of the local @soft_iface.
+ * skb->data still points to the batman-adv header with the size @hdr_size.
+ * The caller has to have parsed this header already and made sure that at
+ * least @hdr_size bytes are still available for pull in @skb.
+ *
+ * The packet may still get dropped. This can happen when the encapsulated
+ * ethernet frame is invalid or contains another batman-adv packet. Also,
+ * unicast packets will be dropped directly when they were sent between two
+ * isolated clients.
+ */
 void batadv_interface_rx(struct net_device *soft_iface,
                         struct sk_buff *skb, struct batadv_hard_iface *recv_if,
                         int hdr_size, struct batadv_orig_node *orig_node)
@@ -408,11 +426,17 @@ void batadv_interface_rx(struct net_device *soft_iface,
         */
        nf_reset(skb);
 
+       if (unlikely(!pskb_may_pull(skb, ETH_HLEN)))
+               goto dropped;
+
        vid = batadv_get_vid(skb, 0);
        ethhdr = eth_hdr(skb);
 
        switch (ntohs(ethhdr->h_proto)) {
        case ETH_P_8021Q:
+               if (!pskb_may_pull(skb, VLAN_ETH_HLEN))
+                       goto dropped;
+
                vhdr = (struct vlan_ethhdr *)skb->data;
 
                if (vhdr->h_vlan_encapsulated_proto != ethertype)
@@ -424,8 +448,6 @@ void batadv_interface_rx(struct net_device *soft_iface,
        }
 
        /* skb->dev & skb->pkt_type are set here */
-       if (unlikely(!pskb_may_pull(skb, ETH_HLEN)))
-               goto dropped;
        skb->protocol = eth_type_trans(skb, soft_iface);
 
        /* should not be necessary anymore as we use skb_pull_rcsum()
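
Moving the pskb_may_pull() check in front of eth_hdr()/vid parsing, and adding the VLAN_ETH_HLEN check in the 8021Q case, closes a read past the end of runt frames. The invariant in standalone form: validate each layer's length before dereferencing it (constants and names below are illustrative, not the kernel API):

    #include <arpa/inet.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define ETH_HLEN        14
    #define VLAN_ETH_HLEN   18
    #define ETH_P_8021Q     0x8100

    /* check the bytes exist before parsing each header layer */
    static bool frame_ok(const uint8_t *data, size_t len)
    {
            uint16_t proto;

            if (len < ETH_HLEN)             /* runt frame: drop */
                    return false;

            memcpy(&proto, data + 12, sizeof(proto));       /* h_proto */
            if (ntohs(proto) == ETH_P_8021Q && len < VLAN_ETH_HLEN)
                    return false;           /* tagged but truncated: drop */

            return true;
    }

    int main(void)
    {
            uint8_t runt[10] = { 0 };

            return frame_ok(runt, sizeof(runt)) ? 1 : 0;    /* drops it */
    }
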
@@ -539,7 +561,7 @@ struct batadv_softif_vlan *batadv_softif_vlan_get(struct batadv_priv *bat_priv,
 }
 
 /**
- * batadv_create_vlan - allocate the needed resources for a new vlan
+ * batadv_softif_create_vlan - allocate the needed resources for a new vlan
  * @bat_priv: the bat priv with all the soft interface information
  * @vid: the VLAN identifier
  *
index 0b43e86..942b3aa 100644
@@ -215,6 +215,8 @@ static void batadv_tt_local_entry_release(struct kref *ref)
        tt_local_entry = container_of(ref, struct batadv_tt_local_entry,
                                      common.refcount);
 
+       batadv_softif_vlan_put(tt_local_entry->vlan);
+
        kfree_rcu(tt_local_entry, common.rcu);
 }
 
@@ -673,6 +675,7 @@ bool batadv_tt_local_add(struct net_device *soft_iface, const u8 *addr,
        kref_get(&tt_local->common.refcount);
        tt_local->last_seen = jiffies;
        tt_local->common.added_at = tt_local->last_seen;
+       tt_local->vlan = vlan;
 
        /* the batman interface mac and multicast addresses should never be
         * purged
@@ -991,7 +994,6 @@ int batadv_tt_local_seq_print_text(struct seq_file *seq, void *offset)
        struct batadv_tt_common_entry *tt_common_entry;
        struct batadv_tt_local_entry *tt_local;
        struct batadv_hard_iface *primary_if;
-       struct batadv_softif_vlan *vlan;
        struct hlist_head *head;
        unsigned short vid;
        u32 i;
@@ -1008,8 +1010,8 @@ int batadv_tt_local_seq_print_text(struct seq_file *seq, void *offset)
        seq_printf(seq,
                   "Locally retrieved addresses (from %s) announced via TT (TTVN: %u):\n",
                   net_dev->name, (u8)atomic_read(&bat_priv->tt.vn));
-       seq_printf(seq, "       %-13s  %s %-8s %-9s (%-10s)\n", "Client", "VID",
-                  "Flags", "Last seen", "CRC");
+       seq_puts(seq,
+                "       Client         VID Flags    Last seen (CRC       )\n");
 
        for (i = 0; i < hash->size; i++) {
                head = &hash->table[i];
@@ -1027,14 +1029,6 @@ int batadv_tt_local_seq_print_text(struct seq_file *seq, void *offset)
                        last_seen_msecs = last_seen_msecs % 1000;
 
                        no_purge = tt_common_entry->flags & np_flag;
-
-                       vlan = batadv_softif_vlan_get(bat_priv, vid);
-                       if (!vlan) {
-                               seq_printf(seq, "Cannot retrieve VLAN %d\n",
-                                          BATADV_PRINT_VID(vid));
-                               continue;
-                       }
-
                        seq_printf(seq,
                                   " * %pM %4i [%c%c%c%c%c%c] %3u.%03u   (%#.8x)\n",
                                   tt_common_entry->addr,
@@ -1052,9 +1046,7 @@ int batadv_tt_local_seq_print_text(struct seq_file *seq, void *offset)
                                     BATADV_TT_CLIENT_ISOLA) ? 'I' : '.'),
                                   no_purge ? 0 : last_seen_secs,
                                   no_purge ? 0 : last_seen_msecs,
-                                  vlan->tt.crc);
-
-                       batadv_softif_vlan_put(vlan);
+                                  tt_local->vlan->tt.crc);
                }
                rcu_read_unlock();
        }
@@ -1099,7 +1091,6 @@ u16 batadv_tt_local_remove(struct batadv_priv *bat_priv, const u8 *addr,
 {
        struct batadv_tt_local_entry *tt_local_entry;
        u16 flags, curr_flags = BATADV_NO_FLAGS;
-       struct batadv_softif_vlan *vlan;
        void *tt_entry_exists;
 
        tt_local_entry = batadv_tt_local_hash_find(bat_priv, addr, vid);
@@ -1139,14 +1130,6 @@ u16 batadv_tt_local_remove(struct batadv_priv *bat_priv, const u8 *addr,
        /* extra call to free the local tt entry */
        batadv_tt_local_entry_put(tt_local_entry);
 
-       /* decrease the reference held for this vlan */
-       vlan = batadv_softif_vlan_get(bat_priv, vid);
-       if (!vlan)
-               goto out;
-
-       batadv_softif_vlan_put(vlan);
-       batadv_softif_vlan_put(vlan);
-
 out:
        if (tt_local_entry)
                batadv_tt_local_entry_put(tt_local_entry);
@@ -1219,7 +1202,6 @@ static void batadv_tt_local_table_free(struct batadv_priv *bat_priv)
        spinlock_t *list_lock; /* protects write access to the hash lists */
        struct batadv_tt_common_entry *tt_common_entry;
        struct batadv_tt_local_entry *tt_local;
-       struct batadv_softif_vlan *vlan;
        struct hlist_node *node_tmp;
        struct hlist_head *head;
        u32 i;
@@ -1241,14 +1223,6 @@ static void batadv_tt_local_table_free(struct batadv_priv *bat_priv)
                                                struct batadv_tt_local_entry,
                                                common);
 
-                       /* decrease the reference held for this vlan */
-                       vlan = batadv_softif_vlan_get(bat_priv,
-                                                     tt_common_entry->vid);
-                       if (vlan) {
-                               batadv_softif_vlan_put(vlan);
-                               batadv_softif_vlan_put(vlan);
-                       }
-
                        batadv_tt_local_entry_put(tt_local);
                }
                spin_unlock_bh(list_lock);
@@ -1706,9 +1680,8 @@ int batadv_tt_global_seq_print_text(struct seq_file *seq, void *offset)
        seq_printf(seq,
                   "Globally announced TT entries received via the mesh %s\n",
                   net_dev->name);
-       seq_printf(seq, "       %-13s  %s  %s       %-15s %s (%-10s) %s\n",
-                  "Client", "VID", "(TTVN)", "Originator", "(Curr TTVN)",
-                  "CRC", "Flags");
+       seq_puts(seq,
+                "       Client         VID  (TTVN)       Originator      (Curr TTVN) (CRC       ) Flags\n");
 
        for (i = 0; i < hash->size; i++) {
                head = &hash->table[i];
@@ -3227,7 +3200,7 @@ static void batadv_tt_purge(struct work_struct *work)
        struct batadv_priv_tt *priv_tt;
        struct batadv_priv *bat_priv;
 
-       delayed_work = container_of(work, struct delayed_work, work);
+       delayed_work = to_delayed_work(work);
        priv_tt = container_of(delayed_work, struct batadv_priv_tt, work);
        bat_priv = container_of(priv_tt, struct batadv_priv, tt);
 
@@ -3309,7 +3282,6 @@ static void batadv_tt_local_purge_pending_clients(struct batadv_priv *bat_priv)
        struct batadv_hashtable *hash = bat_priv->tt.local_hash;
        struct batadv_tt_common_entry *tt_common;
        struct batadv_tt_local_entry *tt_local;
-       struct batadv_softif_vlan *vlan;
        struct hlist_node *node_tmp;
        struct hlist_head *head;
        spinlock_t *list_lock; /* protects write access to the hash lists */
@@ -3339,13 +3311,6 @@ static void batadv_tt_local_purge_pending_clients(struct batadv_priv *bat_priv)
                                                struct batadv_tt_local_entry,
                                                common);
 
-                       /* decrease the reference held for this vlan */
-                       vlan = batadv_softif_vlan_get(bat_priv, tt_common->vid);
-                       if (vlan) {
-                               batadv_softif_vlan_put(vlan);
-                               batadv_softif_vlan_put(vlan);
-                       }
-
                        batadv_tt_local_entry_put(tt_local);
                }
                spin_unlock_bh(list_lock);
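
The translation-table hunks above all implement one idea: a TT local entry now pins its soft-interface vlan for its entire lifetime (reference taken in batadv_tt_local_add(), dropped in the kref release function), which lets every "look the vlan up again, then put it twice" sequence on the remove, purge and table-free paths simply disappear. A hypothetical userspace analogue of the pattern:

#include <stdlib.h>

/* Hypothetical analogue: the entry takes one vlan reference at
 * creation time and drops it only from its own release path.
 */
struct vlan { int refcount; };
struct tt_entry { struct vlan *vlan; int refcount; };

static void vlan_put(struct vlan *v)
{
	if (--v->refcount == 0)
		free(v);
}

static void tt_entry_put(struct tt_entry *e)
{
	if (--e->refcount == 0) {
		vlan_put(e->vlan);	/* the reference held since creation */
		free(e);
	}
}

int main(void)
{
	struct vlan *v = calloc(1, sizeof(*v));
	struct tt_entry *e = calloc(1, sizeof(*e));

	if (!v || !e)
		return 1;
	v->refcount = 2;	/* one for us, one pinned by the entry */
	e->refcount = 1;
	e->vlan = v;
	vlan_put(v);		/* our put: the entry still pins the vlan */
	tt_entry_put(e);	/* frees the entry and, with it, the vlan */
	return 0;
}
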
index 9abfb3e..1e47fbe 100644 (file)
@@ -433,6 +433,7 @@ struct batadv_hardif_neigh_node {
  * @ifinfo_lock: lock protecting private ifinfo members and list
  * @if_incoming: pointer to incoming hard-interface
  * @last_seen: when last packet via this neighbor was received
+ * @hardif_neigh: hardif neighbor this neighbor node is based on
  * @refcount: number of contexts the object is used
  * @rcu: struct used for freeing in an RCU-safe manner
  */
@@ -444,6 +445,7 @@ struct batadv_neigh_node {
        spinlock_t ifinfo_lock; /* protects ifinfo_list and its members */
        struct batadv_hard_iface *if_incoming;
        unsigned long last_seen;
+       struct batadv_hardif_neigh_node *hardif_neigh;
        struct kref refcount;
        struct rcu_head rcu;
 };
@@ -1073,10 +1075,12 @@ struct batadv_tt_common_entry {
  * struct batadv_tt_local_entry - translation table local entry data
  * @common: general translation table data
  * @last_seen: timestamp used for purging stale tt local entries
+ * @vlan: soft-interface vlan of the entry
  */
 struct batadv_tt_local_entry {
        struct batadv_tt_common_entry common;
        unsigned long last_seen;
+       struct batadv_softif_vlan *vlan;
 };
 
 /**
@@ -1250,6 +1254,8 @@ struct batadv_forw_packet {
  * struct batadv_algo_ops - mesh algorithm callbacks
  * @list: list node for the batadv_algo_list
  * @name: name of the algorithm
+ * @bat_iface_activate: start routing mechanisms when hard-interface is brought
+ *  up
  * @bat_iface_enable: init routing info when hard-interface is enabled
  * @bat_iface_disable: de-init routing info when hard-interface is disabled
  * @bat_iface_update_mac: (re-)init mac addresses of the protocol information
@@ -1277,6 +1283,7 @@ struct batadv_forw_packet {
 struct batadv_algo_ops {
        struct hlist_node list;
        char *name;
+       void (*bat_iface_activate)(struct batadv_hard_iface *hard_iface);
        int (*bat_iface_enable)(struct batadv_hard_iface *hard_iface);
        void (*bat_iface_disable)(struct batadv_hard_iface *hard_iface);
        void (*bat_iface_update_mac)(struct batadv_hard_iface *hard_iface);
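
The new bat_iface_activate hook gives routing algorithms a callback at the point where a hard interface switches to active, separate from bat_iface_enable which runs at setup time. A hedged sketch of how such an optional hook would be dispatched (the caller is not part of this diff; the function name here is assumed):

static void batadv_hardif_activate_interface(struct batadv_hard_iface *hard_iface)
{
	struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);

	/* optional hook: algorithms that don't need it leave it NULL */
	if (bat_priv->bat_algo_ops->bat_iface_activate)
		bat_priv->bat_algo_ops->bat_iface_activate(hard_iface);
}
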
index 8a4cc2f..780089d 100644 (file)
@@ -68,7 +68,7 @@ struct lowpan_peer {
        struct in6_addr peer_addr;
 };
 
-struct lowpan_dev {
+struct lowpan_btle_dev {
        struct list_head list;
 
        struct hci_dev *hdev;
@@ -80,18 +80,21 @@ struct lowpan_dev {
        struct delayed_work notify_peers;
 };
 
-static inline struct lowpan_dev *lowpan_dev(const struct net_device *netdev)
+static inline struct lowpan_btle_dev *
+lowpan_btle_dev(const struct net_device *netdev)
 {
-       return (struct lowpan_dev *)lowpan_priv(netdev)->priv;
+       return (struct lowpan_btle_dev *)lowpan_dev(netdev)->priv;
 }
 
-static inline void peer_add(struct lowpan_dev *dev, struct lowpan_peer *peer)
+static inline void peer_add(struct lowpan_btle_dev *dev,
+                           struct lowpan_peer *peer)
 {
        list_add_rcu(&peer->list, &dev->peers);
        atomic_inc(&dev->peer_count);
 }
 
-static inline bool peer_del(struct lowpan_dev *dev, struct lowpan_peer *peer)
+static inline bool peer_del(struct lowpan_btle_dev *dev,
+                           struct lowpan_peer *peer)
 {
        list_del_rcu(&peer->list);
        kfree_rcu(peer, rcu);
@@ -106,7 +109,7 @@ static inline bool peer_del(struct lowpan_dev *dev, struct lowpan_peer *peer)
        return false;
 }
 
-static inline struct lowpan_peer *peer_lookup_ba(struct lowpan_dev *dev,
+static inline struct lowpan_peer *peer_lookup_ba(struct lowpan_btle_dev *dev,
                                                 bdaddr_t *ba, __u8 type)
 {
        struct lowpan_peer *peer;
@@ -134,8 +137,8 @@ static inline struct lowpan_peer *peer_lookup_ba(struct lowpan_dev *dev,
        return NULL;
 }
 
-static inline struct lowpan_peer *__peer_lookup_chan(struct lowpan_dev *dev,
-                                                    struct l2cap_chan *chan)
+static inline struct lowpan_peer *
+__peer_lookup_chan(struct lowpan_btle_dev *dev, struct l2cap_chan *chan)
 {
        struct lowpan_peer *peer;
 
@@ -147,8 +150,8 @@ static inline struct lowpan_peer *__peer_lookup_chan(struct lowpan_dev *dev,
        return NULL;
 }
 
-static inline struct lowpan_peer *__peer_lookup_conn(struct lowpan_dev *dev,
-                                                    struct l2cap_conn *conn)
+static inline struct lowpan_peer *
+__peer_lookup_conn(struct lowpan_btle_dev *dev, struct l2cap_conn *conn)
 {
        struct lowpan_peer *peer;
 
@@ -160,7 +163,7 @@ static inline struct lowpan_peer *__peer_lookup_conn(struct lowpan_dev *dev,
        return NULL;
 }
 
-static inline struct lowpan_peer *peer_lookup_dst(struct lowpan_dev *dev,
+static inline struct lowpan_peer *peer_lookup_dst(struct lowpan_btle_dev *dev,
                                                  struct in6_addr *daddr,
                                                  struct sk_buff *skb)
 {
@@ -220,7 +223,7 @@ static inline struct lowpan_peer *peer_lookup_dst(struct lowpan_dev *dev,
 
 static struct lowpan_peer *lookup_peer(struct l2cap_conn *conn)
 {
-       struct lowpan_dev *entry;
+       struct lowpan_btle_dev *entry;
        struct lowpan_peer *peer = NULL;
 
        rcu_read_lock();
@@ -236,10 +239,10 @@ static struct lowpan_peer *lookup_peer(struct l2cap_conn *conn)
        return peer;
 }
 
-static struct lowpan_dev *lookup_dev(struct l2cap_conn *conn)
+static struct lowpan_btle_dev *lookup_dev(struct l2cap_conn *conn)
 {
-       struct lowpan_dev *entry;
-       struct lowpan_dev *dev = NULL;
+       struct lowpan_btle_dev *entry;
+       struct lowpan_btle_dev *dev = NULL;
 
        rcu_read_lock();
 
@@ -270,10 +273,10 @@ static int iphc_decompress(struct sk_buff *skb, struct net_device *netdev,
                           struct l2cap_chan *chan)
 {
        const u8 *saddr, *daddr;
-       struct lowpan_dev *dev;
+       struct lowpan_btle_dev *dev;
        struct lowpan_peer *peer;
 
-       dev = lowpan_dev(netdev);
+       dev = lowpan_btle_dev(netdev);
 
        rcu_read_lock();
        peer = __peer_lookup_chan(dev, chan);
@@ -375,7 +378,7 @@ drop:
 /* Packet from BT LE device */
 static int chan_recv_cb(struct l2cap_chan *chan, struct sk_buff *skb)
 {
-       struct lowpan_dev *dev;
+       struct lowpan_btle_dev *dev;
        struct lowpan_peer *peer;
        int err;
 
@@ -431,15 +434,18 @@ static int setup_header(struct sk_buff *skb, struct net_device *netdev,
                        bdaddr_t *peer_addr, u8 *peer_addr_type)
 {
        struct in6_addr ipv6_daddr;
-       struct lowpan_dev *dev;
+       struct ipv6hdr *hdr;
+       struct lowpan_btle_dev *dev;
        struct lowpan_peer *peer;
        bdaddr_t addr, *any = BDADDR_ANY;
        u8 *daddr = any->b;
        int err, status = 0;
 
-       dev = lowpan_dev(netdev);
+       hdr = ipv6_hdr(skb);
+
+       dev = lowpan_btle_dev(netdev);
 
-       memcpy(&ipv6_daddr, &lowpan_cb(skb)->addr, sizeof(ipv6_daddr));
+       memcpy(&ipv6_daddr, &hdr->daddr, sizeof(ipv6_daddr));
 
        if (ipv6_addr_is_multicast(&ipv6_daddr)) {
                lowpan_cb(skb)->chan = NULL;
@@ -489,15 +495,9 @@ static int header_create(struct sk_buff *skb, struct net_device *netdev,
                         unsigned short type, const void *_daddr,
                         const void *_saddr, unsigned int len)
 {
-       struct ipv6hdr *hdr;
-
        if (type != ETH_P_IPV6)
                return -EINVAL;
 
-       hdr = ipv6_hdr(skb);
-
-       memcpy(&lowpan_cb(skb)->addr, &hdr->daddr, sizeof(struct in6_addr));
-
        return 0;
 }
 
@@ -543,19 +543,19 @@ static int send_pkt(struct l2cap_chan *chan, struct sk_buff *skb,
 static int send_mcast_pkt(struct sk_buff *skb, struct net_device *netdev)
 {
        struct sk_buff *local_skb;
-       struct lowpan_dev *entry;
+       struct lowpan_btle_dev *entry;
        int err = 0;
 
        rcu_read_lock();
 
        list_for_each_entry_rcu(entry, &bt_6lowpan_devices, list) {
                struct lowpan_peer *pentry;
-               struct lowpan_dev *dev;
+               struct lowpan_btle_dev *dev;
 
                if (entry->netdev != netdev)
                        continue;
 
-               dev = lowpan_dev(entry->netdev);
+               dev = lowpan_btle_dev(entry->netdev);
 
                list_for_each_entry_rcu(pentry, &dev->peers, list) {
                        int ret;
@@ -723,8 +723,8 @@ static void ifdown(struct net_device *netdev)
 
 static void do_notify_peers(struct work_struct *work)
 {
-       struct lowpan_dev *dev = container_of(work, struct lowpan_dev,
-                                             notify_peers.work);
+       struct lowpan_btle_dev *dev = container_of(work, struct lowpan_btle_dev,
+                                                  notify_peers.work);
 
        netdev_notify_peers(dev->netdev); /* send neighbour adv at startup */
 }
@@ -766,7 +766,7 @@ static void set_ip_addr_bits(u8 addr_type, u8 *addr)
 }
 
 static struct l2cap_chan *add_peer_chan(struct l2cap_chan *chan,
-                                       struct lowpan_dev *dev)
+                                       struct lowpan_btle_dev *dev)
 {
        struct lowpan_peer *peer;
 
@@ -803,12 +803,12 @@ static struct l2cap_chan *add_peer_chan(struct l2cap_chan *chan,
        return peer->chan;
 }
 
-static int setup_netdev(struct l2cap_chan *chan, struct lowpan_dev **dev)
+static int setup_netdev(struct l2cap_chan *chan, struct lowpan_btle_dev **dev)
 {
        struct net_device *netdev;
        int err = 0;
 
-       netdev = alloc_netdev(LOWPAN_PRIV_SIZE(sizeof(struct lowpan_dev)),
+       netdev = alloc_netdev(LOWPAN_PRIV_SIZE(sizeof(struct lowpan_btle_dev)),
                              IFACE_NAME_TEMPLATE, NET_NAME_UNKNOWN,
                              netdev_setup);
        if (!netdev)
@@ -820,7 +820,7 @@ static int setup_netdev(struct l2cap_chan *chan, struct lowpan_dev **dev)
        SET_NETDEV_DEV(netdev, &chan->conn->hcon->hdev->dev);
        SET_NETDEV_DEVTYPE(netdev, &bt_type);
 
-       *dev = lowpan_dev(netdev);
+       *dev = lowpan_btle_dev(netdev);
        (*dev)->netdev = netdev;
        (*dev)->hdev = chan->conn->hcon->hdev;
        INIT_LIST_HEAD(&(*dev)->peers);
@@ -853,7 +853,7 @@ out:
 
 static inline void chan_ready_cb(struct l2cap_chan *chan)
 {
-       struct lowpan_dev *dev;
+       struct lowpan_btle_dev *dev;
 
        dev = lookup_dev(chan->conn);
 
@@ -890,8 +890,9 @@ static inline struct l2cap_chan *chan_new_conn_cb(struct l2cap_chan *pchan)
 
 static void delete_netdev(struct work_struct *work)
 {
-       struct lowpan_dev *entry = container_of(work, struct lowpan_dev,
-                                               delete_netdev);
+       struct lowpan_btle_dev *entry = container_of(work,
+                                                    struct lowpan_btle_dev,
+                                                    delete_netdev);
 
        lowpan_unregister_netdev(entry->netdev);
 
@@ -900,8 +901,8 @@ static void delete_netdev(struct work_struct *work)
 
 static void chan_close_cb(struct l2cap_chan *chan)
 {
-       struct lowpan_dev *entry;
-       struct lowpan_dev *dev = NULL;
+       struct lowpan_btle_dev *entry;
+       struct lowpan_btle_dev *dev = NULL;
        struct lowpan_peer *peer;
        int err = -ENOENT;
        bool last = false, remove = true;
@@ -921,7 +922,7 @@ static void chan_close_cb(struct l2cap_chan *chan)
        spin_lock(&devices_lock);
 
        list_for_each_entry_rcu(entry, &bt_6lowpan_devices, list) {
-               dev = lowpan_dev(entry->netdev);
+               dev = lowpan_btle_dev(entry->netdev);
                peer = __peer_lookup_chan(dev, chan);
                if (peer) {
                        last = peer_del(dev, peer);
@@ -1131,7 +1132,7 @@ static int get_l2cap_conn(char *buf, bdaddr_t *addr, u8 *addr_type,
 
 static void disconnect_all_peers(void)
 {
-       struct lowpan_dev *entry;
+       struct lowpan_btle_dev *entry;
        struct lowpan_peer *peer, *tmp_peer, *new_peer;
        struct list_head peers;
 
@@ -1291,7 +1292,7 @@ static ssize_t lowpan_control_write(struct file *fp,
 
 static int lowpan_control_show(struct seq_file *f, void *ptr)
 {
-       struct lowpan_dev *entry;
+       struct lowpan_btle_dev *entry;
        struct lowpan_peer *peer;
 
        spin_lock(&devices_lock);
@@ -1322,7 +1323,7 @@ static const struct file_operations lowpan_control_fops = {
 
 static void disconnect_devices(void)
 {
-       struct lowpan_dev *entry, *tmp, *new_dev;
+       struct lowpan_btle_dev *entry, *tmp, *new_dev;
        struct list_head devices;
 
        INIT_LIST_HEAD(&devices);
@@ -1360,7 +1361,7 @@ static int device_event(struct notifier_block *unused,
                        unsigned long event, void *ptr)
 {
        struct net_device *netdev = netdev_notifier_info_to_dev(ptr);
-       struct lowpan_dev *entry;
+       struct lowpan_btle_dev *entry;
 
        if (netdev->type != ARPHRD_6LOWPAN)
                return NOTIFY_DONE;
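
The wholesale lowpan_dev → lowpan_btle_dev rename above exists because the generic 6LoWPAN layer claims the lowpan_dev name for the per-device data it keeps at the front of netdev_priv(); each link layer, Bluetooth LE included, nests its private struct behind it. Roughly (member list abridged, a sketch of the layout only):

struct lowpan_dev {				/* generic 6LoWPAN core data */
	enum lowpan_lltypes lltype;		/* LOWPAN_LLTYPE_BTLE, ... */
	/* must be last */
	u8 priv[0] __aligned(sizeof(void *));	/* link-layer private area */
};

static inline struct lowpan_dev *lowpan_dev(const struct net_device *dev)
{
	return netdev_priv(dev);
}
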
index 6ceb5d3..f4fcb4a 100644 (file)
@@ -188,7 +188,7 @@ static netdev_tx_t bnep_net_xmit(struct sk_buff *skb,
         * So we have to queue them and wake up session thread which is sleeping
         * on the sk_sleep(sk).
         */
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
        skb_queue_tail(&sk->sk_write_queue, skb);
        wake_up_interruptible(sk_sleep(sk));
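
The one-liner above is part of a tree-wide conversion: dev->trans_start is on its way out, and drivers stamp the last-transmit time through a helper instead. As of this cycle the helper is roughly (hedged reconstruction):

static inline void netif_trans_update(struct net_device *dev)
{
	struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

	/* avoid dirtying the cacheline when nothing changed */
	if (txq->trans_start != jiffies)
		txq->trans_start = jiffies;
}
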
 
index 253bc77..7dbc80d 100644 (file)
@@ -61,6 +61,19 @@ static void __mdb_entry_fill_flags(struct br_mdb_entry *e, unsigned char flags)
                e->flags |= MDB_FLAGS_OFFLOAD;
 }
 
+static void __mdb_entry_to_br_ip(struct br_mdb_entry *entry, struct br_ip *ip)
+{
+       memset(ip, 0, sizeof(struct br_ip));
+       ip->vid = entry->vid;
+       ip->proto = entry->addr.proto;
+       if (ip->proto == htons(ETH_P_IP))
+               ip->u.ip4 = entry->addr.u.ip4;
+#if IS_ENABLED(CONFIG_IPV6)
+       else
+               ip->u.ip6 = entry->addr.u.ip6;
+#endif
+}
+
 static int br_mdb_fill_info(struct sk_buff *skb, struct netlink_callback *cb,
                            struct net_device *dev)
 {
@@ -243,9 +256,45 @@ static inline size_t rtnl_mdb_nlmsg_size(void)
                + nla_total_size(sizeof(struct br_mdb_entry));
 }
 
-static void __br_mdb_notify(struct net_device *dev, struct br_mdb_entry *entry,
-                           int type, struct net_bridge_port_group *pg)
+struct br_mdb_complete_info {
+       struct net_bridge_port *port;
+       struct br_ip ip;
+};
+
+static void br_mdb_complete(struct net_device *dev, int err, void *priv)
 {
+       struct br_mdb_complete_info *data = priv;
+       struct net_bridge_port_group __rcu **pp;
+       struct net_bridge_port_group *p;
+       struct net_bridge_mdb_htable *mdb;
+       struct net_bridge_mdb_entry *mp;
+       struct net_bridge_port *port = data->port;
+       struct net_bridge *br = port->br;
+
+       if (err)
+               goto err;
+
+       spin_lock_bh(&br->multicast_lock);
+       mdb = mlock_dereference(br->mdb, br);
+       mp = br_mdb_ip_get(mdb, &data->ip);
+       if (!mp)
+               goto out;
+       for (pp = &mp->ports; (p = mlock_dereference(*pp, br)) != NULL;
+            pp = &p->next) {
+               if (p->port != port)
+                       continue;
+               p->flags |= MDB_PG_FLAGS_OFFLOAD;
+       }
+out:
+       spin_unlock_bh(&br->multicast_lock);
+err:
+       kfree(priv);
+}
+
+static void __br_mdb_notify(struct net_device *dev, struct net_bridge_port *p,
+                           struct br_mdb_entry *entry, int type)
+{
+       struct br_mdb_complete_info *complete_info;
        struct switchdev_obj_port_mdb mdb = {
                .obj = {
                        .id = SWITCHDEV_OBJ_ID_PORT_MDB,
@@ -268,9 +317,14 @@ static void __br_mdb_notify(struct net_device *dev, struct br_mdb_entry *entry,
 
        mdb.obj.orig_dev = port_dev;
        if (port_dev && type == RTM_NEWMDB) {
-               err = switchdev_port_obj_add(port_dev, &mdb.obj);
-               if (!err && pg)
-                       pg->flags |= MDB_PG_FLAGS_OFFLOAD;
+               complete_info = kmalloc(sizeof(*complete_info), GFP_ATOMIC);
+               if (complete_info) {
+                       complete_info->port = p;
+                       __mdb_entry_to_br_ip(entry, &complete_info->ip);
+                       mdb.obj.complete_priv = complete_info;
+                       mdb.obj.complete = br_mdb_complete;
+                       switchdev_port_obj_add(port_dev, &mdb.obj);
+               }
        } else if (port_dev && type == RTM_DELMDB) {
                switchdev_port_obj_del(port_dev, &mdb.obj);
        }
@@ -291,21 +345,21 @@ errout:
        rtnl_set_sk_err(net, RTNLGRP_MDB, err);
 }
 
-void br_mdb_notify(struct net_device *dev, struct net_bridge_port_group *pg,
-                  int type)
+void br_mdb_notify(struct net_device *dev, struct net_bridge_port *port,
+                  struct br_ip *group, int type, u8 flags)
 {
        struct br_mdb_entry entry;
 
        memset(&entry, 0, sizeof(entry));
-       entry.ifindex = pg->port->dev->ifindex;
-       entry.addr.proto = pg->addr.proto;
-       entry.addr.u.ip4 = pg->addr.u.ip4;
+       entry.ifindex = port->dev->ifindex;
+       entry.addr.proto = group->proto;
+       entry.addr.u.ip4 = group->u.ip4;
 #if IS_ENABLED(CONFIG_IPV6)
-       entry.addr.u.ip6 = pg->addr.u.ip6;
+       entry.addr.u.ip6 = group->u.ip6;
 #endif
-       entry.vid = pg->addr.vid;
-       __mdb_entry_fill_flags(&entry, pg->flags);
-       __br_mdb_notify(dev, &entry, type, pg);
+       entry.vid = group->vid;
+       __mdb_entry_fill_flags(&entry, flags);
+       __br_mdb_notify(dev, port, &entry, type);
 }
 
 static int nlmsg_populate_rtr_fill(struct sk_buff *skb,
@@ -450,8 +504,7 @@ static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh,
 }
 
 static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port,
-                           struct br_ip *group, unsigned char state,
-                           struct net_bridge_port_group **pg)
+                           struct br_ip *group, unsigned char state)
 {
        struct net_bridge_mdb_entry *mp;
        struct net_bridge_port_group *p;
@@ -482,7 +535,6 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port,
        if (unlikely(!p))
                return -ENOMEM;
        rcu_assign_pointer(*pp, p);
-       *pg = p;
        if (state == MDB_TEMPORARY)
                mod_timer(&p->timer, now + br->multicast_membership_interval);
 
@@ -490,8 +542,7 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port,
 }
 
 static int __br_mdb_add(struct net *net, struct net_bridge *br,
-                       struct br_mdb_entry *entry,
-                       struct net_bridge_port_group **pg)
+                       struct br_mdb_entry *entry)
 {
        struct br_ip ip;
        struct net_device *dev;
@@ -509,18 +560,10 @@ static int __br_mdb_add(struct net *net, struct net_bridge *br,
        if (!p || p->br != br || p->state == BR_STATE_DISABLED)
                return -EINVAL;
 
-       memset(&ip, 0, sizeof(ip));
-       ip.vid = entry->vid;
-       ip.proto = entry->addr.proto;
-       if (ip.proto == htons(ETH_P_IP))
-               ip.u.ip4 = entry->addr.u.ip4;
-#if IS_ENABLED(CONFIG_IPV6)
-       else
-               ip.u.ip6 = entry->addr.u.ip6;
-#endif
+       __mdb_entry_to_br_ip(entry, &ip);
 
        spin_lock_bh(&br->multicast_lock);
-       ret = br_mdb_add_group(br, p, &ip, entry->state, pg);
+       ret = br_mdb_add_group(br, p, &ip, entry->state);
        spin_unlock_bh(&br->multicast_lock);
        return ret;
 }
@@ -528,7 +571,6 @@ static int __br_mdb_add(struct net *net, struct net_bridge *br,
 static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh)
 {
        struct net *net = sock_net(skb->sk);
-       struct net_bridge_port_group *pg;
        struct net_bridge_vlan_group *vg;
        struct net_device *dev, *pdev;
        struct br_mdb_entry *entry;
@@ -558,15 +600,15 @@ static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh)
        if (br_vlan_enabled(br) && vg && entry->vid == 0) {
                list_for_each_entry(v, &vg->vlan_list, vlist) {
                        entry->vid = v->vid;
-                       err = __br_mdb_add(net, br, entry, &pg);
+                       err = __br_mdb_add(net, br, entry);
                        if (err)
                                break;
-                       __br_mdb_notify(dev, entry, RTM_NEWMDB, pg);
+                       __br_mdb_notify(dev, p, entry, RTM_NEWMDB);
                }
        } else {
-               err = __br_mdb_add(net, br, entry, &pg);
+               err = __br_mdb_add(net, br, entry);
                if (!err)
-                       __br_mdb_notify(dev, entry, RTM_NEWMDB, pg);
+                       __br_mdb_notify(dev, p, entry, RTM_NEWMDB);
        }
 
        return err;
@@ -584,15 +626,7 @@ static int __br_mdb_del(struct net_bridge *br, struct br_mdb_entry *entry)
        if (!netif_running(br->dev) || br->multicast_disabled)
                return -EINVAL;
 
-       memset(&ip, 0, sizeof(ip));
-       ip.vid = entry->vid;
-       ip.proto = entry->addr.proto;
-       if (ip.proto == htons(ETH_P_IP))
-               ip.u.ip4 = entry->addr.u.ip4;
-#if IS_ENABLED(CONFIG_IPV6)
-       else
-               ip.u.ip6 = entry->addr.u.ip6;
-#endif
+       __mdb_entry_to_br_ip(entry, &ip);
 
        spin_lock_bh(&br->multicast_lock);
        mdb = mlock_dereference(br->mdb, br);
@@ -662,12 +696,12 @@ static int br_mdb_del(struct sk_buff *skb, struct nlmsghdr *nlh)
                        entry->vid = v->vid;
                        err = __br_mdb_del(br, entry);
                        if (!err)
-                               __br_mdb_notify(dev, entry, RTM_DELMDB, NULL);
+                               __br_mdb_notify(dev, p, entry, RTM_DELMDB);
                }
        } else {
                err = __br_mdb_del(br, entry);
                if (!err)
-                       __br_mdb_notify(dev, entry, RTM_DELMDB, NULL);
+                       __br_mdb_notify(dev, p, entry, RTM_DELMDB);
        }
 
        return err;
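
The br_mdb.c rework above replaces a synchronous assumption with a completion: previously the OFFLOAD flag was set whenever switchdev_port_obj_add() returned 0, even though a driver may apply the object asynchronously, and the cached port-group pointer could be freed before that happened. Now the add path copies the lookup keys into a small heap context and lets the driver-invoked callback re-find the group under the multicast lock. Condensed shape of the callback (mark_offloaded() is a hypothetical stand-in for the locked br_mdb_ip_get() walk shown in full above):

static void br_mdb_complete(struct net_device *dev, int err, void *priv)
{
	struct br_mdb_complete_info *data = priv;

	if (!err)
		mark_offloaded(data->port, &data->ip);	/* re-lookup under lock */
	kfree(data);	/* the callback owns the context on every path */
}
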
index a4c15df..191ea66 100644 (file)
@@ -283,7 +283,8 @@ static void br_multicast_del_pg(struct net_bridge *br,
                rcu_assign_pointer(*pp, p->next);
                hlist_del_init(&p->mglist);
                del_timer(&p->timer);
-               br_mdb_notify(br->dev, p, RTM_DELMDB);
+               br_mdb_notify(br->dev, p->port, &pg->addr, RTM_DELMDB,
+                             p->flags);
                call_rcu_bh(&p->rcu, br_multicast_free_pg);
 
                if (!mp->ports && !mp->mglist &&
@@ -705,7 +706,7 @@ static int br_multicast_add_group(struct net_bridge *br,
        if (unlikely(!p))
                goto err;
        rcu_assign_pointer(*pp, p);
-       br_mdb_notify(br->dev, p, RTM_NEWMDB);
+       br_mdb_notify(br->dev, port, group, RTM_NEWMDB, 0);
 
 found:
        mod_timer(&p->timer, now + br->multicast_membership_interval);
@@ -1461,7 +1462,8 @@ br_multicast_leave_group(struct net_bridge *br,
                        hlist_del_init(&p->mglist);
                        del_timer(&p->timer);
                        call_rcu_bh(&p->rcu, br_multicast_free_pg);
-                       br_mdb_notify(br->dev, p, RTM_DELMDB);
+                       br_mdb_notify(br->dev, port, group, RTM_DELMDB,
+                                     p->flags);
 
                        if (!mp->ports && !mp->mglist &&
                            netif_running(br->dev))
index 44114a9..2d25979 100644 (file)
@@ -217,13 +217,13 @@ static int br_validate_ipv4(struct net *net, struct sk_buff *skb)
 
        len = ntohs(iph->tot_len);
        if (skb->len < len) {
-               IP_INC_STATS_BH(net, IPSTATS_MIB_INTRUNCATEDPKTS);
+               __IP_INC_STATS(net, IPSTATS_MIB_INTRUNCATEDPKTS);
                goto drop;
        } else if (len < (iph->ihl*4))
                goto inhdr_error;
 
        if (pskb_trim_rcsum(skb, len)) {
-               IP_INC_STATS_BH(net, IPSTATS_MIB_INDISCARDS);
+               __IP_INC_STATS(net, IPSTATS_MIB_INDISCARDS);
                goto drop;
        }
 
@@ -236,7 +236,7 @@ static int br_validate_ipv4(struct net *net, struct sk_buff *skb)
        return 0;
 
 inhdr_error:
-       IP_INC_STATS_BH(net, IPSTATS_MIB_INHDRERRORS);
+       __IP_INC_STATS(net, IPSTATS_MIB_INHDRERRORS);
 drop:
        return -1;
 }
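
The IP_INC_STATS_BH → __IP_INC_STATS conversions here (and their IPv6 twins below) are mechanical renames from this cycle's SNMP counter cleanup: the double-underscore variant is the one that assumes BH/softirq context, matching what the old _BH suffix meant, while the plain name stays safe anywhere. A hedged reconstruction of the pair:

#define IP_INC_STATS(net, field) \
	SNMP_INC_STATS64((net)->mib.ip_statistics, field)
#define __IP_INC_STATS(net, field) \
	__SNMP_INC_STATS64((net)->mib.ip_statistics, field)	/* BH context */
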
index d61f56e..5e59a84 100644 (file)
@@ -122,13 +122,13 @@ int br_validate_ipv6(struct net *net, struct sk_buff *skb)
 
        if (pkt_len || hdr->nexthdr != NEXTHDR_HOP) {
                if (pkt_len + ip6h_len > skb->len) {
-                       IP6_INC_STATS_BH(net, idev,
-                                        IPSTATS_MIB_INTRUNCATEDPKTS);
+                       __IP6_INC_STATS(net, idev,
+                                       IPSTATS_MIB_INTRUNCATEDPKTS);
                        goto drop;
                }
                if (pskb_trim_rcsum(skb, pkt_len + ip6h_len)) {
-                       IP6_INC_STATS_BH(net, idev,
-                                        IPSTATS_MIB_INDISCARDS);
+                       __IP6_INC_STATS(net, idev,
+                                       IPSTATS_MIB_INDISCARDS);
                        goto drop;
                }
        }
@@ -142,7 +142,7 @@ int br_validate_ipv6(struct net *net, struct sk_buff *skb)
        return 0;
 
 inhdr_error:
-       IP6_INC_STATS_BH(net, idev, IPSTATS_MIB_INHDRERRORS);
+       __IP6_INC_STATS(net, idev, IPSTATS_MIB_INHDRERRORS);
 drop:
        return -1;
 }
index e9c635e..a5343c7 100644 (file)
@@ -135,9 +135,9 @@ static inline size_t br_port_info_size(void)
                + nla_total_size(sizeof(u16))   /* IFLA_BRPORT_NO */
                + nla_total_size(sizeof(u8))    /* IFLA_BRPORT_TOPOLOGY_CHANGE_ACK */
                + nla_total_size(sizeof(u8))    /* IFLA_BRPORT_CONFIG_PENDING */
-               + nla_total_size(sizeof(u64))   /* IFLA_BRPORT_MESSAGE_AGE_TIMER */
-               + nla_total_size(sizeof(u64))   /* IFLA_BRPORT_FORWARD_DELAY_TIMER */
-               + nla_total_size(sizeof(u64))   /* IFLA_BRPORT_HOLD_TIMER */
+               + nla_total_size_64bit(sizeof(u64)) /* IFLA_BRPORT_MESSAGE_AGE_TIMER */
+               + nla_total_size_64bit(sizeof(u64)) /* IFLA_BRPORT_FORWARD_DELAY_TIMER */
+               + nla_total_size_64bit(sizeof(u64)) /* IFLA_BRPORT_HOLD_TIMER */
 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
                + nla_total_size(sizeof(u8))    /* IFLA_BRPORT_MULTICAST_ROUTER */
 #endif
@@ -190,13 +190,16 @@ static int br_port_fill_attrs(struct sk_buff *skb,
                return -EMSGSIZE;
 
        timerval = br_timer_value(&p->message_age_timer);
-       if (nla_put_u64(skb, IFLA_BRPORT_MESSAGE_AGE_TIMER, timerval))
+       if (nla_put_u64_64bit(skb, IFLA_BRPORT_MESSAGE_AGE_TIMER, timerval,
+                             IFLA_BRPORT_PAD))
                return -EMSGSIZE;
        timerval = br_timer_value(&p->forward_delay_timer);
-       if (nla_put_u64(skb, IFLA_BRPORT_FORWARD_DELAY_TIMER, timerval))
+       if (nla_put_u64_64bit(skb, IFLA_BRPORT_FORWARD_DELAY_TIMER, timerval,
+                             IFLA_BRPORT_PAD))
                return -EMSGSIZE;
        timerval = br_timer_value(&p->hold_timer);
-       if (nla_put_u64(skb, IFLA_BRPORT_HOLD_TIMER, timerval))
+       if (nla_put_u64_64bit(skb, IFLA_BRPORT_HOLD_TIMER, timerval,
+                             IFLA_BRPORT_PAD))
                return -EMSGSIZE;
 
 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
@@ -847,6 +850,7 @@ static const struct nla_policy br_policy[IFLA_BR_MAX + 1] = {
        [IFLA_BR_NF_CALL_IP6TABLES] = { .type = NLA_U8 },
        [IFLA_BR_NF_CALL_ARPTABLES] = { .type = NLA_U8 },
        [IFLA_BR_VLAN_DEFAULT_PVID] = { .type = NLA_U16 },
+       [IFLA_BR_VLAN_STATS_ENABLED] = { .type = NLA_U8 },
 };
 
 static int br_changelink(struct net_device *brdev, struct nlattr *tb[],
@@ -918,6 +922,14 @@ static int br_changelink(struct net_device *brdev, struct nlattr *tb[],
                if (err)
                        return err;
        }
+
+       if (data[IFLA_BR_VLAN_STATS_ENABLED]) {
+               __u8 vlan_stats = nla_get_u8(data[IFLA_BR_VLAN_STATS_ENABLED]);
+
+               err = br_vlan_set_stats(br, vlan_stats);
+               if (err)
+                       return err;
+       }
 #endif
 
        if (data[IFLA_BR_GROUP_FWD_MASK]) {
@@ -1079,6 +1091,7 @@ static size_t br_get_size(const struct net_device *brdev)
 #ifdef CONFIG_BRIDGE_VLAN_FILTERING
               nla_total_size(sizeof(__be16)) + /* IFLA_BR_VLAN_PROTOCOL */
               nla_total_size(sizeof(u16)) +    /* IFLA_BR_VLAN_DEFAULT_PVID */
+              nla_total_size(sizeof(u8)) +     /* IFLA_BR_VLAN_STATS_ENABLED */
 #endif
               nla_total_size(sizeof(u16)) +    /* IFLA_BR_GROUP_FWD_MASK */
               nla_total_size(sizeof(struct ifla_bridge_id)) +   /* IFLA_BR_ROOT_ID */
@@ -1087,10 +1100,10 @@ static size_t br_get_size(const struct net_device *brdev)
               nla_total_size(sizeof(u32)) +    /* IFLA_BR_ROOT_PATH_COST */
               nla_total_size(sizeof(u8)) +     /* IFLA_BR_TOPOLOGY_CHANGE */
               nla_total_size(sizeof(u8)) +     /* IFLA_BR_TOPOLOGY_CHANGE_DETECTED */
-              nla_total_size(sizeof(u64)) +    /* IFLA_BR_HELLO_TIMER */
-              nla_total_size(sizeof(u64)) +    /* IFLA_BR_TCN_TIMER */
-              nla_total_size(sizeof(u64)) +    /* IFLA_BR_TOPOLOGY_CHANGE_TIMER */
-              nla_total_size(sizeof(u64)) +    /* IFLA_BR_GC_TIMER */
+              nla_total_size_64bit(sizeof(u64)) + /* IFLA_BR_HELLO_TIMER */
+              nla_total_size_64bit(sizeof(u64)) + /* IFLA_BR_TCN_TIMER */
+              nla_total_size_64bit(sizeof(u64)) + /* IFLA_BR_TOPOLOGY_CHANGE_TIMER */
+              nla_total_size_64bit(sizeof(u64)) + /* IFLA_BR_GC_TIMER */
               nla_total_size(ETH_ALEN) +       /* IFLA_BR_GROUP_ADDR */
 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
               nla_total_size(sizeof(u8)) +     /* IFLA_BR_MCAST_ROUTER */
@@ -1101,12 +1114,12 @@ static size_t br_get_size(const struct net_device *brdev)
               nla_total_size(sizeof(u32)) +    /* IFLA_BR_MCAST_HASH_MAX */
               nla_total_size(sizeof(u32)) +    /* IFLA_BR_MCAST_LAST_MEMBER_CNT */
               nla_total_size(sizeof(u32)) +    /* IFLA_BR_MCAST_STARTUP_QUERY_CNT */
-              nla_total_size(sizeof(u64)) +    /* IFLA_BR_MCAST_LAST_MEMBER_INTVL */
-              nla_total_size(sizeof(u64)) +    /* IFLA_BR_MCAST_MEMBERSHIP_INTVL */
-              nla_total_size(sizeof(u64)) +    /* IFLA_BR_MCAST_QUERIER_INTVL */
-              nla_total_size(sizeof(u64)) +    /* IFLA_BR_MCAST_QUERY_INTVL */
-              nla_total_size(sizeof(u64)) +    /* IFLA_BR_MCAST_QUERY_RESPONSE_INTVL */
-              nla_total_size(sizeof(u64)) +    /* IFLA_BR_MCAST_STARTUP_QUERY_INTVL */
+              nla_total_size_64bit(sizeof(u64)) + /* IFLA_BR_MCAST_LAST_MEMBER_INTVL */
+              nla_total_size_64bit(sizeof(u64)) + /* IFLA_BR_MCAST_MEMBERSHIP_INTVL */
+              nla_total_size_64bit(sizeof(u64)) + /* IFLA_BR_MCAST_QUERIER_INTVL */
+              nla_total_size_64bit(sizeof(u64)) + /* IFLA_BR_MCAST_QUERY_INTVL */
+              nla_total_size_64bit(sizeof(u64)) + /* IFLA_BR_MCAST_QUERY_RESPONSE_INTVL */
+              nla_total_size_64bit(sizeof(u64)) + /* IFLA_BR_MCAST_STARTUP_QUERY_INTVL */
 #endif
 #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
               nla_total_size(sizeof(u8)) +     /* IFLA_BR_NF_CALL_IPTABLES */
@@ -1129,16 +1142,17 @@ static int br_fill_info(struct sk_buff *skb, const struct net_device *brdev)
        u64 clockval;
 
        clockval = br_timer_value(&br->hello_timer);
-       if (nla_put_u64(skb, IFLA_BR_HELLO_TIMER, clockval))
+       if (nla_put_u64_64bit(skb, IFLA_BR_HELLO_TIMER, clockval, IFLA_BR_PAD))
                return -EMSGSIZE;
        clockval = br_timer_value(&br->tcn_timer);
-       if (nla_put_u64(skb, IFLA_BR_TCN_TIMER, clockval))
+       if (nla_put_u64_64bit(skb, IFLA_BR_TCN_TIMER, clockval, IFLA_BR_PAD))
                return -EMSGSIZE;
        clockval = br_timer_value(&br->topology_change_timer);
-       if (nla_put_u64(skb, IFLA_BR_TOPOLOGY_CHANGE_TIMER, clockval))
+       if (nla_put_u64_64bit(skb, IFLA_BR_TOPOLOGY_CHANGE_TIMER, clockval,
+                             IFLA_BR_PAD))
                return -EMSGSIZE;
        clockval = br_timer_value(&br->gc_timer);
-       if (nla_put_u64(skb, IFLA_BR_GC_TIMER, clockval))
+       if (nla_put_u64_64bit(skb, IFLA_BR_GC_TIMER, clockval, IFLA_BR_PAD))
                return -EMSGSIZE;
 
        if (nla_put_u32(skb, IFLA_BR_FORWARD_DELAY, forward_delay) ||
@@ -1163,7 +1177,8 @@ static int br_fill_info(struct sk_buff *skb, const struct net_device *brdev)
 
 #ifdef CONFIG_BRIDGE_VLAN_FILTERING
        if (nla_put_be16(skb, IFLA_BR_VLAN_PROTOCOL, br->vlan_proto) ||
-           nla_put_u16(skb, IFLA_BR_VLAN_DEFAULT_PVID, br->default_pvid))
+           nla_put_u16(skb, IFLA_BR_VLAN_DEFAULT_PVID, br->default_pvid) ||
+           nla_put_u8(skb, IFLA_BR_VLAN_STATS_ENABLED, br->vlan_stats_enabled))
                return -EMSGSIZE;
 #endif
 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
@@ -1182,22 +1197,28 @@ static int br_fill_info(struct sk_buff *skb, const struct net_device *brdev)
                return -EMSGSIZE;
 
        clockval = jiffies_to_clock_t(br->multicast_last_member_interval);
-       if (nla_put_u64(skb, IFLA_BR_MCAST_LAST_MEMBER_INTVL, clockval))
+       if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_LAST_MEMBER_INTVL, clockval,
+                             IFLA_BR_PAD))
                return -EMSGSIZE;
        clockval = jiffies_to_clock_t(br->multicast_membership_interval);
-       if (nla_put_u64(skb, IFLA_BR_MCAST_MEMBERSHIP_INTVL, clockval))
+       if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_MEMBERSHIP_INTVL, clockval,
+                             IFLA_BR_PAD))
                return -EMSGSIZE;
        clockval = jiffies_to_clock_t(br->multicast_querier_interval);
-       if (nla_put_u64(skb, IFLA_BR_MCAST_QUERIER_INTVL, clockval))
+       if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_QUERIER_INTVL, clockval,
+                             IFLA_BR_PAD))
                return -EMSGSIZE;
        clockval = jiffies_to_clock_t(br->multicast_query_interval);
-       if (nla_put_u64(skb, IFLA_BR_MCAST_QUERY_INTVL, clockval))
+       if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_QUERY_INTVL, clockval,
+                             IFLA_BR_PAD))
                return -EMSGSIZE;
        clockval = jiffies_to_clock_t(br->multicast_query_response_interval);
-       if (nla_put_u64(skb, IFLA_BR_MCAST_QUERY_RESPONSE_INTVL, clockval))
+       if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_QUERY_RESPONSE_INTVL, clockval,
+                             IFLA_BR_PAD))
                return -EMSGSIZE;
        clockval = jiffies_to_clock_t(br->multicast_startup_query_interval);
-       if (nla_put_u64(skb, IFLA_BR_MCAST_STARTUP_QUERY_INTVL, clockval))
+       if (nla_put_u64_64bit(skb, IFLA_BR_MCAST_STARTUP_QUERY_INTVL, clockval,
+                             IFLA_BR_PAD))
                return -EMSGSIZE;
 #endif
 #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
@@ -1213,6 +1234,69 @@ static int br_fill_info(struct sk_buff *skb, const struct net_device *brdev)
        return 0;
 }
 
+static size_t br_get_linkxstats_size(const struct net_device *dev)
+{
+       struct net_bridge *br = netdev_priv(dev);
+       struct net_bridge_vlan_group *vg;
+       struct net_bridge_vlan *v;
+       int numvls = 0;
+
+       vg = br_vlan_group(br);
+       if (!vg)
+               return 0;
+
+       /* we need to count all, even placeholder entries */
+       list_for_each_entry(v, &vg->vlan_list, vlist)
+               numvls++;
+
+       /* account for the vlans and the link xstats type nest attribute */
+       return numvls * nla_total_size(sizeof(struct bridge_vlan_xstats)) +
+              nla_total_size(0);
+}
+
+static int br_fill_linkxstats(struct sk_buff *skb, const struct net_device *dev,
+                             int *prividx)
+{
+       struct net_bridge *br = netdev_priv(dev);
+       struct net_bridge_vlan_group *vg;
+       struct net_bridge_vlan *v;
+       struct nlattr *nest;
+       int vl_idx = 0;
+
+       vg = br_vlan_group(br);
+       if (!vg)
+               goto out;
+       nest = nla_nest_start(skb, LINK_XSTATS_TYPE_BRIDGE);
+       if (!nest)
+               return -EMSGSIZE;
+       list_for_each_entry(v, &vg->vlan_list, vlist) {
+               struct bridge_vlan_xstats vxi;
+               struct br_vlan_stats stats;
+
+               if (vl_idx++ < *prividx)
+                       continue;
+               memset(&vxi, 0, sizeof(vxi));
+               vxi.vid = v->vid;
+               br_vlan_get_stats(v, &stats);
+               vxi.rx_bytes = stats.rx_bytes;
+               vxi.rx_packets = stats.rx_packets;
+               vxi.tx_bytes = stats.tx_bytes;
+               vxi.tx_packets = stats.tx_packets;
+
+               if (nla_put(skb, BRIDGE_XSTATS_VLAN, sizeof(vxi), &vxi))
+                       goto nla_put_failure;
+       }
+       nla_nest_end(skb, nest);
+       *prividx = 0;
+out:
+       return 0;
+
+nla_put_failure:
+       nla_nest_end(skb, nest);
+       *prividx = vl_idx;
+
+       return -EMSGSIZE;
+}
 
 static struct rtnl_af_ops br_af_ops __read_mostly = {
        .family                 = AF_BRIDGE,
@@ -1231,6 +1315,8 @@ struct rtnl_link_ops br_link_ops __read_mostly = {
        .dellink                = br_dev_delete,
        .get_size               = br_get_size,
        .fill_info              = br_fill_info,
+       .fill_linkxstats        = br_fill_linkxstats,
+       .get_linkxstats_size    = br_get_linkxstats_size,
 
        .slave_maxtype          = IFLA_BRPORT_MAX,
        .slave_policy           = br_port_policy,
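
Every nla_put_u64 → nla_put_u64_64bit conversion above pairs with an nla_total_size_64bit() in the size estimate, for one reason: on architectures without efficient unaligned access the put helper first emits a zero-length pad attribute (IFLA_BR_PAD / IFLA_BRPORT_PAD) so the u64 payload lands 8-byte aligned, and that extra attribute has to be budgeted for. The size helper is roughly (hedged reconstruction):

static inline int nla_total_size_64bit(int payload)
{
	return NLA_ALIGN(nla_attr_size(payload))
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
		+ NLA_ALIGN(nla_attr_size(0))	/* room for the pad attribute */
#endif
		;
}
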
index 1b5d145..c7fb5d7 100644 (file)
@@ -77,12 +77,21 @@ struct bridge_mcast_querier {
 };
 #endif
 
+struct br_vlan_stats {
+       u64 rx_bytes;
+       u64 rx_packets;
+       u64 tx_bytes;
+       u64 tx_packets;
+       struct u64_stats_sync syncp;
+};
+
 /**
  * struct net_bridge_vlan - per-vlan entry
  *
  * @vnode: rhashtable member
  * @vid: VLAN id
  * @flags: bridge vlan flags
+ * @stats: per-cpu VLAN statistics
  * @br: if MASTER flag set, this points to a bridge struct
  * @port: if MASTER flag unset, this points to a port struct
  * @refcnt: if MASTER flag set, this is bumped for each port referencing it
@@ -100,6 +109,7 @@ struct net_bridge_vlan {
        struct rhash_head               vnode;
        u16                             vid;
        u16                             flags;
+       struct br_vlan_stats __percpu   *stats;
        union {
                struct net_bridge       *br;
                struct net_bridge_port  *port;
@@ -342,6 +352,7 @@ struct net_bridge
 #ifdef CONFIG_BRIDGE_VLAN_FILTERING
        struct net_bridge_vlan_group    __rcu *vlgrp;
        u8                              vlan_enabled;
+       u8                              vlan_stats_enabled;
        __be16                          vlan_proto;
        u16                             default_pvid;
 #endif
@@ -560,8 +571,8 @@ br_multicast_new_port_group(struct net_bridge_port *port, struct br_ip *group,
                            unsigned char flags);
 void br_mdb_init(void);
 void br_mdb_uninit(void);
-void br_mdb_notify(struct net_device *dev, struct net_bridge_port_group *pg,
-                  int type);
+void br_mdb_notify(struct net_device *dev, struct net_bridge_port *port,
+                  struct br_ip *group, int type, u8 flags);
 void br_rtr_notify(struct net_device *dev, struct net_bridge_port *port,
                   int type);
 
@@ -691,6 +702,7 @@ int __br_vlan_filter_toggle(struct net_bridge *br, unsigned long val);
 int br_vlan_filter_toggle(struct net_bridge *br, unsigned long val);
 int __br_vlan_set_proto(struct net_bridge *br, __be16 proto);
 int br_vlan_set_proto(struct net_bridge *br, unsigned long val);
+int br_vlan_set_stats(struct net_bridge *br, unsigned long val);
 int br_vlan_init(struct net_bridge *br);
 int br_vlan_set_default_pvid(struct net_bridge *br, unsigned long val);
 int __br_vlan_set_default_pvid(struct net_bridge *br, u16 pvid);
@@ -699,6 +711,8 @@ int nbp_vlan_delete(struct net_bridge_port *port, u16 vid);
 void nbp_vlan_flush(struct net_bridge_port *port);
 int nbp_vlan_init(struct net_bridge_port *port);
 int nbp_get_num_vlan_infos(struct net_bridge_port *p, u32 filter_mask);
+void br_vlan_get_stats(const struct net_bridge_vlan *v,
+                      struct br_vlan_stats *stats);
 
 static inline struct net_bridge_vlan_group *br_vlan_group(
                                        const struct net_bridge *br)
@@ -881,6 +895,10 @@ static inline struct net_bridge_vlan_group *nbp_vlan_group_rcu(
        return NULL;
 }
 
+static inline void br_vlan_get_stats(const struct net_bridge_vlan *v,
+                                    struct br_vlan_stats *stats)
+{
+}
 #endif
 
 struct nf_br_ops {
index 70bddfd..beb4707 100644 (file)
@@ -731,6 +731,22 @@ static ssize_t default_pvid_store(struct device *d,
        return store_bridge_parm(d, buf, len, br_vlan_set_default_pvid);
 }
 static DEVICE_ATTR_RW(default_pvid);
+
+static ssize_t vlan_stats_enabled_show(struct device *d,
+                                      struct device_attribute *attr,
+                                      char *buf)
+{
+       struct net_bridge *br = to_bridge(d);
+       return sprintf(buf, "%u\n", br->vlan_stats_enabled);
+}
+
+static ssize_t vlan_stats_enabled_store(struct device *d,
+                                       struct device_attribute *attr,
+                                       const char *buf, size_t len)
+{
+       return store_bridge_parm(d, buf, len, br_vlan_set_stats);
+}
+static DEVICE_ATTR_RW(vlan_stats_enabled);
 #endif
 
 static struct attribute *bridge_attrs[] = {
@@ -778,6 +794,7 @@ static struct attribute *bridge_attrs[] = {
        &dev_attr_vlan_filtering.attr,
        &dev_attr_vlan_protocol.attr,
        &dev_attr_default_pvid.attr,
+       &dev_attr_vlan_stats_enabled.attr,
 #endif
        NULL
 };
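
With the attribute registered above, the knob appears alongside the other bridge parameters. A hypothetical userspace toggle (assumes a bridge named br0 and the usual sysfs layout for bridge attributes):

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/class/net/br0/bridge/vlan_stats_enabled", "w");

	if (!f)
		return 1;
	fputs("1\n", f);	/* br_vlan_set_stats() accepts only 0 or 1 */
	return fclose(f) ? 1 : 0;
}
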
index e001152..b6de4f4 100644 (file)
@@ -162,6 +162,17 @@ static struct net_bridge_vlan *br_vlan_get_master(struct net_bridge *br, u16 vid
        return masterv;
 }
 
+static void br_master_vlan_rcu_free(struct rcu_head *rcu)
+{
+       struct net_bridge_vlan *v;
+
+       v = container_of(rcu, struct net_bridge_vlan, rcu);
+       WARN_ON(!br_vlan_is_master(v));
+       free_percpu(v->stats);
+       v->stats = NULL;
+       kfree(v);
+}
+
 static void br_vlan_put_master(struct net_bridge_vlan *masterv)
 {
        struct net_bridge_vlan_group *vg;
@@ -174,7 +185,7 @@ static void br_vlan_put_master(struct net_bridge_vlan *masterv)
                rhashtable_remove_fast(&vg->vlan_hash,
                                       &masterv->vnode, br_vlan_rht_params);
                __vlan_del_list(masterv);
-               kfree_rcu(masterv, rcu);
+               call_rcu(&masterv->rcu, br_master_vlan_rcu_free);
        }
 }
 
@@ -230,6 +241,7 @@ static int __vlan_add(struct net_bridge_vlan *v, u16 flags)
                if (!masterv)
                        goto out_filt;
                v->brvlan = masterv;
+               v->stats = masterv->stats;
        }
 
        /* Add the dev mac and count the vlan only if it's usable */
@@ -329,6 +341,7 @@ struct sk_buff *br_handle_vlan(struct net_bridge *br,
                               struct net_bridge_vlan_group *vg,
                               struct sk_buff *skb)
 {
+       struct br_vlan_stats *stats;
        struct net_bridge_vlan *v;
        u16 vid;
 
@@ -355,18 +368,27 @@ struct sk_buff *br_handle_vlan(struct net_bridge *br,
                        return NULL;
                }
        }
+       if (br->vlan_stats_enabled) {
+               stats = this_cpu_ptr(v->stats);
+               u64_stats_update_begin(&stats->syncp);
+               stats->tx_bytes += skb->len;
+               stats->tx_packets++;
+               u64_stats_update_end(&stats->syncp);
+       }
+
        if (v->flags & BRIDGE_VLAN_INFO_UNTAGGED)
                skb->vlan_tci = 0;
-
 out:
        return skb;
 }
 
 /* Called under RCU */
-static bool __allowed_ingress(struct net_bridge_vlan_group *vg, __be16 proto,
+static bool __allowed_ingress(const struct net_bridge *br,
+                             struct net_bridge_vlan_group *vg,
                              struct sk_buff *skb, u16 *vid)
 {
-       const struct net_bridge_vlan *v;
+       struct br_vlan_stats *stats;
+       struct net_bridge_vlan *v;
        bool tagged;
 
        BR_INPUT_SKB_CB(skb)->vlan_filtered = true;
@@ -375,7 +397,7 @@ static bool __allowed_ingress(struct net_bridge_vlan_group *vg, __be16 proto,
         * HW accelerated vlan tag.
         */
        if (unlikely(!skb_vlan_tag_present(skb) &&
-                    skb->protocol == proto)) {
+                    skb->protocol == br->vlan_proto)) {
                skb = skb_vlan_untag(skb);
                if (unlikely(!skb))
                        return false;
@@ -383,7 +405,7 @@ static bool __allowed_ingress(struct net_bridge_vlan_group *vg, __be16 proto,
 
        if (!br_vlan_get_tag(skb, vid)) {
                /* Tagged frame */
-               if (skb->vlan_proto != proto) {
+               if (skb->vlan_proto != br->vlan_proto) {
                        /* Protocol-mismatch, empty out vlan_tci for new tag */
                        skb_push(skb, ETH_HLEN);
                        skb = vlan_insert_tag_set_proto(skb, skb->vlan_proto,
@@ -419,7 +441,7 @@ static bool __allowed_ingress(struct net_bridge_vlan_group *vg, __be16 proto,
                *vid = pvid;
                if (likely(!tagged))
                        /* Untagged Frame. */
-                       __vlan_hwaccel_put_tag(skb, proto, pvid);
+                       __vlan_hwaccel_put_tag(skb, br->vlan_proto, pvid);
                else
                        /* Priority-tagged Frame.
                         * At this point, We know that skb->vlan_tci had
@@ -428,13 +450,24 @@ static bool __allowed_ingress(struct net_bridge_vlan_group *vg, __be16 proto,
                         */
                        skb->vlan_tci |= pvid;
 
-               return true;
+               /* if stats are disabled we can avoid the lookup */
+               if (!br->vlan_stats_enabled)
+                       return true;
        }
-
-       /* Frame had a valid vlan tag.  See if vlan is allowed */
        v = br_vlan_find(vg, *vid);
-       if (v && br_vlan_should_use(v))
-               return true;
+       if (!v || !br_vlan_should_use(v))
+               goto drop;
+
+       if (br->vlan_stats_enabled) {
+               stats = this_cpu_ptr(v->stats);
+               u64_stats_update_begin(&stats->syncp);
+               stats->rx_bytes += skb->len;
+               stats->rx_packets++;
+               u64_stats_update_end(&stats->syncp);
+       }
+
+       return true;
+
 drop:
        kfree_skb(skb);
        return false;
@@ -452,7 +485,7 @@ bool br_allowed_ingress(const struct net_bridge *br,
                return true;
        }
 
-       return __allowed_ingress(vg, br->vlan_proto, skb, vid);
+       return __allowed_ingress(br, vg, skb, vid);
 }
 
 /* Called under RCU. */
@@ -542,6 +575,11 @@ int br_vlan_add(struct net_bridge *br, u16 vid, u16 flags)
        if (!vlan)
                return -ENOMEM;
 
+       vlan->stats = netdev_alloc_pcpu_stats(struct br_vlan_stats);
+       if (!vlan->stats) {
+               kfree(vlan);
+               return -ENOMEM;
+       }
        vlan->vid = vid;
        vlan->flags = flags | BRIDGE_VLAN_INFO_MASTER;
        vlan->flags &= ~BRIDGE_VLAN_INFO_PVID;
@@ -549,8 +587,10 @@ int br_vlan_add(struct net_bridge *br, u16 vid, u16 flags)
        if (flags & BRIDGE_VLAN_INFO_BRENTRY)
                atomic_set(&vlan->refcnt, 1);
        ret = __vlan_add(vlan, flags);
-       if (ret)
+       if (ret) {
+               free_percpu(vlan->stats);
                kfree(vlan);
+       }
 
        return ret;
 }
@@ -711,6 +751,20 @@ int br_vlan_set_proto(struct net_bridge *br, unsigned long val)
        return __br_vlan_set_proto(br, htons(val));
 }
 
+int br_vlan_set_stats(struct net_bridge *br, unsigned long val)
+{
+       switch (val) {
+       case 0:
+       case 1:
+               br->vlan_stats_enabled = val;
+               break;
+       default:
+               return -EINVAL;
+       }
+
+       return 0;
+}
+
 static bool vlan_default_pvid(struct net_bridge_vlan_group *vg, u16 vid)
 {
        struct net_bridge_vlan *v;
@@ -1000,3 +1054,30 @@ void nbp_vlan_flush(struct net_bridge_port *port)
        synchronize_rcu();
        __vlan_group_free(vg);
 }
+
+void br_vlan_get_stats(const struct net_bridge_vlan *v,
+                      struct br_vlan_stats *stats)
+{
+       int i;
+
+       memset(stats, 0, sizeof(*stats));
+       for_each_possible_cpu(i) {
+               u64 rxpackets, rxbytes, txpackets, txbytes;
+               struct br_vlan_stats *cpu_stats;
+               unsigned int start;
+
+               cpu_stats = per_cpu_ptr(v->stats, i);
+               do {
+                       start = u64_stats_fetch_begin_irq(&cpu_stats->syncp);
+                       rxpackets = cpu_stats->rx_packets;
+                       rxbytes = cpu_stats->rx_bytes;
+                       txbytes = cpu_stats->tx_bytes;
+                       txpackets = cpu_stats->tx_packets;
+               } while (u64_stats_fetch_retry_irq(&cpu_stats->syncp, start));
+
+               stats->rx_packets += rxpackets;
+               stats->rx_bytes += rxbytes;
+               stats->tx_bytes += txbytes;
+               stats->tx_packets += txpackets;
+       }
+}
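
Two lifetime details tie the br_vlan.c hunks together. First, port vlans share the master vlan's per-cpu counters (v->stats = masterv->stats in __vlan_add()), which is why only br_master_vlan_rcu_free() releases them. Second, br_vlan_add() leans on netdev_alloc_pcpu_stats() to return counters whose u64_stats_sync is initialised on every possible CPU, so the fetch/retry loop in br_vlan_get_stats() never sees an uninitialised seqcount on 32-bit hosts. The allocator is roughly (hedged reconstruction):

#define netdev_alloc_pcpu_stats(type)					\
({									\
	typeof(type) __percpu *pcpu_stats = alloc_percpu(type);		\
	if (pcpu_stats) {						\
		int __cpu;						\
		for_each_possible_cpu(__cpu) {				\
			typeof(type) *stat;				\
			stat = per_cpu_ptr(pcpu_stats, __cpu);		\
			u64_stats_init(&stat->syncp);			\
		}							\
	}								\
	pcpu_stats;							\
})
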
index 6b923bc..2bc5965 100644 (file)
@@ -293,13 +293,9 @@ int ceph_auth_create_authorizer(struct ceph_auth_client *ac,
 }
 EXPORT_SYMBOL(ceph_auth_create_authorizer);
 
-void ceph_auth_destroy_authorizer(struct ceph_auth_client *ac,
-                                 struct ceph_authorizer *a)
+void ceph_auth_destroy_authorizer(struct ceph_authorizer *a)
 {
-       mutex_lock(&ac->mutex);
-       if (ac->ops && ac->ops->destroy_authorizer)
-               ac->ops->destroy_authorizer(ac, a);
-       mutex_unlock(&ac->mutex);
+       a->destroy(a);
 }
 EXPORT_SYMBOL(ceph_auth_destroy_authorizer);
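
The ceph hunks (continuing in auth_none and auth_x below) all serve the one-line body above: an authorizer now carries its own destructor, so it can be destroyed without reaching back to the ceph_auth_client or taking its mutex, and each protocol allocates a per-connection authorizer instead of sharing a static one. The base type shrinks to roughly:

struct ceph_authorizer {
	void (*destroy)(struct ceph_authorizer *);	/* set by whichever protocol built it */
};
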
 
index 8c93fa8..5f836f0 100644 (file)
@@ -16,7 +16,6 @@ static void reset(struct ceph_auth_client *ac)
        struct ceph_auth_none_info *xi = ac->private;
 
        xi->starting = true;
-       xi->built_authorizer = false;
 }
 
 static void destroy(struct ceph_auth_client *ac)
@@ -39,6 +38,27 @@ static int should_authenticate(struct ceph_auth_client *ac)
        return xi->starting;
 }
 
+static int ceph_auth_none_build_authorizer(struct ceph_auth_client *ac,
+                                          struct ceph_none_authorizer *au)
+{
+       void *p = au->buf;
+       void *const end = p + sizeof(au->buf);
+       int ret;
+
+       ceph_encode_8_safe(&p, end, 1, e_range);
+       ret = ceph_entity_name_encode(ac->name, &p, end);
+       if (ret < 0)
+               return ret;
+
+       ceph_encode_64_safe(&p, end, ac->global_id, e_range);
+       au->buf_len = p - (void *)au->buf;
+       dout("%s built authorizer len %d\n", __func__, au->buf_len);
+       return 0;
+
+e_range:
+       return -ERANGE;
+}
+
 static int build_request(struct ceph_auth_client *ac, void *buf, void *end)
 {
        return 0;
@@ -57,32 +77,32 @@ static int handle_reply(struct ceph_auth_client *ac, int result,
        return result;
 }
 
+static void ceph_auth_none_destroy_authorizer(struct ceph_authorizer *a)
+{
+       kfree(a);
+}
+
 /*
- * build an 'authorizer' with our entity_name and global_id.  we can
- * reuse a single static copy since it is identical for all services
- * we connect to.
+ * build an 'authorizer' with our entity_name and global_id.  it is
+ * identical for all services we connect to.
  */
 static int ceph_auth_none_create_authorizer(
        struct ceph_auth_client *ac, int peer_type,
        struct ceph_auth_handshake *auth)
 {
-       struct ceph_auth_none_info *ai = ac->private;
-       struct ceph_none_authorizer *au = &ai->au;
-       void *p, *end;
+       struct ceph_none_authorizer *au;
        int ret;
 
-       if (!ai->built_authorizer) {
-               p = au->buf;
-               end = p + sizeof(au->buf);
-               ceph_encode_8(&p, 1);
-               ret = ceph_entity_name_encode(ac->name, &p, end - 8);
-               if (ret < 0)
-                       goto bad;
-               ceph_decode_need(&p, end, sizeof(u64), bad2);
-               ceph_encode_64(&p, ac->global_id);
-               au->buf_len = p - (void *)au->buf;
-               ai->built_authorizer = true;
-               dout("built authorizer len %d\n", au->buf_len);
+       au = kmalloc(sizeof(*au), GFP_NOFS);
+       if (!au)
+               return -ENOMEM;
+
+       au->base.destroy = ceph_auth_none_destroy_authorizer;
+
+       ret = ceph_auth_none_build_authorizer(ac, au);
+       if (ret) {
+               kfree(au);
+               return ret;
        }
 
        auth->authorizer = (struct ceph_authorizer *) au;
@@ -92,17 +112,6 @@ static int ceph_auth_none_create_authorizer(
        auth->authorizer_reply_buf_len = sizeof (au->reply_buf);
 
        return 0;
-
-bad2:
-       ret = -ERANGE;
-bad:
-       return ret;
-}
-
-static void ceph_auth_none_destroy_authorizer(struct ceph_auth_client *ac,
-                                     struct ceph_authorizer *a)
-{
-       /* nothing to do */
 }
 
 static const struct ceph_auth_client_ops ceph_auth_none_ops = {
@@ -114,7 +123,6 @@ static const struct ceph_auth_client_ops ceph_auth_none_ops = {
        .build_request = build_request,
        .handle_reply = handle_reply,
        .create_authorizer = ceph_auth_none_create_authorizer,
-       .destroy_authorizer = ceph_auth_none_destroy_authorizer,
 };
 
 int ceph_auth_none_init(struct ceph_auth_client *ac)
@@ -127,7 +135,6 @@ int ceph_auth_none_init(struct ceph_auth_client *ac)
                return -ENOMEM;
 
        xi->starting = true;
-       xi->built_authorizer = false;
 
        ac->protocol = CEPH_AUTH_NONE;
        ac->private = xi;
index 059a3ce..6202153 100644 (file)
@@ -12,6 +12,7 @@
  */
 
 struct ceph_none_authorizer {
+       struct ceph_authorizer base;
        char buf[128];
        int buf_len;
        char reply_buf[0];
@@ -19,8 +20,6 @@ struct ceph_none_authorizer {
 
 struct ceph_auth_none_info {
        bool starting;
-       bool built_authorizer;
-       struct ceph_none_authorizer au;   /* we only need one; it's static */
 };
 
 int ceph_auth_none_init(struct ceph_auth_client *ac);
index 9e43a31..a0905f0 100644 (file)
@@ -565,6 +565,14 @@ static int ceph_x_handle_reply(struct ceph_auth_client *ac, int result,
        return -EAGAIN;
 }
 
+static void ceph_x_destroy_authorizer(struct ceph_authorizer *a)
+{
+       struct ceph_x_authorizer *au = (void *)a;
+
+       ceph_x_authorizer_cleanup(au);
+       kfree(au);
+}
+
 static int ceph_x_create_authorizer(
        struct ceph_auth_client *ac, int peer_type,
        struct ceph_auth_handshake *auth)
@@ -581,6 +589,8 @@ static int ceph_x_create_authorizer(
        if (!au)
                return -ENOMEM;
 
+       au->base.destroy = ceph_x_destroy_authorizer;
+
        ret = ceph_x_build_authorizer(ac, th, au);
        if (ret) {
                kfree(au);
@@ -643,16 +653,6 @@ static int ceph_x_verify_authorizer_reply(struct ceph_auth_client *ac,
        return ret;
 }
 
-static void ceph_x_destroy_authorizer(struct ceph_auth_client *ac,
-                                     struct ceph_authorizer *a)
-{
-       struct ceph_x_authorizer *au = (void *)a;
-
-       ceph_x_authorizer_cleanup(au);
-       kfree(au);
-}
-
-
 static void ceph_x_reset(struct ceph_auth_client *ac)
 {
        struct ceph_x_info *xi = ac->private;
@@ -770,7 +770,6 @@ static const struct ceph_auth_client_ops ceph_x_ops = {
        .create_authorizer = ceph_x_create_authorizer,
        .update_authorizer = ceph_x_update_authorizer,
        .verify_authorizer_reply = ceph_x_verify_authorizer_reply,
-       .destroy_authorizer = ceph_x_destroy_authorizer,
        .invalidate_authorizer = ceph_x_invalidate_authorizer,
        .reset =  ceph_x_reset,
        .destroy = ceph_x_destroy,
index 40b1a3c..21a5af9 100644 (file)
@@ -26,6 +26,7 @@ struct ceph_x_ticket_handler {
 
 
 struct ceph_x_authorizer {
+       struct ceph_authorizer base;
        struct ceph_crypto_key session_key;
        struct ceph_buffer *buf;
        unsigned int service;
index 32355d9..40a53a7 100644 (file)
@@ -1087,10 +1087,8 @@ static void put_osd(struct ceph_osd *osd)
        dout("put_osd %p %d -> %d\n", osd, atomic_read(&osd->o_ref),
             atomic_read(&osd->o_ref) - 1);
        if (atomic_dec_and_test(&osd->o_ref)) {
-               struct ceph_auth_client *ac = osd->o_osdc->client->monc.auth;
-
                if (osd->o_auth.authorizer)
-                       ceph_auth_destroy_authorizer(ac, osd->o_auth.authorizer);
+                       ceph_auth_destroy_authorizer(osd->o_auth.authorizer);
                kfree(osd);
        }
 }
@@ -2984,7 +2982,7 @@ static struct ceph_auth_handshake *get_authorizer(struct ceph_connection *con,
        struct ceph_auth_handshake *auth = &o->o_auth;
 
        if (force_new && auth->authorizer) {
-               ceph_auth_destroy_authorizer(ac, auth->authorizer);
+               ceph_auth_destroy_authorizer(auth->authorizer);
                auth->authorizer = NULL;
        }
        if (!auth->authorizer) {
index 6324bc9..c749033 100644 (file)
@@ -1741,7 +1741,7 @@ static inline void net_timestamp_set(struct sk_buff *skb)
                        __net_timestamp(SKB);           \
        }                                               \
 
-bool is_skb_forwardable(struct net_device *dev, struct sk_buff *skb)
+bool is_skb_forwardable(const struct net_device *dev, const struct sk_buff *skb)
 {
        unsigned int len;
 
@@ -2815,7 +2815,7 @@ static netdev_features_t harmonize_features(struct sk_buff *skb,
 
        if (skb->ip_summed != CHECKSUM_NONE &&
            !can_checksum_protocol(features, type)) {
-               features &= ~NETIF_F_CSUM_MASK;
+               features &= ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
        } else if (illegal_highdma(skb->dev, skb)) {
                features &= ~NETIF_F_SG;
        }
@@ -3469,6 +3469,7 @@ u32 rps_cpu_mask __read_mostly;
 EXPORT_SYMBOL(rps_cpu_mask);
 
 struct static_key rps_needed __read_mostly;
+EXPORT_SYMBOL(rps_needed);
 
 static struct rps_dev_flow *
 set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
@@ -3955,9 +3956,11 @@ sch_handle_ingress(struct sk_buff *skb, struct packet_type **pt_prev, int *ret,
                break;
        case TC_ACT_SHOT:
                qdisc_qstats_cpu_drop(cl->q);
+               kfree_skb(skb);
+               return NULL;
        case TC_ACT_STOLEN:
        case TC_ACT_QUEUED:
-               kfree_skb(skb);
+               consume_skb(skb);
                return NULL;
        case TC_ACT_REDIRECT:
                /* skb_mac_header check was done by cls/act_bpf, so
@@ -4982,8 +4985,8 @@ bool sk_busy_loop(struct sock *sk, int nonblock)
                        netpoll_poll_unlock(have);
                }
                if (rc > 0)
-                       NET_ADD_STATS_BH(sock_net(sk),
-                                        LINUX_MIB_BUSYPOLLRXPACKETS, rc);
+                       __NET_ADD_STATS(sock_net(sk),
+                                       LINUX_MIB_BUSYPOLLRXPACKETS, rc);
                local_bh_enable();
 
                if (rc == LL_FLUSH_FAILED)
@@ -6720,6 +6723,10 @@ static netdev_features_t netdev_fix_features(struct net_device *dev,
                features &= ~NETIF_F_TSO6;
        }
 
+       /* TSO with IPv4 ID mangling requires IPv4 TSO be enabled */
+       if ((features & NETIF_F_TSO_MANGLEID) && !(features & NETIF_F_TSO))
+               features &= ~NETIF_F_TSO_MANGLEID;
+
        /* TSO ECN requires that TSO is present as well. */
        if ((features & NETIF_F_ALL_TSO) == NETIF_F_TSO_ECN)
                features &= ~NETIF_F_TSO_ECN;
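
On the TC_ACT_SHOT/TC_ACT_STOLEN split in sch_handle_ingress() above, the two free helpers encode different semantics; a comment-style summary of the convention as these hunks use it:

/* convention assumed by the sch_handle_ingress() change:
 *   kfree_skb(skb)   - packet was dropped; drop monitors
 *                      (e.g. the skb:kfree_skb tracepoint) see it
 *   consume_skb(skb) - packet left this path intentionally
 *                      (stolen/queued elsewhere), not a drop
 */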
index 218e5de..71c2a1f 100644 (file)
@@ -1344,6 +1344,21 @@ struct bpf_scratchpad {
 
 static DEFINE_PER_CPU(struct bpf_scratchpad, bpf_sp);
 
+static inline int bpf_try_make_writable(struct sk_buff *skb,
+                                       unsigned int write_len)
+{
+       int err;
+
+       if (!skb_cloned(skb))
+               return 0;
+       if (skb_clone_writable(skb, write_len))
+               return 0;
+       err = pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
+       if (!err)
+               bpf_compute_data_end(skb);
+       return err;
+}
+
 static u64 bpf_skb_store_bytes(u64 r1, u64 r2, u64 r3, u64 r4, u64 flags)
 {
        struct bpf_scratchpad *sp = this_cpu_ptr(&bpf_sp);
@@ -1366,7 +1381,7 @@ static u64 bpf_skb_store_bytes(u64 r1, u64 r2, u64 r3, u64 r4, u64 flags)
         */
        if (unlikely((u32) offset > 0xffff || len > sizeof(sp->buff)))
                return -EFAULT;
-       if (unlikely(skb_try_make_writable(skb, offset + len)))
+       if (unlikely(bpf_try_make_writable(skb, offset + len)))
                return -EFAULT;
 
        ptr = skb_header_pointer(skb, offset, len, sp->buff);
@@ -1444,7 +1459,7 @@ static u64 bpf_l3_csum_replace(u64 r1, u64 r2, u64 from, u64 to, u64 flags)
                return -EINVAL;
        if (unlikely((u32) offset > 0xffff))
                return -EFAULT;
-       if (unlikely(skb_try_make_writable(skb, offset + sizeof(sum))))
+       if (unlikely(bpf_try_make_writable(skb, offset + sizeof(sum))))
                return -EFAULT;
 
        ptr = skb_header_pointer(skb, offset, sizeof(sum), &sum);
@@ -1499,7 +1514,7 @@ static u64 bpf_l4_csum_replace(u64 r1, u64 r2, u64 from, u64 to, u64 flags)
                return -EINVAL;
        if (unlikely((u32) offset > 0xffff))
                return -EFAULT;
-       if (unlikely(skb_try_make_writable(skb, offset + sizeof(sum))))
+       if (unlikely(bpf_try_make_writable(skb, offset + sizeof(sum))))
                return -EFAULT;
 
        ptr = skb_header_pointer(skb, offset, sizeof(sum), &sum);
@@ -1699,12 +1714,15 @@ static u64 bpf_skb_vlan_push(u64 r1, u64 r2, u64 vlan_tci, u64 r4, u64 r5)
 {
        struct sk_buff *skb = (struct sk_buff *) (long) r1;
        __be16 vlan_proto = (__force __be16) r2;
+       int ret;
 
        if (unlikely(vlan_proto != htons(ETH_P_8021Q) &&
                     vlan_proto != htons(ETH_P_8021AD)))
                vlan_proto = htons(ETH_P_8021Q);
 
-       return skb_vlan_push(skb, vlan_proto, vlan_tci);
+       ret = skb_vlan_push(skb, vlan_proto, vlan_tci);
+       bpf_compute_data_end(skb);
+       return ret;
 }
 
 const struct bpf_func_proto bpf_skb_vlan_push_proto = {
@@ -1720,8 +1738,11 @@ EXPORT_SYMBOL_GPL(bpf_skb_vlan_push_proto);
 static u64 bpf_skb_vlan_pop(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
 {
        struct sk_buff *skb = (struct sk_buff *) (long) r1;
+       int ret;
 
-       return skb_vlan_pop(skb);
+       ret = skb_vlan_pop(skb);
+       bpf_compute_data_end(skb);
+       return ret;
 }
 
 const struct bpf_func_proto bpf_skb_vlan_pop_proto = {
@@ -2066,8 +2087,12 @@ static bool __is_valid_access(int off, int size, enum bpf_access_type type)
 static bool sk_filter_is_valid_access(int off, int size,
                                      enum bpf_access_type type)
 {
-       if (off == offsetof(struct __sk_buff, tc_classid))
+       switch (off) {
+       case offsetof(struct __sk_buff, tc_classid):
+       case offsetof(struct __sk_buff, data):
+       case offsetof(struct __sk_buff, data_end):
                return false;
+       }
 
        if (type == BPF_WRITE) {
                switch (off) {
@@ -2215,6 +2240,20 @@ static u32 bpf_net_convert_ctx_access(enum bpf_access_type type, int dst_reg,
                        *insn++ = BPF_LDX_MEM(BPF_H, dst_reg, src_reg, ctx_off);
                break;
 
+       case offsetof(struct __sk_buff, data):
+               *insn++ = BPF_LDX_MEM(bytes_to_bpf_size(FIELD_SIZEOF(struct sk_buff, data)),
+                                     dst_reg, src_reg,
+                                     offsetof(struct sk_buff, data));
+               break;
+
+       case offsetof(struct __sk_buff, data_end):
+               ctx_off -= offsetof(struct __sk_buff, data_end);
+               ctx_off += offsetof(struct sk_buff, cb);
+               ctx_off += offsetof(struct bpf_skb_data_end, data_end);
+               *insn++ = BPF_LDX_MEM(bytes_to_bpf_size(sizeof(void *)),
+                                     dst_reg, src_reg, ctx_off);
+               break;
+
        case offsetof(struct __sk_buff, tc_index):
 #ifdef CONFIG_NET_SCHED
                BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, tc_index) != 2);
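
The bpf_compute_data_end() calls sprinkled through these hunks re-cache the packet end pointer in skb->cb whenever a helper may have moved skb->data. The helper itself is added elsewhere in this series; it presumably reduces to something like:

/* sketch, assuming the definition added alongside these hunks */
static inline void bpf_compute_data_end(struct sk_buff *skb)
{
        struct bpf_skb_data_end *cb = (struct bpf_skb_data_end *)skb->cb;

        cb->data_end = skb->data + skb_headlen(skb);
}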
index e640462..f96ee8b 100644 (file)
@@ -25,9 +25,9 @@
 
 
 static inline int
-gnet_stats_copy(struct gnet_dump *d, int type, void *buf, int size)
+gnet_stats_copy(struct gnet_dump *d, int type, void *buf, int size, int padattr)
 {
-       if (nla_put(d->skb, type, size, buf))
+       if (nla_put_64bit(d->skb, type, size, buf, padattr))
                goto nla_put_failure;
        return 0;
 
@@ -59,7 +59,8 @@ nla_put_failure:
  */
 int
 gnet_stats_start_copy_compat(struct sk_buff *skb, int type, int tc_stats_type,
-       int xstats_type, spinlock_t *lock, struct gnet_dump *d)
+                            int xstats_type, spinlock_t *lock,
+                            struct gnet_dump *d, int padattr)
        __acquires(lock)
 {
        memset(d, 0, sizeof(*d));
@@ -71,16 +72,17 @@ gnet_stats_start_copy_compat(struct sk_buff *skb, int type, int tc_stats_type,
        d->skb = skb;
        d->compat_tc_stats = tc_stats_type;
        d->compat_xstats = xstats_type;
+       d->padattr = padattr;
 
        if (d->tail)
-               return gnet_stats_copy(d, type, NULL, 0);
+               return gnet_stats_copy(d, type, NULL, 0, padattr);
 
        return 0;
 }
 EXPORT_SYMBOL(gnet_stats_start_copy_compat);
 
 /**
- * gnet_stats_start_copy_compat - start dumping procedure in compatibility mode
+ * gnet_stats_start_copy - start dumping procedure in compatibility mode
  * @skb: socket buffer to put statistics TLVs into
  * @type: TLV type for top level statistic TLV
  * @lock: statistics lock
@@ -94,9 +96,9 @@ EXPORT_SYMBOL(gnet_stats_start_copy_compat);
  */
 int
 gnet_stats_start_copy(struct sk_buff *skb, int type, spinlock_t *lock,
-       struct gnet_dump *d)
+                     struct gnet_dump *d, int padattr)
 {
-       return gnet_stats_start_copy_compat(skb, type, 0, 0, lock, d);
+       return gnet_stats_start_copy_compat(skb, type, 0, 0, lock, d, padattr);
 }
 EXPORT_SYMBOL(gnet_stats_start_copy);
 
@@ -169,7 +171,8 @@ gnet_stats_copy_basic(struct gnet_dump *d,
                memset(&sb, 0, sizeof(sb));
                sb.bytes = bstats.bytes;
                sb.packets = bstats.packets;
-               return gnet_stats_copy(d, TCA_STATS_BASIC, &sb, sizeof(sb));
+               return gnet_stats_copy(d, TCA_STATS_BASIC, &sb, sizeof(sb),
+                                      TCA_STATS_PAD);
        }
        return 0;
 }
@@ -208,11 +211,13 @@ gnet_stats_copy_rate_est(struct gnet_dump *d,
        }
 
        if (d->tail) {
-               res = gnet_stats_copy(d, TCA_STATS_RATE_EST, &est, sizeof(est));
+               res = gnet_stats_copy(d, TCA_STATS_RATE_EST, &est, sizeof(est),
+                                     TCA_STATS_PAD);
                if (res < 0 || est.bps == r->bps)
                        return res;
                /* emit 64bit stats only if needed */
-               return gnet_stats_copy(d, TCA_STATS_RATE_EST64, r, sizeof(*r));
+               return gnet_stats_copy(d, TCA_STATS_RATE_EST64, r, sizeof(*r),
+                                      TCA_STATS_PAD);
        }
 
        return 0;
@@ -286,7 +291,8 @@ gnet_stats_copy_queue(struct gnet_dump *d,
 
        if (d->tail)
                return gnet_stats_copy(d, TCA_STATS_QUEUE,
-                                      &qstats, sizeof(qstats));
+                                      &qstats, sizeof(qstats),
+                                      TCA_STATS_PAD);
 
        return 0;
 }
@@ -316,7 +322,8 @@ gnet_stats_copy_app(struct gnet_dump *d, void *st, int len)
        }
 
        if (d->tail)
-               return gnet_stats_copy(d, TCA_STATS_APP, st, len);
+               return gnet_stats_copy(d, TCA_STATS_APP, st, len,
+                                      TCA_STATS_PAD);
 
        return 0;
 
@@ -347,12 +354,12 @@ gnet_stats_finish_copy(struct gnet_dump *d)
 
        if (d->compat_tc_stats)
                if (gnet_stats_copy(d, d->compat_tc_stats, &d->tc_stats,
-                       sizeof(d->tc_stats)) < 0)
+                                   sizeof(d->tc_stats), d->padattr) < 0)
                        return -1;
 
        if (d->compat_xstats && d->xstats) {
                if (gnet_stats_copy(d, d->compat_xstats, d->xstats,
-                       d->xstats_len) < 0)
+                                   d->xstats_len, d->padattr) < 0)
                        return -1;
        }
 
index 6a395d4..29dd8cc 100644 (file)
@@ -1857,7 +1857,8 @@ static int neightbl_fill_info(struct sk_buff *skb, struct neigh_table *tbl,
                        ndst.ndts_table_fulls           += st->table_fulls;
                }
 
-               if (nla_put(skb, NDTA_STATS, sizeof(ndst), &ndst))
+               if (nla_put_64bit(skb, NDTA_STATS, sizeof(ndst), &ndst,
+                                 NDTA_PAD))
                        goto nla_put_failure;
        }
 
index 2bf8329..14d0934 100644 (file)
@@ -162,7 +162,8 @@ static int softnet_seq_show(struct seq_file *seq, void *v)
                   "%08x %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x\n",
                   sd->processed, sd->dropped, sd->time_squeeze, 0,
                   0, 0, 0, 0, /* was fastroute */
-                  sd->cpu_collision, sd->received_rps, flow_limit_count);
+                  0,   /* was cpu_collision */
+                  sd->received_rps, flow_limit_count);
        return 0;
 }
 
index 20999aa..8604ae2 100644 (file)
@@ -3472,7 +3472,6 @@ xmit_more:
                                     pkt_dev->odevname, ret);
                pkt_dev->errors++;
                /* fallthru */
-       case NETDEV_TX_LOCKED:
        case NETDEV_TX_BUSY:
                /* Retry it next time */
                atomic_dec(&(pkt_dev->skb->users));
index 5ec059d..d471f09 100644 (file)
@@ -825,17 +825,17 @@ static inline int rtnl_vfinfo_size(const struct net_device *dev,
                         nla_total_size(sizeof(struct ifla_vf_link_state)) +
                         nla_total_size(sizeof(struct ifla_vf_rss_query_en)) +
                         /* IFLA_VF_STATS_RX_PACKETS */
-                        nla_total_size(sizeof(__u64)) +
+                        nla_total_size_64bit(sizeof(__u64)) +
                         /* IFLA_VF_STATS_TX_PACKETS */
-                        nla_total_size(sizeof(__u64)) +
+                        nla_total_size_64bit(sizeof(__u64)) +
                         /* IFLA_VF_STATS_RX_BYTES */
-                        nla_total_size(sizeof(__u64)) +
+                        nla_total_size_64bit(sizeof(__u64)) +
                         /* IFLA_VF_STATS_TX_BYTES */
-                        nla_total_size(sizeof(__u64)) +
+                        nla_total_size_64bit(sizeof(__u64)) +
                         /* IFLA_VF_STATS_BROADCAST */
-                        nla_total_size(sizeof(__u64)) +
+                        nla_total_size_64bit(sizeof(__u64)) +
                         /* IFLA_VF_STATS_MULTICAST */
-                        nla_total_size(sizeof(__u64)) +
+                        nla_total_size_64bit(sizeof(__u64)) +
                         nla_total_size(sizeof(struct ifla_vf_trust)));
                return size;
        } else
@@ -876,7 +876,7 @@ static noinline size_t if_nlmsg_size(const struct net_device *dev,
               + nla_total_size(IFNAMSIZ) /* IFLA_IFNAME */
               + nla_total_size(IFALIASZ) /* IFLA_IFALIAS */
               + nla_total_size(IFNAMSIZ) /* IFLA_QDISC */
-              + nla_total_size(sizeof(struct rtnl_link_ifmap))
+              + nla_total_size_64bit(sizeof(struct rtnl_link_ifmap))
               + nla_total_size(sizeof(struct rtnl_link_stats))
               + nla_total_size_64bit(sizeof(struct rtnl_link_stats64))
               + nla_total_size(MAX_ADDR_LEN) /* IFLA_ADDRESS */
@@ -1153,18 +1153,18 @@ static noinline_for_stack int rtnl_fill_vfinfo(struct sk_buff *skb,
                nla_nest_cancel(skb, vfinfo);
                return -EMSGSIZE;
        }
-       if (nla_put_u64(skb, IFLA_VF_STATS_RX_PACKETS,
-                       vf_stats.rx_packets) ||
-           nla_put_u64(skb, IFLA_VF_STATS_TX_PACKETS,
-                       vf_stats.tx_packets) ||
-           nla_put_u64(skb, IFLA_VF_STATS_RX_BYTES,
-                       vf_stats.rx_bytes) ||
-           nla_put_u64(skb, IFLA_VF_STATS_TX_BYTES,
-                       vf_stats.tx_bytes) ||
-           nla_put_u64(skb, IFLA_VF_STATS_BROADCAST,
-                       vf_stats.broadcast) ||
-           nla_put_u64(skb, IFLA_VF_STATS_MULTICAST,
-                       vf_stats.multicast))
+       if (nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_PACKETS,
+                             vf_stats.rx_packets, IFLA_VF_STATS_PAD) ||
+           nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_PACKETS,
+                             vf_stats.tx_packets, IFLA_VF_STATS_PAD) ||
+           nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_BYTES,
+                             vf_stats.rx_bytes, IFLA_VF_STATS_PAD) ||
+           nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_BYTES,
+                             vf_stats.tx_bytes, IFLA_VF_STATS_PAD) ||
+           nla_put_u64_64bit(skb, IFLA_VF_STATS_BROADCAST,
+                             vf_stats.broadcast, IFLA_VF_STATS_PAD) ||
+           nla_put_u64_64bit(skb, IFLA_VF_STATS_MULTICAST,
+                             vf_stats.multicast, IFLA_VF_STATS_PAD))
                return -EMSGSIZE;
        nla_nest_end(skb, vfstats);
        nla_nest_end(skb, vf);
@@ -1181,7 +1181,7 @@ static int rtnl_fill_link_ifmap(struct sk_buff *skb, struct net_device *dev)
                .dma         = dev->dma,
                .port        = dev->if_port,
        };
-       if (nla_put(skb, IFLA_MAP, sizeof(map), &map))
+       if (nla_put_64bit(skb, IFLA_MAP, sizeof(map), &map, IFLA_PAD))
                return -EMSGSIZE;
 
        return 0;
@@ -3444,13 +3444,21 @@ out:
        return err;
 }
 
+static bool stats_attr_valid(unsigned int mask, int attrid, int idxattr)
+{
+       return (mask & IFLA_STATS_FILTER_BIT(attrid)) &&
+              (!idxattr || idxattr == attrid);
+}
+
 static int rtnl_fill_statsinfo(struct sk_buff *skb, struct net_device *dev,
                               int type, u32 pid, u32 seq, u32 change,
-                              unsigned int flags, unsigned int filter_mask)
+                              unsigned int flags, unsigned int filter_mask,
+                              int *idxattr, int *prividx)
 {
        struct if_stats_msg *ifsm;
        struct nlmsghdr *nlh;
        struct nlattr *attr;
+       int s_prividx = *prividx;
 
        ASSERT_RTNL();
 
@@ -3462,7 +3470,7 @@ static int rtnl_fill_statsinfo(struct sk_buff *skb, struct net_device *dev,
        ifsm->ifindex = dev->ifindex;
        ifsm->filter_mask = filter_mask;
 
-       if (filter_mask & IFLA_STATS_FILTER_BIT(IFLA_STATS_LINK_64)) {
+       if (stats_attr_valid(filter_mask, IFLA_STATS_LINK_64, *idxattr)) {
                struct rtnl_link_stats64 *sp;
 
                attr = nla_reserve_64bit(skb, IFLA_STATS_LINK_64,
@@ -3475,12 +3483,36 @@ static int rtnl_fill_statsinfo(struct sk_buff *skb, struct net_device *dev,
                dev_get_stats(dev, sp);
        }
 
+       if (stats_attr_valid(filter_mask, IFLA_STATS_LINK_XSTATS, *idxattr)) {
+               const struct rtnl_link_ops *ops = dev->rtnl_link_ops;
+
+               if (ops && ops->fill_linkxstats) {
+                       int err;
+
+                       *idxattr = IFLA_STATS_LINK_XSTATS;
+                       attr = nla_nest_start(skb,
+                                             IFLA_STATS_LINK_XSTATS);
+                       if (!attr)
+                               goto nla_put_failure;
+
+                       err = ops->fill_linkxstats(skb, dev, prividx);
+                       nla_nest_end(skb, attr);
+                       if (err)
+                               goto nla_put_failure;
+                       *idxattr = 0;
+               }
+       }
+
        nlmsg_end(skb, nlh);
 
        return 0;
 
 nla_put_failure:
-       nlmsg_cancel(skb, nlh);
+       /* not a multi message or no progress means a real error */
+       if (!(flags & NLM_F_MULTI) || s_prividx == *prividx)
+               nlmsg_cancel(skb, nlh);
+       else
+               nlmsg_end(skb, nlh);
 
        return -EMSGSIZE;
 }
@@ -3494,17 +3526,28 @@ static size_t if_nlmsg_stats_size(const struct net_device *dev,
 {
        size_t size = 0;
 
-       if (filter_mask & IFLA_STATS_FILTER_BIT(IFLA_STATS_LINK_64))
+       if (stats_attr_valid(filter_mask, IFLA_STATS_LINK_64, 0))
                size += nla_total_size_64bit(sizeof(struct rtnl_link_stats64));
 
+       if (stats_attr_valid(filter_mask, IFLA_STATS_LINK_XSTATS, 0)) {
+               const struct rtnl_link_ops *ops = dev->rtnl_link_ops;
+
+               if (ops && ops->get_linkxstats_size) {
+                       size += nla_total_size(ops->get_linkxstats_size(dev));
+                       /* for IFLA_STATS_LINK_XSTATS */
+                       size += nla_total_size(0);
+               }
+       }
+
        return size;
 }
 
 static int rtnl_stats_get(struct sk_buff *skb, struct nlmsghdr *nlh)
 {
        struct net *net = sock_net(skb->sk);
-       struct if_stats_msg *ifsm;
        struct net_device *dev = NULL;
+       int idxattr = 0, prividx = 0;
+       struct if_stats_msg *ifsm;
        struct sk_buff *nskb;
        u32 filter_mask;
        int err;
@@ -3528,7 +3571,7 @@ static int rtnl_stats_get(struct sk_buff *skb, struct nlmsghdr *nlh)
 
        err = rtnl_fill_statsinfo(nskb, dev, RTM_NEWSTATS,
                                  NETLINK_CB(skb).portid, nlh->nlmsg_seq, 0,
-                                 0, filter_mask);
+                                 0, filter_mask, &idxattr, &prividx);
        if (err < 0) {
                /* -EMSGSIZE implies BUG in if_nlmsg_stats_size */
                WARN_ON(err == -EMSGSIZE);
@@ -3542,18 +3585,19 @@ static int rtnl_stats_get(struct sk_buff *skb, struct nlmsghdr *nlh)
 
 static int rtnl_stats_dump(struct sk_buff *skb, struct netlink_callback *cb)
 {
+       int h, s_h, err, s_idx, s_idxattr, s_prividx;
        struct net *net = sock_net(skb->sk);
+       unsigned int flags = NLM_F_MULTI;
        struct if_stats_msg *ifsm;
-       int h, s_h;
-       int idx = 0, s_idx;
-       struct net_device *dev;
        struct hlist_head *head;
-       unsigned int flags = NLM_F_MULTI;
+       struct net_device *dev;
        u32 filter_mask = 0;
-       int err;
+       int idx = 0;
 
        s_h = cb->args[0];
        s_idx = cb->args[1];
+       s_idxattr = cb->args[2];
+       s_prividx = cb->args[3];
 
        cb->seq = net->dev_base_seq;
 
@@ -3571,7 +3615,8 @@ static int rtnl_stats_dump(struct sk_buff *skb, struct netlink_callback *cb)
                        err = rtnl_fill_statsinfo(skb, dev, RTM_NEWSTATS,
                                                  NETLINK_CB(cb->skb).portid,
                                                  cb->nlh->nlmsg_seq, 0,
-                                                 flags, filter_mask);
+                                                 flags, filter_mask,
+                                                 &s_idxattr, &s_prividx);
                        /* If we ran out of room on the first message,
                         * we're in trouble
                         */
@@ -3579,13 +3624,16 @@ static int rtnl_stats_dump(struct sk_buff *skb, struct netlink_callback *cb)
 
                        if (err < 0)
                                goto out;
-
+                       s_prividx = 0;
+                       s_idxattr = 0;
                        nl_dump_check_consistent(cb, nlmsg_hdr(skb));
 cont:
                        idx++;
                }
        }
 out:
+       cb->args[3] = s_prividx;
+       cb->args[2] = s_idxattr;
        cb->args[1] = idx;
        cb->args[0] = h;
 
index 7ff7788..5586be9 100644 (file)
@@ -3080,8 +3080,7 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
        unsigned int headroom;
        unsigned int len = head_skb->len;
        __be16 proto;
-       bool csum;
-       int sg = !!(features & NETIF_F_SG);
+       bool csum, sg;
        int nfrags = skb_shinfo(head_skb)->nr_frags;
        int err = -ENOMEM;
        int i = 0;
@@ -3093,15 +3092,19 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
        if (unlikely(!proto))
                return ERR_PTR(-EINVAL);
 
+       sg = !!(features & NETIF_F_SG);
        csum = !!can_checksum_protocol(features, proto);
 
        /* GSO partial only requires that we trim off any excess that
         * doesn't fit into an MSS sized block, so take care of that
         * now.
         */
-       if (features & NETIF_F_GSO_PARTIAL) {
+       if (sg && csum && (features & NETIF_F_GSO_PARTIAL)) {
                partial_segs = len / mss;
-               mss *= partial_segs;
+               if (partial_segs > 1)
+                       mss *= partial_segs;
+               else
+                       partial_segs = 0;
        }
 
        headroom = skb_headroom(head_skb);
@@ -4622,3 +4625,245 @@ failure:
        return NULL;
 }
 EXPORT_SYMBOL(alloc_skb_with_frags);
+
+/* carve out the first off bytes from skb when off < headlen */
+static int pskb_carve_inside_header(struct sk_buff *skb, const u32 off,
+                                   const int headlen, gfp_t gfp_mask)
+{
+       int i;
+       int size = skb_end_offset(skb);
+       int new_hlen = headlen - off;
+       u8 *data;
+       int doff = 0;
+
+       size = SKB_DATA_ALIGN(size);
+
+       if (skb_pfmemalloc(skb))
+               gfp_mask |= __GFP_MEMALLOC;
+       data = kmalloc_reserve(size +
+                              SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
+                              gfp_mask, NUMA_NO_NODE, NULL);
+       if (!data)
+               return -ENOMEM;
+
+       size = SKB_WITH_OVERHEAD(ksize(data));
+
+       /* Copy real data, and all frags */
+       skb_copy_from_linear_data_offset(skb, off, data, new_hlen);
+       skb->len -= off;
+
+       memcpy((struct skb_shared_info *)(data + size),
+              skb_shinfo(skb),
+              offsetof(struct skb_shared_info,
+                       frags[skb_shinfo(skb)->nr_frags]));
+       if (skb_cloned(skb)) {
+               /* drop the old head gracefully */
+               if (skb_orphan_frags(skb, gfp_mask)) {
+                       kfree(data);
+                       return -ENOMEM;
+               }
+               for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
+                       skb_frag_ref(skb, i);
+               if (skb_has_frag_list(skb))
+                       skb_clone_fraglist(skb);
+               skb_release_data(skb);
+       } else {
+               /* we can reuse the existing refcount - all we did was
+                * relocate values
+                */
+               skb_free_head(skb);
+       }
+
+       doff = (data - skb->head);
+       skb->head = data;
+       skb->data = data;
+       skb->head_frag = 0;
+#ifdef NET_SKBUFF_DATA_USES_OFFSET
+       skb->end = size;
+       doff = 0;
+#else
+       skb->end = skb->head + size;
+#endif
+       skb_set_tail_pointer(skb, skb_headlen(skb));
+       skb_headers_offset_update(skb, 0);
+       skb->cloned = 0;
+       skb->hdr_len = 0;
+       skb->nohdr = 0;
+       atomic_set(&skb_shinfo(skb)->dataref, 1);
+
+       return 0;
+}
+
+static int pskb_carve(struct sk_buff *skb, const u32 off, gfp_t gfp);
+
+/* carve out the first eat bytes from skb's frag_list. May recurse into
+ * pskb_carve()
+ */
+static int pskb_carve_frag_list(struct sk_buff *skb,
+                               struct skb_shared_info *shinfo, int eat,
+                               gfp_t gfp_mask)
+{
+       struct sk_buff *list = shinfo->frag_list;
+       struct sk_buff *clone = NULL;
+       struct sk_buff *insp = NULL;
+
+       do {
+               if (!list) {
+                       pr_err("Not enough bytes to eat. Want %d\n", eat);
+                       return -EFAULT;
+               }
+               if (list->len <= eat) {
+                       /* Eaten as whole. */
+                       eat -= list->len;
+                       list = list->next;
+                       insp = list;
+               } else {
+                       /* Eaten partially. */
+                       if (skb_shared(list)) {
+                               clone = skb_clone(list, gfp_mask);
+                               if (!clone)
+                                       return -ENOMEM;
+                               insp = list->next;
+                               list = clone;
+                       } else {
+                               /* This may be pulled without problems. */
+                               insp = list;
+                       }
+                       if (pskb_carve(list, eat, gfp_mask) < 0) {
+                               kfree_skb(clone);
+                               return -ENOMEM;
+                       }
+                       break;
+               }
+       } while (eat);
+
+       /* Free pulled out fragments. */
+       while ((list = shinfo->frag_list) != insp) {
+               shinfo->frag_list = list->next;
+               kfree_skb(list);
+       }
+       /* And insert new clone at head. */
+       if (clone) {
+               clone->next = list;
+               shinfo->frag_list = clone;
+       }
+       return 0;
+}
+
+/* carve off the first off bytes from skb. The split point (off) is in
+ * the non-linear part of skb
+ */
+static int pskb_carve_inside_nonlinear(struct sk_buff *skb, const u32 off,
+                                      int pos, gfp_t gfp_mask)
+{
+       int i, k = 0;
+       int size = skb_end_offset(skb);
+       u8 *data;
+       const int nfrags = skb_shinfo(skb)->nr_frags;
+       struct skb_shared_info *shinfo;
+       int doff = 0;
+
+       size = SKB_DATA_ALIGN(size);
+
+       if (skb_pfmemalloc(skb))
+               gfp_mask |= __GFP_MEMALLOC;
+       data = kmalloc_reserve(size +
+                              SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
+                              gfp_mask, NUMA_NO_NODE, NULL);
+       if (!data)
+               return -ENOMEM;
+
+       size = SKB_WITH_OVERHEAD(ksize(data));
+
+       memcpy((struct skb_shared_info *)(data + size),
+              skb_shinfo(skb), offsetof(struct skb_shared_info,
+                                        frags[skb_shinfo(skb)->nr_frags]));
+       if (skb_orphan_frags(skb, gfp_mask)) {
+               kfree(data);
+               return -ENOMEM;
+       }
+       shinfo = (struct skb_shared_info *)(data + size);
+       for (i = 0; i < nfrags; i++) {
+               int fsize = skb_frag_size(&skb_shinfo(skb)->frags[i]);
+
+               if (pos + fsize > off) {
+                       shinfo->frags[k] = skb_shinfo(skb)->frags[i];
+
+                       if (pos < off) {
+                               /* Split frag.
+                                * We have two variants in this case:
+                                * 1. Move all the frag to the second
+                                *    part, if it is possible. E.g.
+                                *    this approach is mandatory for TUX,
+                                *    where splitting is expensive.
+                                * 2. Split accurately, which is what we
+                                *    do here.
+                                */
+                               shinfo->frags[0].page_offset += off - pos;
+                               skb_frag_size_sub(&shinfo->frags[0], off - pos);
+                       }
+                       skb_frag_ref(skb, i);
+                       k++;
+               }
+               pos += fsize;
+       }
+       shinfo->nr_frags = k;
+       if (skb_has_frag_list(skb))
+               skb_clone_fraglist(skb);
+
+       if (k == 0) {
+               /* split line is in frag list */
+               pskb_carve_frag_list(skb, shinfo, off - pos, gfp_mask);
+       }
+       skb_release_data(skb);
+
+       doff = (data - skb->head);
+       skb->head = data;
+       skb->head_frag = 0;
+       skb->data = data;
+#ifdef NET_SKBUFF_DATA_USES_OFFSET
+       skb->end = size;
+       doff = 0;
+#else
+       skb->end = skb->head + size;
+#endif
+       skb_reset_tail_pointer(skb);
+       skb_headers_offset_update(skb, 0);
+       skb->cloned   = 0;
+       skb->hdr_len  = 0;
+       skb->nohdr    = 0;
+       skb->len -= off;
+       skb->data_len = skb->len;
+       atomic_set(&skb_shinfo(skb)->dataref, 1);
+       return 0;
+}
+
+/* remove len bytes from the beginning of the skb */
+static int pskb_carve(struct sk_buff *skb, const u32 len, gfp_t gfp)
+{
+       int headlen = skb_headlen(skb);
+
+       if (len < headlen)
+               return pskb_carve_inside_header(skb, len, headlen, gfp);
+       else
+               return pskb_carve_inside_nonlinear(skb, len, headlen, gfp);
+}
+
+/* Extract to_copy bytes starting at off from skb, and return them in
+ * a new skb
+ */
+struct sk_buff *pskb_extract(struct sk_buff *skb, int off,
+                            int to_copy, gfp_t gfp)
+{
+       struct sk_buff  *clone = skb_clone(skb, gfp);
+
+       if (!clone)
+               return NULL;
+
+       if (pskb_carve(clone, off, gfp) < 0 ||
+           pskb_trim(clone, to_copy)) {
+               kfree_skb(clone);
+               return NULL;
+       }
+       return clone;
+}
+EXPORT_SYMBOL(pskb_extract);
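
A short usage sketch for the new helper (offset and length are illustrative): it clones the skb, carves the head off the clone, then trims the tail, leaving the original untouched:

/* sketch: grab a 200-byte window starting 64 bytes into skb */
struct sk_buff *win = pskb_extract(skb, 64, 200, GFP_ATOMIC);

if (!win)
        return -ENOMEM;         /* original skb is still intact */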
index e16a5db..08bf97e 100644 (file)
@@ -1655,6 +1655,17 @@ void sock_wfree(struct sk_buff *skb)
 }
 EXPORT_SYMBOL(sock_wfree);
 
+/* This variant of sock_wfree() is used by TCP,
+ * since it sets SOCK_USE_WRITE_QUEUE.
+ */
+void __sock_wfree(struct sk_buff *skb)
+{
+       struct sock *sk = skb->sk;
+
+       if (atomic_sub_and_test(skb->truesize, &sk->sk_wmem_alloc))
+               __sk_free(sk);
+}
+
 void skb_set_owner_w(struct sk_buff *skb, struct sock *sk)
 {
        skb_orphan(skb);
@@ -1677,8 +1688,21 @@ void skb_set_owner_w(struct sk_buff *skb, struct sock *sk)
 }
 EXPORT_SYMBOL(skb_set_owner_w);
 
+/* This helper is used by netem, as it can hold packets in its
+ * delay queue. We want to allow the owner socket to send more
+ * packets, as if they were already TX completed by a typical driver.
+ * But we also want to keep skb->sk set because some packet schedulers
+ * rely on it (sch_fq for example). So we set skb->truesize to a small
+ * amount (1) and decrease sk_wmem_alloc accordingly.
+ */
 void skb_orphan_partial(struct sk_buff *skb)
 {
+       /* If this skb is a TCP pure ACK or has already been through here,
+        * there is nothing to do. 2 is already a very small truesize.
+        */
+       if (skb->truesize <= 2)
+               return;
+
        /* TCP stack sets skb->ooo_okay based on sk_wmem_alloc,
         * so we do not completely orphan skb, but transfer all
         * accounted bytes but one, to avoid unexpected reorders.
@@ -2019,33 +2043,27 @@ static void __release_sock(struct sock *sk)
        __releases(&sk->sk_lock.slock)
        __acquires(&sk->sk_lock.slock)
 {
-       struct sk_buff *skb = sk->sk_backlog.head;
+       struct sk_buff *skb, *next;
 
-       do {
+       while ((skb = sk->sk_backlog.head) != NULL) {
                sk->sk_backlog.head = sk->sk_backlog.tail = NULL;
-               bh_unlock_sock(sk);
 
-               do {
-                       struct sk_buff *next = skb->next;
+               spin_unlock_bh(&sk->sk_lock.slock);
 
+               do {
+                       next = skb->next;
                        prefetch(next);
                        WARN_ON_ONCE(skb_dst_is_noref(skb));
                        skb->next = NULL;
                        sk_backlog_rcv(sk, skb);
 
-                       /*
-                        * We are in process context here with softirqs
-                        * disabled, use cond_resched_softirq() to preempt.
-                        * This is safe to do because we've taken the backlog
-                        * queue private:
-                        */
-                       cond_resched_softirq();
+                       cond_resched();
 
                        skb = next;
                } while (skb != NULL);
 
-               bh_lock_sock(sk);
-       } while ((skb = sk->sk_backlog.head) != NULL);
+               spin_lock_bh(&sk->sk_lock.slock);
+       }
 
        /*
         * Doing the zeroing here guarantees we cannot loop forever
@@ -2054,6 +2072,13 @@ static void __release_sock(struct sock *sk)
        sk->sk_backlog.len = 0;
 }
 
+void __sk_flush_backlog(struct sock *sk)
+{
+       spin_lock_bh(&sk->sk_lock.slock);
+       __release_sock(sk);
+       spin_unlock_bh(&sk->sk_lock.slock);
+}
+
 /**
  * sk_wait_data - wait for data to arrive at sk_receive_queue
  * @sk:    sock to wait on
index ca9e35b..6b10573 100644 (file)
@@ -120,7 +120,7 @@ static size_t sock_diag_nlmsg_size(void)
 {
        return NLMSG_ALIGN(sizeof(struct inet_diag_msg)
               + nla_total_size(sizeof(u8)) /* INET_DIAG_PROTOCOL */
-              + nla_total_size(sizeof(struct tcp_info))); /* INET_DIAG_INFO */
+              + nla_total_size_64bit(sizeof(struct tcp_info))); /* INET_DIAG_INFO */
 }
 
 static void sock_diag_broadcast_destroy_work(struct work_struct *work)
index b0e28d2..0c55ffb 100644 (file)
@@ -198,9 +198,9 @@ struct dccp_mib {
 };
 
 DECLARE_SNMP_STAT(struct dccp_mib, dccp_statistics);
-#define DCCP_INC_STATS(field)      SNMP_INC_STATS(dccp_statistics, field)
-#define DCCP_INC_STATS_BH(field)    SNMP_INC_STATS_BH(dccp_statistics, field)
-#define DCCP_DEC_STATS(field)      SNMP_DEC_STATS(dccp_statistics, field)
+#define DCCP_INC_STATS(field)  SNMP_INC_STATS(dccp_statistics, field)
+#define __DCCP_INC_STATS(field)        __SNMP_INC_STATS(dccp_statistics, field)
+#define DCCP_DEC_STATS(field)  SNMP_DEC_STATS(dccp_statistics, field)
 
 /*
  *     Checksumming routines
index 3bd14e8..ba34718 100644 (file)
@@ -359,7 +359,7 @@ send_sync:
                goto discard;
        }
 
-       DCCP_INC_STATS_BH(DCCP_MIB_INERRS);
+       DCCP_INC_STATS(DCCP_MIB_INERRS);
 discard:
        __kfree_skb(skb);
        return 0;
index f6d183f..5c7e413 100644 (file)
@@ -205,7 +205,7 @@ void dccp_req_err(struct sock *sk, u64 seq)
         * socket here.
         */
        if (!between48(seq, dccp_rsk(req)->dreq_iss, dccp_rsk(req)->dreq_gss)) {
-               NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS);
+               __NET_INC_STATS(net, LINUX_MIB_OUTOFWINDOWICMPS);
        } else {
                /*
                 * Still in RESPOND, just remove it silently.
@@ -247,7 +247,7 @@ static void dccp_v4_err(struct sk_buff *skb, u32 info)
 
        if (skb->len < offset + sizeof(*dh) ||
            skb->len < offset + __dccp_basic_hdr_len(dh)) {
-               ICMP_INC_STATS_BH(net, ICMP_MIB_INERRORS);
+               __ICMP_INC_STATS(net, ICMP_MIB_INERRORS);
                return;
        }
 
@@ -256,7 +256,7 @@ static void dccp_v4_err(struct sk_buff *skb, u32 info)
                                       iph->saddr, ntohs(dh->dccph_sport),
                                       inet_iif(skb));
        if (!sk) {
-               ICMP_INC_STATS_BH(net, ICMP_MIB_INERRORS);
+               __ICMP_INC_STATS(net, ICMP_MIB_INERRORS);
                return;
        }
 
@@ -273,7 +273,7 @@ static void dccp_v4_err(struct sk_buff *skb, u32 info)
         * servers this needs to be solved differently.
         */
        if (sock_owned_by_user(sk))
-               NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS);
+               __NET_INC_STATS(net, LINUX_MIB_LOCKDROPPEDICMPS);
 
        if (sk->sk_state == DCCP_CLOSED)
                goto out;
@@ -281,7 +281,7 @@ static void dccp_v4_err(struct sk_buff *skb, u32 info)
        dp = dccp_sk(sk);
        if ((1 << sk->sk_state) & ~(DCCPF_REQUESTING | DCCPF_LISTEN) &&
            !between48(seq, dp->dccps_awl, dp->dccps_awh)) {
-               NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS);
+               __NET_INC_STATS(net, LINUX_MIB_OUTOFWINDOWICMPS);
                goto out;
        }
 
@@ -318,7 +318,7 @@ static void dccp_v4_err(struct sk_buff *skb, u32 info)
        case DCCP_REQUESTING:
        case DCCP_RESPOND:
                if (!sock_owned_by_user(sk)) {
-                       DCCP_INC_STATS_BH(DCCP_MIB_ATTEMPTFAILS);
+                       __DCCP_INC_STATS(DCCP_MIB_ATTEMPTFAILS);
                        sk->sk_err = err;
 
                        sk->sk_error_report(sk);
@@ -431,11 +431,11 @@ struct sock *dccp_v4_request_recv_sock(const struct sock *sk,
        return newsk;
 
 exit_overflow:
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
+       __NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
 exit_nonewsk:
        dst_release(dst);
 exit:
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
+       __NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENDROPS);
        return NULL;
 put_and_exit:
        inet_csk_prepare_forced_close(newsk);
@@ -462,7 +462,7 @@ static struct dst_entry* dccp_v4_route_skb(struct net *net, struct sock *sk,
        security_skb_classify_flow(skb, flowi4_to_flowi(&fl4));
        rt = ip_route_output_flow(net, &fl4, sk);
        if (IS_ERR(rt)) {
-               IP_INC_STATS_BH(net, IPSTATS_MIB_OUTNOROUTES);
+               __IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
                return NULL;
        }
 
@@ -533,8 +533,8 @@ static void dccp_v4_ctl_send_reset(const struct sock *sk, struct sk_buff *rxskb)
        bh_unlock_sock(ctl_sk);
 
        if (net_xmit_eval(err) == 0) {
-               DCCP_INC_STATS_BH(DCCP_MIB_OUTSEGS);
-               DCCP_INC_STATS_BH(DCCP_MIB_OUTRSTS);
+               DCCP_INC_STATS(DCCP_MIB_OUTSEGS);
+               DCCP_INC_STATS(DCCP_MIB_OUTRSTS);
        }
 out:
         dst_release(dst);
@@ -637,7 +637,7 @@ int dccp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
 drop_and_free:
        reqsk_free(req);
 drop:
-       DCCP_INC_STATS_BH(DCCP_MIB_ATTEMPTFAILS);
+       __DCCP_INC_STATS(DCCP_MIB_ATTEMPTFAILS);
        return -1;
 }
 EXPORT_SYMBOL_GPL(dccp_v4_conn_request);
index 8ceb3ce..d176f4e 100644 (file)
@@ -80,8 +80,8 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
 
        if (skb->len < offset + sizeof(*dh) ||
            skb->len < offset + __dccp_basic_hdr_len(dh)) {
-               ICMP6_INC_STATS_BH(net, __in6_dev_get(skb->dev),
-                                  ICMP6_MIB_INERRORS);
+               __ICMP6_INC_STATS(net, __in6_dev_get(skb->dev),
+                                 ICMP6_MIB_INERRORS);
                return;
        }
 
@@ -91,8 +91,8 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
                                        inet6_iif(skb));
 
        if (!sk) {
-               ICMP6_INC_STATS_BH(net, __in6_dev_get(skb->dev),
-                                  ICMP6_MIB_INERRORS);
+               __ICMP6_INC_STATS(net, __in6_dev_get(skb->dev),
+                                 ICMP6_MIB_INERRORS);
                return;
        }
 
@@ -106,7 +106,7 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
 
        bh_lock_sock(sk);
        if (sock_owned_by_user(sk))
-               NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS);
+               __NET_INC_STATS(net, LINUX_MIB_LOCKDROPPEDICMPS);
 
        if (sk->sk_state == DCCP_CLOSED)
                goto out;
@@ -114,7 +114,7 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
        dp = dccp_sk(sk);
        if ((1 << sk->sk_state) & ~(DCCPF_REQUESTING | DCCPF_LISTEN) &&
            !between48(seq, dp->dccps_awl, dp->dccps_awh)) {
-               NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS);
+               __NET_INC_STATS(net, LINUX_MIB_OUTOFWINDOWICMPS);
                goto out;
        }
 
@@ -156,7 +156,7 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
        case DCCP_RESPOND:  /* Cannot happen.
                               It can, if SYNs are crossed. --ANK */
                if (!sock_owned_by_user(sk)) {
-                       DCCP_INC_STATS_BH(DCCP_MIB_ATTEMPTFAILS);
+                       __DCCP_INC_STATS(DCCP_MIB_ATTEMPTFAILS);
                        sk->sk_err = err;
                        /*
                         * Wake people up to see the error
@@ -277,8 +277,8 @@ static void dccp_v6_ctl_send_reset(const struct sock *sk, struct sk_buff *rxskb)
        if (!IS_ERR(dst)) {
                skb_dst_set(skb, dst);
                ip6_xmit(ctl_sk, skb, &fl6, NULL, 0);
-               DCCP_INC_STATS_BH(DCCP_MIB_OUTSEGS);
-               DCCP_INC_STATS_BH(DCCP_MIB_OUTRSTS);
+               DCCP_INC_STATS(DCCP_MIB_OUTSEGS);
+               DCCP_INC_STATS(DCCP_MIB_OUTRSTS);
                return;
        }
 
@@ -378,7 +378,7 @@ static int dccp_v6_conn_request(struct sock *sk, struct sk_buff *skb)
 drop_and_free:
        reqsk_free(req);
 drop:
-       DCCP_INC_STATS_BH(DCCP_MIB_ATTEMPTFAILS);
+       __DCCP_INC_STATS(DCCP_MIB_ATTEMPTFAILS);
        return -1;
 }
 
@@ -527,11 +527,11 @@ static struct sock *dccp_v6_request_recv_sock(const struct sock *sk,
        return newsk;
 
 out_overflow:
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
+       __NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
 out_nonewsk:
        dst_release(dst);
 out:
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
+       __NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENDROPS);
        return NULL;
 }
 
index 1994f8a..53eddf9 100644 (file)
@@ -127,7 +127,7 @@ struct sock *dccp_create_openreq_child(const struct sock *sk,
                }
                dccp_init_xmit_timers(newsk);
 
-               DCCP_INC_STATS_BH(DCCP_MIB_PASSIVEOPENS);
+               __DCCP_INC_STATS(DCCP_MIB_PASSIVEOPENS);
        }
        return newsk;
 }
index 9bce318..74d29c5 100644 (file)
@@ -253,7 +253,7 @@ out_nonsensical_length:
        return 0;
 
 out_invalid_option:
-       DCCP_INC_STATS_BH(DCCP_MIB_INVALIDOPT);
+       DCCP_INC_STATS(DCCP_MIB_INVALIDOPT);
        rc = DCCP_RESET_CODE_OPTION_ERROR;
 out_featneg_failed:
        DCCP_WARN("DCCP(%p): Option %d (len=%d) error=%u\n", sk, opt, len, rc);
index 3ef7ace..3a2c340 100644 (file)
@@ -28,7 +28,7 @@ static void dccp_write_err(struct sock *sk)
 
        dccp_send_reset(sk, DCCP_RESET_CODE_ABORTED);
        dccp_done(sk);
-       DCCP_INC_STATS_BH(DCCP_MIB_ABORTONTIMEOUT);
+       __DCCP_INC_STATS(DCCP_MIB_ABORTONTIMEOUT);
 }
 
 /* A write timeout has occurred. Process the after effects. */
@@ -100,7 +100,7 @@ static void dccp_retransmit_timer(struct sock *sk)
         * total number of retransmissions of clones of original packets.
         */
        if (icsk->icsk_retransmits == 0)
-               DCCP_INC_STATS_BH(DCCP_MIB_TIMEOUTS);
+               __DCCP_INC_STATS(DCCP_MIB_TIMEOUTS);
 
        if (dccp_retransmit_skb(sk) != 0) {
                /*
@@ -179,7 +179,7 @@ static void dccp_delack_timer(unsigned long data)
        if (sock_owned_by_user(sk)) {
                /* Try again later. */
                icsk->icsk_ack.blocked = 1;
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_DELAYEDACKLOCKED);
+               __NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKLOCKED);
                sk_reset_timer(sk, &icsk->icsk_delack_timer,
                               jiffies + TCP_DELACK_MIN);
                goto out;
@@ -209,7 +209,7 @@ static void dccp_delack_timer(unsigned long data)
                        icsk->icsk_ack.ato = TCP_ATO_MIN;
                }
                dccp_send_ack(sk);
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_DELAYEDACKS);
+               __NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKS);
        }
 out:
        bh_unlock_sock(sk);
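
The _BH-to-__ renames running through these DCCP hunks follow the new SNMP counter convention; a comment-style summary of the rule these call sites assume:

/* counter convention these renames assume:
 *   DCCP_INC_STATS(f)   - safe from any context (preempt-safe ops)
 *   __DCCP_INC_STATS(f) - caller is already in a non-preemptible
 *                         context, e.g. softirq
 */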
index 3b6750f..5ea8a40 100644 (file)
@@ -666,6 +666,78 @@ static void dsa_slave_get_strings(struct net_device *dev,
        }
 }
 
+static void dsa_cpu_port_get_ethtool_stats(struct net_device *dev,
+                                          struct ethtool_stats *stats,
+                                          uint64_t *data)
+{
+       struct dsa_switch_tree *dst = dev->dsa_ptr;
+       struct dsa_switch *ds = dst->ds[0];
+       s8 cpu_port = dst->cpu_port;
+       int count = 0;
+
+       if (dst->master_ethtool_ops.get_sset_count) {
+               count = dst->master_ethtool_ops.get_sset_count(dev,
+                                                              ETH_SS_STATS);
+               dst->master_ethtool_ops.get_ethtool_stats(dev, stats, data);
+       }
+
+       if (ds->drv->get_ethtool_stats)
+               ds->drv->get_ethtool_stats(ds, cpu_port, data + count);
+}
+
+static int dsa_cpu_port_get_sset_count(struct net_device *dev, int sset)
+{
+       struct dsa_switch_tree *dst = dev->dsa_ptr;
+       struct dsa_switch *ds = dst->ds[0];
+       int count = 0;
+
+       if (dst->master_ethtool_ops.get_sset_count)
+               count += dst->master_ethtool_ops.get_sset_count(dev, sset);
+
+       if (sset == ETH_SS_STATS && ds->drv->get_sset_count)
+               count += ds->drv->get_sset_count(ds);
+
+       return count;
+}
+
+static void dsa_cpu_port_get_strings(struct net_device *dev,
+                                    uint32_t stringset, uint8_t *data)
+{
+       struct dsa_switch_tree *dst = dev->dsa_ptr;
+       struct dsa_switch *ds = dst->ds[0];
+       s8 cpu_port = dst->cpu_port;
+       int len = ETH_GSTRING_LEN;
+       int mcount = 0, count;
+       unsigned int i;
+       uint8_t pfx[4];
+       uint8_t *ndata;
+
+       snprintf(pfx, sizeof(pfx), "p%.2d", cpu_port);
+       /* We do not want this to be NUL-terminated, since it is a prefix */
+       pfx[sizeof(pfx) - 1] = '_';
+
+       if (dst->master_ethtool_ops.get_sset_count) {
+               mcount = dst->master_ethtool_ops.get_sset_count(dev,
+                                                               ETH_SS_STATS);
+               dst->master_ethtool_ops.get_strings(dev, stringset, data);
+       }
+
+       if (stringset == ETH_SS_STATS && ds->drv->get_strings) {
+               ndata = data + mcount * len;
+               /* This function copies ETH_GSTRING_LEN bytes; we will mangle
+                * the output afterwards to prepend the CPU port prefix we
+                * constructed earlier
+                */
+               ds->drv->get_strings(ds, cpu_port, ndata);
+               count = ds->drv->get_sset_count(ds);
+               for (i = 0; i < count; i++) {
+                       memmove(ndata + (i * len + sizeof(pfx)),
+                               ndata + i * len, len - sizeof(pfx));
+                       memcpy(ndata + i * len, pfx, sizeof(pfx));
+               }
+       }
+}
+
 static void dsa_slave_get_ethtool_stats(struct net_device *dev,
                                        struct ethtool_stats *stats,
                                        uint64_t *data)
@@ -821,6 +893,8 @@ static const struct ethtool_ops dsa_slave_ethtool_ops = {
        .get_eee                = dsa_slave_get_eee,
 };
 
+static struct ethtool_ops dsa_cpu_port_ethtool_ops;
+
 static const struct net_device_ops dsa_slave_netdev_ops = {
        .ndo_open               = dsa_slave_open,
        .ndo_stop               = dsa_slave_close,
@@ -1038,6 +1112,7 @@ int dsa_slave_create(struct dsa_switch *ds, struct device *parent,
                     int port, char *name)
 {
        struct net_device *master = ds->dst->master_netdev;
+       struct dsa_switch_tree *dst = ds->dst;
        struct net_device *slave_dev;
        struct dsa_slave_priv *p;
        int ret;
@@ -1049,6 +1124,19 @@ int dsa_slave_create(struct dsa_switch *ds, struct device *parent,
 
        slave_dev->features = master->vlan_features;
        slave_dev->ethtool_ops = &dsa_slave_ethtool_ops;
+       if (master->ethtool_ops != &dsa_cpu_port_ethtool_ops) {
+               memcpy(&dst->master_ethtool_ops, master->ethtool_ops,
+                      sizeof(struct ethtool_ops));
+               memcpy(&dsa_cpu_port_ethtool_ops, &dst->master_ethtool_ops,
+                      sizeof(struct ethtool_ops));
+               dsa_cpu_port_ethtool_ops.get_sset_count =
+                                       dsa_cpu_port_get_sset_count;
+               dsa_cpu_port_ethtool_ops.get_ethtool_stats =
+                                       dsa_cpu_port_get_ethtool_stats;
+               dsa_cpu_port_ethtool_ops.get_strings =
+                                       dsa_cpu_port_get_strings;
+               master->ethtool_ops = &dsa_cpu_port_ethtool_ops;
+       }
        eth_hw_addr_inherit(slave_dev, master);
        slave_dev->priv_flags |= IFF_NO_QUEUE;
        slave_dev->netdev_ops = &dsa_slave_netdev_ops;
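The hunk above is a classic ops-interposition pattern: snapshot the master's ethtool_ops into dst->master_ethtool_ops, clone them into a wrapper, override only the three stats callbacks (which chain back to the saved originals, as the functions earlier show), and repoint the master exactly once. A stripped-down sketch of the pattern outside the kernel (all names hypothetical):

#include <stdio.h>

struct ops {
	int (*get_count)(const void *dev);
};

static struct ops saved_ops;   /* snapshot of the original ops */
static struct ops wrapper_ops; /* originals plus our overrides */

static int wrapped_get_count(const void *dev)
{
	int count = 0;

	if (saved_ops.get_count)       /* chain to the original */
		count += saved_ops.get_count(dev);
	return count + 1;              /* plus our own extra entry */
}

static void install_wrapper(const struct ops **dev_ops)
{
	if (*dev_ops == &wrapper_ops)  /* install only once */
		return;
	saved_ops = **dev_ops;         /* keep originals reachable */
	wrapper_ops = saved_ops;
	wrapper_ops.get_count = wrapped_get_count;
	*dev_ops = &wrapper_ops;       /* repoint the device */
}

static int real_get_count(const void *dev) { (void)dev; return 3; }

int main(void)
{
	struct ops real = { .get_count = real_get_count };
	const struct ops *dev_ops = &real;

	install_wrapper(&dev_ops);
	printf("%d\n", dev_ops->get_count(NULL)); /* prints 4 */
	return 0;
}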
index b4e17a7..5ac7789 100644
@@ -41,24 +41,12 @@ static inline u32 ieee802154_addr_hash(const struct ieee802154_addr *a)
                return (((__force u64)a->extended_addr) >> 32) ^
                        (((__force u64)a->extended_addr) & 0xffffffff);
        case IEEE802154_ADDR_SHORT:
-               return (__force u32)(a->short_addr);
+               return (__force u32)(a->short_addr + (a->pan_id << 16));
        default:
                return 0;
        }
 }
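The one-line hash change above folds the PAN ID into the upper half of the 32-bit key, so two nodes that share a 16-bit short address but sit in different PANs no longer collide by construction. A minimal restatement of the key derivation (plain C sketch, not the kernel's __le16 types):

#include <stdint.h>

/* 32-bit hash key: PAN ID in the high half, short address in the low. */
static inline uint32_t short_addr_key(uint16_t short_addr, uint16_t pan_id)
{
	return (uint32_t)short_addr + ((uint32_t)pan_id << 16);
}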
 
-/* private device info */
-struct lowpan_dev_info {
-       struct net_device       *wdev; /* wpan device ptr */
-       u16                     fragment_tag;
-};
-
-static inline struct
-lowpan_dev_info *lowpan_dev_info(const struct net_device *dev)
-{
-       return (struct lowpan_dev_info *)lowpan_priv(dev)->priv;
-}
-
 int lowpan_frag_rcv(struct sk_buff *skb, const u8 frag_type);
 void lowpan_net_frag_exit(void);
 int lowpan_net_frag_init(void);
index 0023c90..dd085db 100644
@@ -148,7 +148,7 @@ static int lowpan_newlink(struct net *src_net, struct net_device *ldev,
                return -EBUSY;
        }
 
-       lowpan_dev_info(ldev)->wdev = wdev;
+       lowpan_802154_dev(ldev)->wdev = wdev;
        /* Set the lowpan hardware address to the wpan hardware address. */
        memcpy(ldev->dev_addr, wdev->dev_addr, IEEE802154_ADDR_LEN);
        /* We need headroom for possible wpan_dev_hard_header call and tailroom
@@ -173,7 +173,7 @@ static int lowpan_newlink(struct net *src_net, struct net_device *ldev,
 
 static void lowpan_dellink(struct net_device *ldev, struct list_head *head)
 {
-       struct net_device *wdev = lowpan_dev_info(ldev)->wdev;
+       struct net_device *wdev = lowpan_802154_dev(ldev)->wdev;
 
        ASSERT_RTNL();
 
@@ -184,7 +184,7 @@ static void lowpan_dellink(struct net_device *ldev, struct list_head *head)
 
 static struct rtnl_link_ops lowpan_link_ops __read_mostly = {
        .kind           = "lowpan",
-       .priv_size      = LOWPAN_PRIV_SIZE(sizeof(struct lowpan_dev_info)),
+       .priv_size      = LOWPAN_PRIV_SIZE(sizeof(struct lowpan_802154_dev)),
        .setup          = lowpan_setup,
        .newlink        = lowpan_newlink,
        .dellink        = lowpan_dellink,
index d4353fa..e459afd 100644
@@ -84,7 +84,7 @@ static struct sk_buff*
 lowpan_alloc_frag(struct sk_buff *skb, int size,
                  const struct ieee802154_hdr *master_hdr, bool frag1)
 {
-       struct net_device *wdev = lowpan_dev_info(skb->dev)->wdev;
+       struct net_device *wdev = lowpan_802154_dev(skb->dev)->wdev;
        struct sk_buff *frag;
        int rc;
 
@@ -148,8 +148,8 @@ lowpan_xmit_fragmented(struct sk_buff *skb, struct net_device *ldev,
        int frag_cap, frag_len, payload_cap, rc;
        int skb_unprocessed, skb_offset;
 
-       frag_tag = htons(lowpan_dev_info(ldev)->fragment_tag);
-       lowpan_dev_info(ldev)->fragment_tag++;
+       frag_tag = htons(lowpan_802154_dev(ldev)->fragment_tag);
+       lowpan_802154_dev(ldev)->fragment_tag++;
 
        frag_hdr[0] = LOWPAN_DISPATCH_FRAG1 | ((dgram_size >> 8) & 0x07);
        frag_hdr[1] = dgram_size & 0xff;
@@ -208,7 +208,7 @@ err:
 static int lowpan_header(struct sk_buff *skb, struct net_device *ldev,
                         u16 *dgram_size, u16 *dgram_offset)
 {
-       struct wpan_dev *wpan_dev = lowpan_dev_info(ldev)->wdev->ieee802154_ptr;
+       struct wpan_dev *wpan_dev = lowpan_802154_dev(ldev)->wdev->ieee802154_ptr;
        struct ieee802154_addr sa, da;
        struct ieee802154_mac_cb *cb = mac_cb_init(skb);
        struct lowpan_addr_info info;
@@ -248,8 +248,8 @@ static int lowpan_header(struct sk_buff *skb, struct net_device *ldev,
                cb->ackreq = wpan_dev->ackreq;
        }
 
-       return wpan_dev_hard_header(skb, lowpan_dev_info(ldev)->wdev, &da, &sa,
-                                   0);
+       return wpan_dev_hard_header(skb, lowpan_802154_dev(ldev)->wdev, &da,
+                                   &sa, 0);
 }
 
 netdev_tx_t lowpan_xmit(struct sk_buff *skb, struct net_device *ldev)
@@ -283,7 +283,7 @@ netdev_tx_t lowpan_xmit(struct sk_buff *skb, struct net_device *ldev)
        max_single = ieee802154_max_payload(&wpan_hdr);
 
        if (skb_tail_pointer(skb) - skb_network_header(skb) <= max_single) {
-               skb->dev = lowpan_dev_info(ldev)->wdev;
+               skb->dev = lowpan_802154_dev(ldev)->wdev;
                ldev->stats.tx_packets++;
                ldev->stats.tx_bytes += dgram_size;
                return dev_queue_xmit(skb);
index 3503c38..d3cbb32 100644
 
 #include "ieee802154.h"
 
-static int nla_put_hwaddr(struct sk_buff *msg, int type, __le64 hwaddr)
+static int nla_put_hwaddr(struct sk_buff *msg, int type, __le64 hwaddr,
+                         int padattr)
 {
-       return nla_put_u64(msg, type, swab64((__force u64)hwaddr));
+       return nla_put_u64_64bit(msg, type, swab64((__force u64)hwaddr),
+                                padattr);
 }
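The nla_put_u64() to nla_put_u64_64bit() conversions through this file are about alignment: a plain u64 attribute can land with its payload on a 4-byte boundary inside the message, so the 64-bit variant first emits a zero-length pad attribute (IEEE802154_ATTR_PAD here) whenever that would happen, keeping the 8-byte payload naturally aligned. A self-contained sketch of the idea against a toy message buffer (the struct and helpers are hypothetical, not the in-tree implementation):

#include <stdint.h>
#include <string.h>

#define NLA_HDRLEN 4

struct msg { uint8_t buf[256]; size_t len; };

/* Append one attribute: 4-byte header (len, type), then payload. */
static int put_attr(struct msg *m, uint16_t type, const void *data,
		    uint16_t dlen)
{
	uint16_t alen = NLA_HDRLEN + dlen;

	if (m->len + alen > sizeof(m->buf))
		return -1;
	memcpy(m->buf + m->len, &alen, 2);
	memcpy(m->buf + m->len + 2, &type, 2);
	if (dlen)
		memcpy(m->buf + m->len + NLA_HDRLEN, data, dlen);
	m->len += (alen + 3) & ~3u;	/* attributes are 4-byte padded */
	return 0;
}

/* Emit a zero-length pad attribute first whenever the 8-byte payload
 * would otherwise start misaligned -- the idea behind nla_put_u64_64bit().
 */
static int put_u64_aligned(struct msg *m, uint16_t type, uint64_t v,
			   uint16_t padattr)
{
	if ((m->len + NLA_HDRLEN) % 8 &&
	    put_attr(m, padattr, NULL, 0) < 0)
		return -1;
	return put_attr(m, type, &v, sizeof(v));
}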
 
 static __le64 nla_get_hwaddr(const struct nlattr *nla)
@@ -623,7 +625,8 @@ ieee802154_llsec_fill_key_id(struct sk_buff *msg,
 
                if (desc->device_addr.mode == IEEE802154_ADDR_LONG &&
                    nla_put_hwaddr(msg, IEEE802154_ATTR_HW_ADDR,
-                                  desc->device_addr.extended_addr))
+                                  desc->device_addr.extended_addr,
+                                  IEEE802154_ATTR_PAD))
                        return -EMSGSIZE;
        }
 
@@ -638,7 +641,7 @@ ieee802154_llsec_fill_key_id(struct sk_buff *msg,
 
        if (desc->mode == IEEE802154_SCF_KEY_HW_INDEX &&
            nla_put_hwaddr(msg, IEEE802154_ATTR_LLSEC_KEY_SOURCE_EXTENDED,
-                          desc->extended_source))
+                          desc->extended_source, IEEE802154_ATTR_PAD))
                return -EMSGSIZE;
 
        return 0;
@@ -1063,7 +1066,8 @@ ieee802154_nl_fill_dev(struct sk_buff *msg, u32 portid, u32 seq,
            nla_put_shortaddr(msg, IEEE802154_ATTR_PAN_ID, desc->pan_id) ||
            nla_put_shortaddr(msg, IEEE802154_ATTR_SHORT_ADDR,
                              desc->short_addr) ||
-           nla_put_hwaddr(msg, IEEE802154_ATTR_HW_ADDR, desc->hwaddr) ||
+           nla_put_hwaddr(msg, IEEE802154_ATTR_HW_ADDR, desc->hwaddr,
+                          IEEE802154_ATTR_PAD) ||
            nla_put_u32(msg, IEEE802154_ATTR_LLSEC_FRAME_COUNTER,
                        desc->frame_counter) ||
            nla_put_u8(msg, IEEE802154_ATTR_LLSEC_DEV_OVERRIDE,
@@ -1167,7 +1171,8 @@ ieee802154_nl_fill_devkey(struct sk_buff *msg, u32 portid, u32 seq,
 
        if (nla_put_string(msg, IEEE802154_ATTR_DEV_NAME, dev->name) ||
            nla_put_u32(msg, IEEE802154_ATTR_DEV_INDEX, dev->ifindex) ||
-           nla_put_hwaddr(msg, IEEE802154_ATTR_HW_ADDR, devaddr) ||
+           nla_put_hwaddr(msg, IEEE802154_ATTR_HW_ADDR, devaddr,
+                          IEEE802154_ATTR_PAD) ||
            nla_put_u32(msg, IEEE802154_ATTR_LLSEC_FRAME_COUNTER,
                        devkey->frame_counter) ||
            ieee802154_llsec_fill_key_id(msg, &devkey->key_id))
index 6140720..ca207db 100644
@@ -813,7 +813,8 @@ nl802154_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flags,
 
        if (nla_put_u32(msg, NL802154_ATTR_WPAN_PHY, rdev->wpan_phy_idx) ||
            nla_put_u32(msg, NL802154_ATTR_IFTYPE, wpan_dev->iftype) ||
-           nla_put_u64(msg, NL802154_ATTR_WPAN_DEV, wpan_dev_id(wpan_dev)) ||
+           nla_put_u64_64bit(msg, NL802154_ATTR_WPAN_DEV,
+                             wpan_dev_id(wpan_dev), NL802154_ATTR_PAD) ||
            nla_put_u32(msg, NL802154_ATTR_GENERATION,
                        rdev->devlist_generation ^
                        (cfg802154_rdev_list_generation << 2)))
@@ -1077,6 +1078,11 @@ static int nl802154_set_pan_id(struct sk_buff *skb, struct genl_info *info)
        if (netif_running(dev))
                return -EBUSY;
 
+       if (wpan_dev->lowpan_dev) {
+               if (netif_running(wpan_dev->lowpan_dev))
+                       return -EBUSY;
+       }
+
        /* don't change address fields on monitor */
        if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR ||
            !info->attrs[NL802154_ATTR_PAN_ID])
@@ -1108,6 +1114,11 @@ static int nl802154_set_short_addr(struct sk_buff *skb, struct genl_info *info)
        if (netif_running(dev))
                return -EBUSY;
 
+       if (wpan_dev->lowpan_dev) {
+               if (netif_running(wpan_dev->lowpan_dev))
+                       return -EBUSY;
+       }
+
        /* don't change address fields on monitor */
        if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR ||
            !info->attrs[NL802154_ATTR_SHORT_ADDR])
index c34c754..89a8cac 100644
@@ -436,7 +436,7 @@ static int arp_filter(__be32 sip, __be32 tip, struct net_device *dev)
        if (IS_ERR(rt))
                return 1;
        if (rt->dst.dev != dev) {
-               NET_INC_STATS_BH(net, LINUX_MIB_ARPFILTER);
+               __NET_INC_STATS(net, LINUX_MIB_ARPFILTER);
                flag = 1;
        }
        ip_rt_put(rt);
index 8a9246d..ef2ebeb 100644
@@ -110,6 +110,7 @@ struct fib_table *fib_new_table(struct net *net, u32 id)
        hlist_add_head_rcu(&tb->tb_hlist, &net->ipv4.fib_table_hash[h]);
        return tb;
 }
+EXPORT_SYMBOL_GPL(fib_new_table);
 
 /* caller must hold either rtnl or rcu read lock */
 struct fib_table *fib_get_table(struct net *net, u32 id)
@@ -904,7 +905,11 @@ void fib_del_ifaddr(struct in_ifaddr *ifa, struct in_ifaddr *iprim)
        if (ifa->ifa_flags & IFA_F_SECONDARY) {
                prim = inet_ifa_byprefix(in_dev, any, ifa->ifa_mask);
                if (!prim) {
-                       pr_warn("%s: bug: prim == NULL\n", __func__);
+                       /* if the device has been deleted, we don't perform
+                        * address promotion
+                        */
+                       if (!in_dev->dead)
+                               pr_warn("%s: bug: prim == NULL\n", __func__);
                        return;
                }
                if (iprim && iprim != prim) {
index d9c552a..d78e2ee 100644
@@ -60,6 +60,67 @@ int gre_del_protocol(const struct gre_protocol *proto, u8 version)
 }
 EXPORT_SYMBOL_GPL(gre_del_protocol);
 
+/* Fills in tpi and returns header length to be pulled. */
+int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+                    bool *csum_err)
+{
+       const struct gre_base_hdr *greh;
+       __be32 *options;
+       int hdr_len;
+
+       if (unlikely(!pskb_may_pull(skb, sizeof(struct gre_base_hdr))))
+               return -EINVAL;
+
+       greh = (struct gre_base_hdr *)skb_transport_header(skb);
+       if (unlikely(greh->flags & (GRE_VERSION | GRE_ROUTING)))
+               return -EINVAL;
+
+       tpi->flags = gre_flags_to_tnl_flags(greh->flags);
+       hdr_len = gre_calc_hlen(tpi->flags);
+
+       if (!pskb_may_pull(skb, hdr_len))
+               return -EINVAL;
+
+       greh = (struct gre_base_hdr *)skb_transport_header(skb);
+       tpi->proto = greh->protocol;
+
+       options = (__be32 *)(greh + 1);
+       if (greh->flags & GRE_CSUM) {
+               if (skb_checksum_simple_validate(skb)) {
+                       *csum_err = true;
+                       return -EINVAL;
+               }
+
+               skb_checksum_try_convert(skb, IPPROTO_GRE, 0,
+                                        null_compute_pseudo);
+               options++;
+       }
+
+       if (greh->flags & GRE_KEY) {
+               tpi->key = *options;
+               options++;
+       } else {
+               tpi->key = 0;
+       }
+       if (unlikely(greh->flags & GRE_SEQ)) {
+               tpi->seq = *options;
+               options++;
+       } else {
+               tpi->seq = 0;
+       }
+       /* WCCP version 1 and 2 protocol decoding.
+        * - Change protocol to IP
+        * - When dealing with WCCPv2, skip the extra 4 bytes in the GRE header
+        */
+       if (greh->flags == 0 && tpi->proto == htons(ETH_P_WCCP)) {
+               tpi->proto = htons(ETH_P_IP);
+               if ((*(u8 *)options & 0xF0) != 0x40)
+                       hdr_len += 4;
+       }
+       return hdr_len;
+}
+EXPORT_SYMBOL(gre_parse_header);
+
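Note the changed contract: gre_parse_header() only validates the header and reports its length, it no longer pulls it, so a caller can defer the pull until it has actually matched a tunnel (which is what the reworked __ipgre_rcv() later in this patch does). A hypothetical caller, modeled on the new gre_rcv()/ipgre_rcv() paths (sketch only):

/* Validate and size the GRE header first; pull it only once committed. */
static int example_gre_rcv(struct sk_buff *skb)
{
	struct tnl_ptk_info tpi;
	bool csum_err = false;
	int hdr_len;

	hdr_len = gre_parse_header(skb, &tpi, &csum_err);
	if (hdr_len < 0)
		goto drop;	/* truncated, bad csum, or unsupported flags */

	/* ... look up a tunnel from tpi.flags / tpi.key ... */

	if (iptunnel_pull_header(skb, hdr_len, tpi.proto, false) < 0)
		goto drop;
	return 0;
drop:
	kfree_skb(skb);
	return 0;
}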
 static int gre_rcv(struct sk_buff *skb)
 {
        const struct gre_protocol *proto;
index 6333489..38abe70 100644
@@ -363,7 +363,7 @@ static void icmp_push_reply(struct icmp_bxm *icmp_param,
                           icmp_param->data_len+icmp_param->head_len,
                           icmp_param->head_len,
                           ipc, rt, MSG_DONTWAIT) < 0) {
-               ICMP_INC_STATS_BH(sock_net(sk), ICMP_MIB_OUTERRORS);
+               __ICMP_INC_STATS(sock_net(sk), ICMP_MIB_OUTERRORS);
                ip_flush_pending_frames(sk);
        } else if ((skb = skb_peek(&sk->sk_write_queue)) != NULL) {
                struct icmphdr *icmph = icmp_hdr(skb);
@@ -744,7 +744,7 @@ static void icmp_socket_deliver(struct sk_buff *skb, u32 info)
         * avoid additional coding at protocol handlers.
         */
        if (!pskb_may_pull(skb, iph->ihl * 4 + 8)) {
-               ICMP_INC_STATS_BH(dev_net(skb->dev), ICMP_MIB_INERRORS);
+               __ICMP_INC_STATS(dev_net(skb->dev), ICMP_MIB_INERRORS);
                return;
        }
 
@@ -865,7 +865,7 @@ static bool icmp_unreach(struct sk_buff *skb)
 out:
        return true;
 out_err:
-       ICMP_INC_STATS_BH(net, ICMP_MIB_INERRORS);
+       __ICMP_INC_STATS(net, ICMP_MIB_INERRORS);
        return false;
 }
 
@@ -877,7 +877,7 @@ out_err:
 static bool icmp_redirect(struct sk_buff *skb)
 {
        if (skb->len < sizeof(struct iphdr)) {
-               ICMP_INC_STATS_BH(dev_net(skb->dev), ICMP_MIB_INERRORS);
+               __ICMP_INC_STATS(dev_net(skb->dev), ICMP_MIB_INERRORS);
                return false;
        }
 
@@ -956,7 +956,7 @@ static bool icmp_timestamp(struct sk_buff *skb)
        return true;
 
 out_err:
-       ICMP_INC_STATS_BH(dev_net(skb_dst(skb)->dev), ICMP_MIB_INERRORS);
+       __ICMP_INC_STATS(dev_net(skb_dst(skb)->dev), ICMP_MIB_INERRORS);
        return false;
 }
 
@@ -996,7 +996,7 @@ int icmp_rcv(struct sk_buff *skb)
                skb_set_network_header(skb, nh);
        }
 
-       ICMP_INC_STATS_BH(net, ICMP_MIB_INMSGS);
+       __ICMP_INC_STATS(net, ICMP_MIB_INMSGS);
 
        if (skb_checksum_simple_validate(skb))
                goto csum_error;
@@ -1006,7 +1006,7 @@ int icmp_rcv(struct sk_buff *skb)
 
        icmph = icmp_hdr(skb);
 
-       ICMPMSGIN_INC_STATS_BH(net, icmph->type);
+       ICMPMSGIN_INC_STATS(net, icmph->type);
        /*
         *      18 is the highest 'known' ICMP type. Anything else is a mystery
         *
@@ -1052,9 +1052,9 @@ drop:
        kfree_skb(skb);
        return 0;
 csum_error:
-       ICMP_INC_STATS_BH(net, ICMP_MIB_CSUMERRORS);
+       __ICMP_INC_STATS(net, ICMP_MIB_CSUMERRORS);
 error:
-       ICMP_INC_STATS_BH(net, ICMP_MIB_INERRORS);
+       __ICMP_INC_STATS(net, ICMP_MIB_INERRORS);
        goto drop;
 }
 
index ab69da2..fa8c398 100644
@@ -427,7 +427,7 @@ struct dst_entry *inet_csk_route_req(const struct sock *sk,
 route_err:
        ip_rt_put(rt);
 no_route:
-       IP_INC_STATS_BH(net, IPSTATS_MIB_OUTNOROUTES);
+       __IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
        return NULL;
 }
 EXPORT_SYMBOL_GPL(inet_csk_route_req);
@@ -466,7 +466,7 @@ route_err:
        ip_rt_put(rt);
 no_route:
        rcu_read_unlock();
-       IP_INC_STATS_BH(net, IPSTATS_MIB_OUTNOROUTES);
+       __IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
        return NULL;
 }
 EXPORT_SYMBOL_GPL(inet_csk_route_child_sock);
@@ -706,7 +706,9 @@ void inet_csk_destroy_sock(struct sock *sk)
 
        sk_refcnt_debug_release(sk);
 
+       local_bh_disable();
        percpu_counter_dec(sk->sk_prot->orphan_count);
+       local_bh_enable();
        sock_put(sk);
 }
 EXPORT_SYMBOL(inet_csk_destroy_sock);
index ad7956f..25af124 100644
@@ -220,8 +220,9 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk,
        }
 
        if ((ext & (1 << (INET_DIAG_INFO - 1))) && handler->idiag_info_size) {
-               attr = nla_reserve(skb, INET_DIAG_INFO,
-                                  handler->idiag_info_size);
+               attr = nla_reserve_64bit(skb, INET_DIAG_INFO,
+                                        handler->idiag_info_size,
+                                        INET_DIAG_PAD);
                if (!attr)
                        goto errout;
 
@@ -1078,7 +1079,9 @@ int inet_diag_handler_get_info(struct sk_buff *skb, struct sock *sk)
        }
 
        attr = handler->idiag_info_size
-               ? nla_reserve(skb, INET_DIAG_INFO, handler->idiag_info_size)
+               ? nla_reserve_64bit(skb, INET_DIAG_INFO,
+                                   handler->idiag_info_size,
+                                   INET_DIAG_PAD)
                : NULL;
        if (attr)
                info = nla_data(attr);
index fcadb67..77c20a4 100644
@@ -360,7 +360,7 @@ static int __inet_check_established(struct inet_timewait_death_row *death_row,
        __sk_nulls_add_node_rcu(sk, &head->chain);
        if (tw) {
                sk_nulls_del_node_init_rcu((struct sock *)tw);
-               NET_INC_STATS_BH(net, LINUX_MIB_TIMEWAITRECYCLED);
+               __NET_INC_STATS(net, LINUX_MIB_TIMEWAITRECYCLED);
        }
        spin_unlock(lock);
        sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
@@ -438,6 +438,7 @@ static int inet_reuseport_add_sock(struct sock *sk,
                                                     const struct sock *sk2,
                                                     bool match_wildcard))
 {
+       struct inet_bind_bucket *tb = inet_csk(sk)->icsk_bind_hash;
        struct sock *sk2;
        kuid_t uid = sock_i_uid(sk);
 
@@ -446,6 +447,7 @@ static int inet_reuseport_add_sock(struct sock *sk,
                    sk2->sk_family == sk->sk_family &&
                    ipv6_only_sock(sk2) == ipv6_only_sock(sk) &&
                    sk2->sk_bound_dev_if == sk->sk_bound_dev_if &&
+                   inet_csk(sk2)->icsk_bind_hash == tb &&
                    sk2->sk_reuseport && uid_eq(uid, sock_i_uid(sk2)) &&
                    saddr_same(sk, sk2, false))
                        return reuseport_add_sock(sk, sk2);
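The icsk_bind_hash comparison added above makes the fast path only attach sk to a reuseport group whose members really share the same bind bucket (i.e. the same local port), rather than any uid/address-compatible socket that happens to sit on the same listening-hash chain. For context, such a group is built from userspace by several listeners doing no more than this (illustrative snippet, error handling abbreviated):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create one member of a SO_REUSEPORT listener group on port 8080. */
static int make_reuseport_listener(void)
{
	struct sockaddr_in addr;
	int one = 1;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;
	setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(8080);
	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(fd, 128) < 0) {
		close(fd);
		return -1;
	}
	return fd;	/* call again from other threads/processes */
}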
@@ -479,7 +481,11 @@ int __inet_hash(struct sock *sk, struct sock *osk,
                if (err)
                        goto unlock;
        }
-       hlist_add_head_rcu(&sk->sk_node, &ilb->head);
+       if (IS_ENABLED(CONFIG_IPV6) && sk->sk_reuseport &&
+               sk->sk_family == AF_INET6)
+               hlist_add_tail_rcu(&sk->sk_node, &ilb->head);
+       else
+               hlist_add_head_rcu(&sk->sk_node, &ilb->head);
        sock_set_flag(sk, SOCK_RCU_FREE);
        sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
 unlock:
index c67f9bd..2065816 100644
@@ -94,7 +94,7 @@ static void inet_twsk_add_bind_node(struct inet_timewait_sock *tw,
 }
 
 /*
- * Enter the time wait state. This is called with locally disabled BH.
+ * Enter the time wait state.
  * Essentially we whip up a timewait bucket, copy the relevant info into it
  * from the SK, and mess with hash chains and list linkage.
  */
@@ -112,7 +112,7 @@ void __inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
         */
        bhead = &hashinfo->bhash[inet_bhashfn(twsk_net(tw), inet->inet_num,
                        hashinfo->bhash_size)];
-       spin_lock(&bhead->lock);
+       spin_lock_bh(&bhead->lock);
        tw->tw_tb = icsk->icsk_bind_hash;
        WARN_ON(!icsk->icsk_bind_hash);
        inet_twsk_add_bind_node(tw, &tw->tw_tb->owners);
@@ -138,7 +138,7 @@ void __inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
        if (__sk_nulls_del_node_init_rcu(sk))
                sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
 
-       spin_unlock(lock);
+       spin_unlock_bh(lock);
 }
 EXPORT_SYMBOL_GPL(__inet_twsk_hashdance);
 
@@ -147,9 +147,9 @@ static void tw_timer_handler(unsigned long data)
        struct inet_timewait_sock *tw = (struct inet_timewait_sock *)data;
 
        if (tw->tw_kill)
-               NET_INC_STATS_BH(twsk_net(tw), LINUX_MIB_TIMEWAITKILLED);
+               __NET_INC_STATS(twsk_net(tw), LINUX_MIB_TIMEWAITKILLED);
        else
-               NET_INC_STATS_BH(twsk_net(tw), LINUX_MIB_TIMEWAITED);
+               __NET_INC_STATS(twsk_net(tw), LINUX_MIB_TIMEWAITED);
        inet_twsk_kill(tw);
 }
 
index af18f1e..cbfb180 100644
@@ -65,8 +65,8 @@ static int ip_forward_finish(struct net *net, struct sock *sk, struct sk_buff *s
 {
        struct ip_options *opt  = &(IPCB(skb)->opt);
 
-       IP_INC_STATS_BH(net, IPSTATS_MIB_OUTFORWDATAGRAMS);
-       IP_ADD_STATS_BH(net, IPSTATS_MIB_OUTOCTETS, skb->len);
+       __IP_INC_STATS(net, IPSTATS_MIB_OUTFORWDATAGRAMS);
+       __IP_ADD_STATS(net, IPSTATS_MIB_OUTOCTETS, skb->len);
 
        if (unlikely(opt->optlen))
                ip_forward_options(skb);
@@ -157,7 +157,7 @@ sr_failed:
 
 too_many_hops:
        /* Tell the sender its packet died... */
-       IP_INC_STATS_BH(net, IPSTATS_MIB_INHDRERRORS);
+       __IP_INC_STATS(net, IPSTATS_MIB_INHDRERRORS);
        icmp_send(skb, ICMP_TIME_EXCEEDED, ICMP_EXC_TTL, 0);
 drop:
        kfree_skb(skb);
index efbd47d..bbe7f72 100644
@@ -204,14 +204,14 @@ static void ip_expire(unsigned long arg)
                goto out;
 
        ipq_kill(qp);
-       IP_INC_STATS_BH(net, IPSTATS_MIB_REASMFAILS);
+       __IP_INC_STATS(net, IPSTATS_MIB_REASMFAILS);
 
        if (!inet_frag_evicting(&qp->q)) {
                struct sk_buff *head = qp->q.fragments;
                const struct iphdr *iph;
                int err;
 
-               IP_INC_STATS_BH(net, IPSTATS_MIB_REASMTIMEOUT);
+               __IP_INC_STATS(net, IPSTATS_MIB_REASMTIMEOUT);
 
                if (!(qp->q.flags & INET_FRAG_FIRST_IN) || !qp->q.fragments)
                        goto out;
@@ -291,7 +291,7 @@ static int ip_frag_too_far(struct ipq *qp)
                struct net *net;
 
                net = container_of(qp->q.net, struct net, ipv4.frags);
-               IP_INC_STATS_BH(net, IPSTATS_MIB_REASMFAILS);
+               __IP_INC_STATS(net, IPSTATS_MIB_REASMFAILS);
        }
 
        return rc;
@@ -635,7 +635,7 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *prev,
 
        ip_send_check(iph);
 
-       IP_INC_STATS_BH(net, IPSTATS_MIB_REASMOKS);
+       __IP_INC_STATS(net, IPSTATS_MIB_REASMOKS);
        qp->q.fragments = NULL;
        qp->q.fragments_tail = NULL;
        return 0;
@@ -647,7 +647,7 @@ out_nomem:
 out_oversize:
        net_info_ratelimited("Oversized IP packet from %pI4\n", &qp->saddr);
 out_fail:
-       IP_INC_STATS_BH(net, IPSTATS_MIB_REASMFAILS);
+       __IP_INC_STATS(net, IPSTATS_MIB_REASMFAILS);
        return err;
 }
 
@@ -658,7 +658,7 @@ int ip_defrag(struct net *net, struct sk_buff *skb, u32 user)
        int vif = l3mdev_master_ifindex_rcu(dev);
        struct ipq *qp;
 
-       IP_INC_STATS_BH(net, IPSTATS_MIB_REASMREQDS);
+       __IP_INC_STATS(net, IPSTATS_MIB_REASMREQDS);
        skb_orphan(skb);
 
        /* Lookup (or create) queue header */
@@ -675,7 +675,7 @@ int ip_defrag(struct net *net, struct sk_buff *skb, u32 user)
                return ret;
        }
 
-       IP_INC_STATS_BH(net, IPSTATS_MIB_REASMFAILS);
+       __IP_INC_STATS(net, IPSTATS_MIB_REASMFAILS);
        kfree_skb(skb);
        return -ENOMEM;
 }
index eedd829..2b267e7 100644
@@ -122,125 +122,6 @@ static int ipgre_tunnel_init(struct net_device *dev);
 static int ipgre_net_id __read_mostly;
 static int gre_tap_net_id __read_mostly;
 
-static int ip_gre_calc_hlen(__be16 o_flags)
-{
-       int addend = 4;
-
-       if (o_flags & TUNNEL_CSUM)
-               addend += 4;
-       if (o_flags & TUNNEL_KEY)
-               addend += 4;
-       if (o_flags & TUNNEL_SEQ)
-               addend += 4;
-       return addend;
-}
-
-static __be16 gre_flags_to_tnl_flags(__be16 flags)
-{
-       __be16 tflags = 0;
-
-       if (flags & GRE_CSUM)
-               tflags |= TUNNEL_CSUM;
-       if (flags & GRE_ROUTING)
-               tflags |= TUNNEL_ROUTING;
-       if (flags & GRE_KEY)
-               tflags |= TUNNEL_KEY;
-       if (flags & GRE_SEQ)
-               tflags |= TUNNEL_SEQ;
-       if (flags & GRE_STRICT)
-               tflags |= TUNNEL_STRICT;
-       if (flags & GRE_REC)
-               tflags |= TUNNEL_REC;
-       if (flags & GRE_VERSION)
-               tflags |= TUNNEL_VERSION;
-
-       return tflags;
-}
-
-static __be16 tnl_flags_to_gre_flags(__be16 tflags)
-{
-       __be16 flags = 0;
-
-       if (tflags & TUNNEL_CSUM)
-               flags |= GRE_CSUM;
-       if (tflags & TUNNEL_ROUTING)
-               flags |= GRE_ROUTING;
-       if (tflags & TUNNEL_KEY)
-               flags |= GRE_KEY;
-       if (tflags & TUNNEL_SEQ)
-               flags |= GRE_SEQ;
-       if (tflags & TUNNEL_STRICT)
-               flags |= GRE_STRICT;
-       if (tflags & TUNNEL_REC)
-               flags |= GRE_REC;
-       if (tflags & TUNNEL_VERSION)
-               flags |= GRE_VERSION;
-
-       return flags;
-}
-
-static int parse_gre_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
-                           bool *csum_err)
-{
-       const struct gre_base_hdr *greh;
-       __be32 *options;
-       int hdr_len;
-
-       if (unlikely(!pskb_may_pull(skb, sizeof(struct gre_base_hdr))))
-               return -EINVAL;
-
-       greh = (struct gre_base_hdr *)skb_transport_header(skb);
-       if (unlikely(greh->flags & (GRE_VERSION | GRE_ROUTING)))
-               return -EINVAL;
-
-       tpi->flags = gre_flags_to_tnl_flags(greh->flags);
-       hdr_len = ip_gre_calc_hlen(tpi->flags);
-
-       if (!pskb_may_pull(skb, hdr_len))
-               return -EINVAL;
-
-       greh = (struct gre_base_hdr *)skb_transport_header(skb);
-       tpi->proto = greh->protocol;
-
-       options = (__be32 *)(greh + 1);
-       if (greh->flags & GRE_CSUM) {
-               if (skb_checksum_simple_validate(skb)) {
-                       *csum_err = true;
-                       return -EINVAL;
-               }
-
-               skb_checksum_try_convert(skb, IPPROTO_GRE, 0,
-                                        null_compute_pseudo);
-               options++;
-       }
-
-       if (greh->flags & GRE_KEY) {
-               tpi->key = *options;
-               options++;
-       } else {
-               tpi->key = 0;
-       }
-       if (unlikely(greh->flags & GRE_SEQ)) {
-               tpi->seq = *options;
-               options++;
-       } else {
-               tpi->seq = 0;
-       }
-       /* WCCP version 1 and 2 protocol decoding.
-        * - Change protocol to IP
-        * - When dealing with WCCPv2, Skip extra 4 bytes in GRE header
-        */
-       if (greh->flags == 0 && tpi->proto == htons(ETH_P_WCCP)) {
-               tpi->proto = htons(ETH_P_IP);
-               if ((*(u8 *)options & 0xF0) != 0x40) {
-                       hdr_len += 4;
-                       if (!pskb_may_pull(skb, hdr_len))
-                               return -EINVAL;
-               }
-       }
-       return iptunnel_pull_header(skb, hdr_len, tpi->proto, false);
-}
-
 static void ipgre_err(struct sk_buff *skb, u32 info,
                      const struct tnl_ptk_info *tpi)
 {
@@ -341,7 +222,7 @@ static void gre_err(struct sk_buff *skb, u32 info)
        struct tnl_ptk_info tpi;
        bool csum_err = false;
 
-       if (parse_gre_header(skb, &tpi, &csum_err)) {
+       if (gre_parse_header(skb, &tpi, &csum_err) < 0) {
                if (!csum_err)          /* ignore csum errors. */
                        return;
        }
@@ -379,24 +260,22 @@ static __be32 tunnel_id_to_key(__be64 x)
 #endif
 }
 
-static int ipgre_rcv(struct sk_buff *skb, const struct tnl_ptk_info *tpi)
+static int __ipgre_rcv(struct sk_buff *skb, const struct tnl_ptk_info *tpi,
+                      struct ip_tunnel_net *itn, int hdr_len, bool raw_proto)
 {
-       struct net *net = dev_net(skb->dev);
        struct metadata_dst *tun_dst = NULL;
-       struct ip_tunnel_net *itn;
        const struct iphdr *iph;
        struct ip_tunnel *tunnel;
 
-       if (tpi->proto == htons(ETH_P_TEB))
-               itn = net_generic(net, gre_tap_net_id);
-       else
-               itn = net_generic(net, ipgre_net_id);
-
        iph = ip_hdr(skb);
        tunnel = ip_tunnel_lookup(itn, skb->dev->ifindex, tpi->flags,
                                  iph->saddr, iph->daddr, tpi->key);
 
        if (tunnel) {
+               if (__iptunnel_pull_header(skb, hdr_len, tpi->proto,
+                                          raw_proto, false) < 0)
+                       goto drop;
+
                skb_pop_mac_header(skb);
                if (tunnel->collect_md) {
                        __be16 flags;
@@ -412,13 +291,41 @@ static int ipgre_rcv(struct sk_buff *skb, const struct tnl_ptk_info *tpi)
                ip_tunnel_rcv(tunnel, skb, tpi, tun_dst, log_ecn_error);
                return PACKET_RCVD;
        }
-       return PACKET_REJECT;
+       return PACKET_NEXT;
+
+drop:
+       kfree_skb(skb);
+       return PACKET_RCVD;
+}
+
+static int ipgre_rcv(struct sk_buff *skb, const struct tnl_ptk_info *tpi,
+                    int hdr_len)
+{
+       struct net *net = dev_net(skb->dev);
+       struct ip_tunnel_net *itn;
+       int res;
+
+       if (tpi->proto == htons(ETH_P_TEB))
+               itn = net_generic(net, gre_tap_net_id);
+       else
+               itn = net_generic(net, ipgre_net_id);
+
+       res = __ipgre_rcv(skb, tpi, itn, hdr_len, false);
+       if (res == PACKET_NEXT && tpi->proto == htons(ETH_P_TEB)) {
+               /* ipgre tunnels in collect metadata mode should also
+                * receive ETH_P_TEB traffic.
+                */
+               itn = net_generic(net, ipgre_net_id);
+               res = __ipgre_rcv(skb, tpi, itn, hdr_len, true);
+       }
+       return res;
 }
 
 static int gre_rcv(struct sk_buff *skb)
 {
        struct tnl_ptk_info tpi;
        bool csum_err = false;
+       int hdr_len;
 
 #ifdef CONFIG_NET_IPGRE_BROADCAST
        if (ipv4_is_multicast(ip_hdr(skb)->daddr)) {
@@ -428,10 +335,11 @@ static int gre_rcv(struct sk_buff *skb)
        }
 #endif
 
-       if (parse_gre_header(skb, &tpi, &csum_err) < 0)
+       hdr_len = gre_parse_header(skb, &tpi, &csum_err);
+       if (hdr_len < 0)
                goto drop;
 
-       if (ipgre_rcv(skb, &tpi) == PACKET_RCVD)
+       if (ipgre_rcv(skb, &tpi, hdr_len) == PACKET_RCVD)
                return 0;
 
        icmp_send(skb, ICMP_DEST_UNREACH, ICMP_PORT_UNREACH, 0);
@@ -440,49 +348,6 @@ drop:
        return 0;
 }
 
-static __sum16 gre_checksum(struct sk_buff *skb)
-{
-       __wsum csum;
-
-       if (skb->ip_summed == CHECKSUM_PARTIAL)
-               csum = lco_csum(skb);
-       else
-               csum = skb_checksum(skb, 0, skb->len, 0);
-       return csum_fold(csum);
-}
-
-static void build_header(struct sk_buff *skb, int hdr_len, __be16 flags,
-                        __be16 proto, __be32 key, __be32 seq)
-{
-       struct gre_base_hdr *greh;
-
-       skb_push(skb, hdr_len);
-
-       skb_reset_transport_header(skb);
-       greh = (struct gre_base_hdr *)skb->data;
-       greh->flags = tnl_flags_to_gre_flags(flags);
-       greh->protocol = proto;
-
-       if (flags & (TUNNEL_KEY | TUNNEL_CSUM | TUNNEL_SEQ)) {
-               __be32 *ptr = (__be32 *)(((u8 *)greh) + hdr_len - 4);
-
-               if (flags & TUNNEL_SEQ) {
-                       *ptr = seq;
-                       ptr--;
-               }
-               if (flags & TUNNEL_KEY) {
-                       *ptr = key;
-                       ptr--;
-               }
-               if (flags & TUNNEL_CSUM &&
-                   !(skb_shinfo(skb)->gso_type &
-                     (SKB_GSO_GRE | SKB_GSO_GRE_CSUM))) {
-                       *ptr = 0;
-                       *(__sum16 *)ptr = gre_checksum(skb);
-               }
-       }
-}
-
 static void __gre_xmit(struct sk_buff *skb, struct net_device *dev,
                       const struct iphdr *tnl_params,
                       __be16 proto)
@@ -493,8 +358,9 @@ static void __gre_xmit(struct sk_buff *skb, struct net_device *dev,
                tunnel->o_seqno++;
 
        /* Push GRE header. */
-       build_header(skb, tunnel->tun_hlen, tunnel->parms.o_flags,
-                    proto, tunnel->parms.o_key, htonl(tunnel->o_seqno));
+       gre_build_header(skb, tunnel->tun_hlen,
+                        tunnel->parms.o_flags, proto, tunnel->parms.o_key,
+                        htonl(tunnel->o_seqno));
 
        skb_set_inner_protocol(skb, proto);
        ip_tunnel_xmit(skb, dev, tnl_params, tnl_params->protocol);
@@ -522,7 +388,8 @@ static struct rtable *gre_get_rt(struct sk_buff *skb,
        return ip_route_output_key(net, fl);
 }
 
-static void gre_fb_xmit(struct sk_buff *skb, struct net_device *dev)
+static void gre_fb_xmit(struct sk_buff *skb, struct net_device *dev,
+                       __be16 proto)
 {
        struct ip_tunnel_info *tun_info;
        const struct ip_tunnel_key *key;
@@ -552,7 +419,7 @@ static void gre_fb_xmit(struct sk_buff *skb, struct net_device *dev)
                                          fl.saddr);
        }
 
-       tunnel_hlen = ip_gre_calc_hlen(key->tun_flags);
+       tunnel_hlen = gre_calc_hlen(key->tun_flags);
 
        min_headroom = LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len
                        + tunnel_hlen + sizeof(struct iphdr);
@@ -571,8 +438,8 @@ static void gre_fb_xmit(struct sk_buff *skb, struct net_device *dev)
                goto err_free_rt;
 
        flags = tun_info->key.tun_flags & (TUNNEL_CSUM | TUNNEL_KEY);
-       build_header(skb, tunnel_hlen, flags, htons(ETH_P_TEB),
-                    tunnel_id_to_key(tun_info->key.tun_id), 0);
+       gre_build_header(skb, tunnel_hlen, flags, proto,
+                        tunnel_id_to_key(tun_info->key.tun_id), 0);
 
        df = key->tun_flags & TUNNEL_DONT_FRAGMENT ?  htons(IP_DF) : 0;
 
@@ -612,7 +479,7 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
        const struct iphdr *tnl_params;
 
        if (tunnel->collect_md) {
-               gre_fb_xmit(skb, dev);
+               gre_fb_xmit(skb, dev, skb->protocol);
                return NETDEV_TX_OK;
        }
 
@@ -654,7 +521,7 @@ static netdev_tx_t gre_tap_xmit(struct sk_buff *skb,
        struct ip_tunnel *tunnel = netdev_priv(dev);
 
        if (tunnel->collect_md) {
-               gre_fb_xmit(skb, dev);
+               gre_fb_xmit(skb, dev, htons(ETH_P_TEB));
                return NETDEV_TX_OK;
        }
 
@@ -694,8 +561,8 @@ static int ipgre_tunnel_ioctl(struct net_device *dev,
        if (err)
                return err;
 
-       p.i_flags = tnl_flags_to_gre_flags(p.i_flags);
-       p.o_flags = tnl_flags_to_gre_flags(p.o_flags);
+       p.i_flags = gre_tnl_flags_to_gre_flags(p.i_flags);
+       p.o_flags = gre_tnl_flags_to_gre_flags(p.o_flags);
 
        if (copy_to_user(ifr->ifr_ifru.ifru_data, &p, sizeof(p)))
                return -EFAULT;
@@ -739,7 +606,7 @@ static int ipgre_header(struct sk_buff *skb, struct net_device *dev,
 
        iph = (struct iphdr *)skb_push(skb, t->hlen + sizeof(*iph));
        greh = (struct gre_base_hdr *)(iph+1);
-       greh->flags = tnl_flags_to_gre_flags(t->parms.o_flags);
+       greh->flags = gre_tnl_flags_to_gre_flags(t->parms.o_flags);
        greh->protocol = htons(type);
 
        memcpy(iph, &t->parms.iph, sizeof(struct iphdr));
@@ -840,7 +707,7 @@ static void __gre_tunnel_init(struct net_device *dev)
        int t_hlen;
 
        tunnel = netdev_priv(dev);
-       tunnel->tun_hlen = ip_gre_calc_hlen(tunnel->parms.o_flags);
+       tunnel->tun_hlen = gre_calc_hlen(tunnel->parms.o_flags);
        tunnel->parms.iph.protocol = IPPROTO_GRE;
 
        tunnel->hlen = tunnel->tun_hlen + tunnel->encap_hlen;
@@ -885,7 +752,7 @@ static int ipgre_tunnel_init(struct net_device *dev)
        netif_keep_dst(dev);
        dev->addr_len           = 4;
 
-       if (iph->daddr) {
+       if (iph->daddr && !tunnel->collect_md) {
 #ifdef CONFIG_NET_IPGRE_BROADCAST
                if (ipv4_is_multicast(iph->daddr)) {
                        if (!iph->saddr)
@@ -894,8 +761,9 @@ static int ipgre_tunnel_init(struct net_device *dev)
                        dev->header_ops = &ipgre_header_ops;
                }
 #endif
-       } else
+       } else if (!tunnel->collect_md) {
                dev->header_ops = &ipgre_header_ops;
+       }
 
        return ip_tunnel_init(dev);
 }
@@ -938,6 +806,11 @@ static int ipgre_tunnel_validate(struct nlattr *tb[], struct nlattr *data[])
        if (flags & (GRE_VERSION|GRE_ROUTING))
                return -EINVAL;
 
+       if (data[IFLA_GRE_COLLECT_METADATA] &&
+           data[IFLA_GRE_ENCAP_TYPE] &&
+           nla_get_u16(data[IFLA_GRE_ENCAP_TYPE]) != TUNNEL_ENCAP_NONE)
+               return -EINVAL;
+
        return 0;
 }
 
@@ -1155,8 +1028,10 @@ static int ipgre_fill_info(struct sk_buff *skb, const struct net_device *dev)
        struct ip_tunnel_parm *p = &t->parms;
 
        if (nla_put_u32(skb, IFLA_GRE_LINK, p->link) ||
-           nla_put_be16(skb, IFLA_GRE_IFLAGS, tnl_flags_to_gre_flags(p->i_flags)) ||
-           nla_put_be16(skb, IFLA_GRE_OFLAGS, tnl_flags_to_gre_flags(p->o_flags)) ||
+           nla_put_be16(skb, IFLA_GRE_IFLAGS,
+                        gre_tnl_flags_to_gre_flags(p->i_flags)) ||
+           nla_put_be16(skb, IFLA_GRE_OFLAGS,
+                        gre_tnl_flags_to_gre_flags(p->o_flags)) ||
            nla_put_be32(skb, IFLA_GRE_IKEY, p->i_key) ||
            nla_put_be32(skb, IFLA_GRE_OKEY, p->o_key) ||
            nla_put_in_addr(skb, IFLA_GRE_LOCAL, p->iph.saddr) ||
index e3d7827..751c065 100644
@@ -218,17 +218,17 @@ static int ip_local_deliver_finish(struct net *net, struct sock *sk, struct sk_b
                                protocol = -ret;
                                goto resubmit;
                        }
-                       IP_INC_STATS_BH(net, IPSTATS_MIB_INDELIVERS);
+                       __IP_INC_STATS(net, IPSTATS_MIB_INDELIVERS);
                } else {
                        if (!raw) {
                                if (xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) {
-                                       IP_INC_STATS_BH(net, IPSTATS_MIB_INUNKNOWNPROTOS);
+                                       __IP_INC_STATS(net, IPSTATS_MIB_INUNKNOWNPROTOS);
                                        icmp_send(skb, ICMP_DEST_UNREACH,
                                                  ICMP_PROT_UNREACH, 0);
                                }
                                kfree_skb(skb);
                        } else {
-                               IP_INC_STATS_BH(net, IPSTATS_MIB_INDELIVERS);
+                               __IP_INC_STATS(net, IPSTATS_MIB_INDELIVERS);
                                consume_skb(skb);
                        }
                }
@@ -273,7 +273,7 @@ static inline bool ip_rcv_options(struct sk_buff *skb)
                                              --ANK (980813)
        */
        if (skb_cow(skb, skb_headroom(skb))) {
-               IP_INC_STATS_BH(dev_net(dev), IPSTATS_MIB_INDISCARDS);
+               __IP_INC_STATS(dev_net(dev), IPSTATS_MIB_INDISCARDS);
                goto drop;
        }
 
@@ -282,7 +282,7 @@ static inline bool ip_rcv_options(struct sk_buff *skb)
        opt->optlen = iph->ihl*4 - sizeof(struct iphdr);
 
        if (ip_options_compile(dev_net(dev), opt, skb)) {
-               IP_INC_STATS_BH(dev_net(dev), IPSTATS_MIB_INHDRERRORS);
+               __IP_INC_STATS(dev_net(dev), IPSTATS_MIB_INHDRERRORS);
                goto drop;
        }
 
@@ -337,7 +337,7 @@ static int ip_rcv_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
                                               iph->tos, skb->dev);
                if (unlikely(err)) {
                        if (err == -EXDEV)
-                               NET_INC_STATS_BH(net, LINUX_MIB_IPRPFILTER);
+                               __NET_INC_STATS(net, LINUX_MIB_IPRPFILTER);
                        goto drop;
                }
        }
@@ -358,9 +358,9 @@ static int ip_rcv_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
 
        rt = skb_rtable(skb);
        if (rt->rt_type == RTN_MULTICAST) {
-               IP_UPD_PO_STATS_BH(net, IPSTATS_MIB_INMCAST, skb->len);
+               __IP_UPD_PO_STATS(net, IPSTATS_MIB_INMCAST, skb->len);
        } else if (rt->rt_type == RTN_BROADCAST) {
-               IP_UPD_PO_STATS_BH(net, IPSTATS_MIB_INBCAST, skb->len);
+               __IP_UPD_PO_STATS(net, IPSTATS_MIB_INBCAST, skb->len);
        } else if (skb->pkt_type == PACKET_BROADCAST ||
                   skb->pkt_type == PACKET_MULTICAST) {
                struct in_device *in_dev = __in_dev_get_rcu(skb->dev);
@@ -409,11 +409,11 @@ int ip_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt,
 
 
        net = dev_net(dev);
-       IP_UPD_PO_STATS_BH(net, IPSTATS_MIB_IN, skb->len);
+       __IP_UPD_PO_STATS(net, IPSTATS_MIB_IN, skb->len);
 
        skb = skb_share_check(skb, GFP_ATOMIC);
        if (!skb) {
-               IP_INC_STATS_BH(net, IPSTATS_MIB_INDISCARDS);
+               __IP_INC_STATS(net, IPSTATS_MIB_INDISCARDS);
                goto out;
        }
 
@@ -439,9 +439,9 @@ int ip_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt,
        BUILD_BUG_ON(IPSTATS_MIB_ECT1PKTS != IPSTATS_MIB_NOECTPKTS + INET_ECN_ECT_1);
        BUILD_BUG_ON(IPSTATS_MIB_ECT0PKTS != IPSTATS_MIB_NOECTPKTS + INET_ECN_ECT_0);
        BUILD_BUG_ON(IPSTATS_MIB_CEPKTS != IPSTATS_MIB_NOECTPKTS + INET_ECN_CE);
-       IP_ADD_STATS_BH(net,
-                       IPSTATS_MIB_NOECTPKTS + (iph->tos & INET_ECN_MASK),
-                       max_t(unsigned short, 1, skb_shinfo(skb)->gso_segs));
+       __IP_ADD_STATS(net,
+                      IPSTATS_MIB_NOECTPKTS + (iph->tos & INET_ECN_MASK),
+                      max_t(unsigned short, 1, skb_shinfo(skb)->gso_segs));
 
        if (!pskb_may_pull(skb, iph->ihl*4))
                goto inhdr_error;
@@ -453,7 +453,7 @@ int ip_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt,
 
        len = ntohs(iph->tot_len);
        if (skb->len < len) {
-               IP_INC_STATS_BH(net, IPSTATS_MIB_INTRUNCATEDPKTS);
+               __IP_INC_STATS(net, IPSTATS_MIB_INTRUNCATEDPKTS);
                goto drop;
        } else if (len < (iph->ihl*4))
                goto inhdr_error;
@@ -463,7 +463,7 @@ int ip_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt,
         * Note this now means skb->len holds ntohs(iph->tot_len).
         */
        if (pskb_trim_rcsum(skb, len)) {
-               IP_INC_STATS_BH(net, IPSTATS_MIB_INDISCARDS);
+               __IP_INC_STATS(net, IPSTATS_MIB_INDISCARDS);
                goto drop;
        }
 
@@ -480,9 +480,9 @@ int ip_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt,
                       ip_rcv_finish);
 
 csum_error:
-       IP_INC_STATS_BH(net, IPSTATS_MIB_CSUMERRORS);
+       __IP_INC_STATS(net, IPSTATS_MIB_CSUMERRORS);
 inhdr_error:
-       IP_INC_STATS_BH(net, IPSTATS_MIB_INHDRERRORS);
+       __IP_INC_STATS(net, IPSTATS_MIB_INHDRERRORS);
 drop:
        kfree_skb(skb);
 out:
index 279471c..bdb222c 100644
@@ -510,9 +510,10 @@ int ip_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
                copied = len;
        }
        err = skb_copy_datagram_msg(skb, 0, msg, copied);
-       if (err)
-               goto out_free_skb;
-
+       if (unlikely(err)) {
+               kfree_skb(skb);
+               return err;
+       }
        sock_recv_timestamp(msg, sk, skb);
 
        serr = SKB_EXT_ERR(skb);
@@ -544,8 +545,7 @@ int ip_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
        msg->msg_flags |= MSG_ERRQUEUE;
        err = copied;
 
-out_free_skb:
-       kfree_skb(skb);
+       consume_skb(skb);
 out:
        return err;
 }
index 6aad019..a69ed94 100644
@@ -326,12 +326,12 @@ static int ip_tunnel_bind_dev(struct net_device *dev)
 
                if (!IS_ERR(rt)) {
                        tdev = rt->dst.dev;
-                       dst_cache_set_ip4(&tunnel->dst_cache, &rt->dst,
-                                         fl4.saddr);
                        ip_rt_put(rt);
                }
                if (dev->type != ARPHRD_ETHER)
                        dev->flags |= IFF_POINTOPOINT;
+
+               dst_cache_reset(&tunnel->dst_cache);
        }
 
        if (!tdev && tunnel->parms.link)
index 786fa7c..9118b0e 100644
@@ -157,7 +157,7 @@ int iptunnel_handle_offloads(struct sk_buff *skb,
        }
 
        if (skb_is_gso(skb)) {
-               err = skb_unclone(skb, GFP_ATOMIC);
+               err = skb_header_unclone(skb, GFP_ATOMIC);
                if (unlikely(err))
                        return err;
                skb_shinfo(skb)->gso_type |= gso_type_mask;
index 60398a9..8c8c655 100644
@@ -915,11 +915,11 @@ static int ip_error(struct sk_buff *skb)
        if (!IN_DEV_FORWARD(in_dev)) {
                switch (rt->dst.error) {
                case EHOSTUNREACH:
-                       IP_INC_STATS_BH(net, IPSTATS_MIB_INADDRERRORS);
+                       __IP_INC_STATS(net, IPSTATS_MIB_INADDRERRORS);
                        break;
 
                case ENETUNREACH:
-                       IP_INC_STATS_BH(net, IPSTATS_MIB_INNOROUTES);
+                       __IP_INC_STATS(net, IPSTATS_MIB_INNOROUTES);
                        break;
                }
                goto out;
@@ -934,7 +934,7 @@ static int ip_error(struct sk_buff *skb)
                break;
        case ENETUNREACH:
                code = ICMP_NET_UNREACH;
-               IP_INC_STATS_BH(net, IPSTATS_MIB_INNOROUTES);
+               __IP_INC_STATS(net, IPSTATS_MIB_INNOROUTES);
                break;
        case EACCES:
                code = ICMP_PKT_FILTERED;
index 4c04f09..e3c4043 100644
@@ -312,11 +312,11 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb)
 
        mss = __cookie_v4_check(ip_hdr(skb), th, cookie);
        if (mss == 0) {
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SYNCOOKIESFAILED);
+               __NET_INC_STATS(sock_net(sk), LINUX_MIB_SYNCOOKIESFAILED);
                goto out;
        }
 
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SYNCOOKIESRECV);
+       __NET_INC_STATS(sock_net(sk), LINUX_MIB_SYNCOOKIESRECV);
 
        /* check for timestamp cookie support */
        memset(&tcp_opt, 0, sizeof(tcp_opt));
index 4d73858..5c7ed14 100644
@@ -430,14 +430,15 @@ EXPORT_SYMBOL(tcp_init_sock);
 
 static void tcp_tx_timestamp(struct sock *sk, u16 tsflags, struct sk_buff *skb)
 {
-       if (sk->sk_tsflags || tsflags) {
+       if (tsflags) {
                struct skb_shared_info *shinfo = skb_shinfo(skb);
                struct tcp_skb_cb *tcb = TCP_SKB_CB(skb);
 
                sock_tx_timestamp(sk, tsflags, &shinfo->tx_flags);
-               if (shinfo->tx_flags & SKBTX_ANY_TSTAMP)
+               if (tsflags & SOF_TIMESTAMPING_TX_ACK)
+                       tcb->txstamp_ack = 1;
+               if (tsflags & SOF_TIMESTAMPING_TX_RECORD_MASK)
                        shinfo->tskey = TCP_SKB_CB(skb)->seq + skb->len - 1;
-               tcb->txstamp_ack = !!(shinfo->tx_flags & SKBTX_ACK_TSTAMP);
        }
 }
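After this rework, tcp_tx_timestamp() keys purely off the per-write tsflags that tcp_sendmsg() passes in via sockc.tsflags, and SOF_TIMESTAMPING_TX_ACK now sets txstamp_ack directly instead of being inferred back from tx_flags. For reference, userspace opts a TCP socket into ACK-completion timestamps roughly like this (sketch; Linux-specific flags and headers assumed):

#include <linux/net_tstamp.h>
#include <sys/socket.h>

/* Ask for software TX timestamps reported when the peer ACKs the data. */
static int enable_ack_timestamps(int fd)
{
	int val = SOF_TIMESTAMPING_TX_ACK |
		  SOF_TIMESTAMPING_SOFTWARE |
		  SOF_TIMESTAMPING_OPT_ID;	/* tag reports with a counter */

	return setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING,
			  &val, sizeof(val));
}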
 
@@ -908,7 +909,8 @@ static ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset,
                int copy, i;
                bool can_coalesce;
 
-               if (!tcp_send_head(sk) || (copy = size_goal - skb->len) <= 0) {
+               if (!tcp_send_head(sk) || (copy = size_goal - skb->len) <= 0 ||
+                   !tcp_skb_can_collapse_to(skb)) {
 new_segment:
                        if (!sk_stream_memory_free(sk))
                                goto wait_for_sndbuf;
@@ -1082,6 +1084,7 @@ int tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
        struct sockcm_cookie sockc;
        int flags, err, copied = 0;
        int mss_now = 0, size_goal, copied_syn = 0;
+       bool process_backlog = false;
        bool sg;
        long timeo;
 
@@ -1134,11 +1137,12 @@ int tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
        /* This should be in poll */
        sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);
 
-       mss_now = tcp_send_mss(sk, &size_goal, flags);
-
        /* Ok commence sending. */
        copied = 0;
 
+restart:
+       mss_now = tcp_send_mss(sk, &size_goal, flags);
+
        err = -EPIPE;
        if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN))
                goto out_err;
@@ -1156,7 +1160,7 @@ int tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
                        copy = max - skb->len;
                }
 
-               if (copy <= 0) {
+               if (copy <= 0 || !tcp_skb_can_collapse_to(skb)) {
 new_segment:
                        /* Allocate new segment. If the interface is SG,
                         * allocate skb fitting to single page.
@@ -1164,6 +1168,10 @@ new_segment:
                        if (!sk_stream_memory_free(sk))
                                goto wait_for_sndbuf;
 
+                       if (process_backlog && sk_flush_backlog(sk)) {
+                               process_backlog = false;
+                               goto restart;
+                       }
                        skb = sk_stream_alloc_skb(sk,
                                                  select_size(sk, sg),
                                                  sk->sk_allocation,
@@ -1171,6 +1179,7 @@ new_segment:
                        if (!skb)
                                goto wait_for_memory;
 
+                       process_backlog = true;
                        /*
                         * Check whether we can use HW checksum.
                         */
@@ -1250,6 +1259,8 @@ new_segment:
                copied += copy;
                if (!msg_data_left(msg)) {
                        tcp_tx_timestamp(sk, sockc.tsflags, skb);
+                       if (unlikely(flags & MSG_EOR))
+                               TCP_SKB_CB(skb)->eor = 1;
                        goto out;
                }
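The MSG_EOR hunk above gives sendmsg() a way to mark a record boundary: setting the eor bit on the last skb of a write stops later writes from being collapsed into it (see the tcp_skb_can_collapse_to() checks added earlier in this file). From userspace it is just a flag on the final send of a record (illustrative snippet):

#include <sys/socket.h>
#include <sys/types.h>

/* Finish an application-level record; MSG_EOR asks TCP not to
 * coalesce subsequent sends into this packet's skb.
 */
static ssize_t send_record_end(int fd, const void *buf, size_t len)
{
	return send(fd, buf, len, MSG_EOR);
}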
 
@@ -1443,14 +1454,10 @@ static void tcp_prequeue_process(struct sock *sk)
        struct sk_buff *skb;
        struct tcp_sock *tp = tcp_sk(sk);
 
-       NET_INC_STATS_USER(sock_net(sk), LINUX_MIB_TCPPREQUEUED);
+       NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPPREQUEUED);
 
-       /* RX process wants to run with disabled BHs, though it is not
-        * necessary */
-       local_bh_disable();
        while ((skb = __skb_dequeue(&tp->ucopy.prequeue)) != NULL)
                sk_backlog_rcv(sk, skb);
-       local_bh_enable();
 
        /* Clear memory counter. */
        tp->ucopy.memory = 0;
@@ -1777,7 +1784,7 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
 
                        chunk = len - tp->ucopy.len;
                        if (chunk != 0) {
-                               NET_ADD_STATS_USER(sock_net(sk), LINUX_MIB_TCPDIRECTCOPYFROMBACKLOG, chunk);
+                               NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPDIRECTCOPYFROMBACKLOG, chunk);
                                len -= chunk;
                                copied += chunk;
                        }
@@ -1789,7 +1796,7 @@ do_prequeue:
 
                                chunk = len - tp->ucopy.len;
                                if (chunk != 0) {
-                                       NET_ADD_STATS_USER(sock_net(sk), LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE, chunk);
+                                       NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE, chunk);
                                        len -= chunk;
                                        copied += chunk;
                                }
@@ -1875,7 +1882,7 @@ skip_copy:
                        tcp_prequeue_process(sk);
 
                        if (copied > 0 && (chunk = len - tp->ucopy.len) != 0) {
-                               NET_ADD_STATS_USER(sock_net(sk), LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE, chunk);
+                               NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE, chunk);
                                len -= chunk;
                                copied += chunk;
                        }
@@ -2065,13 +2072,13 @@ void tcp_close(struct sock *sk, long timeout)
                sk->sk_prot->disconnect(sk, 0);
        } else if (data_was_unread) {
                /* Unread data was tossed, zap the connection. */
-               NET_INC_STATS_USER(sock_net(sk), LINUX_MIB_TCPABORTONCLOSE);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTONCLOSE);
                tcp_set_state(sk, TCP_CLOSE);
                tcp_send_active_reset(sk, sk->sk_allocation);
        } else if (sock_flag(sk, SOCK_LINGER) && !sk->sk_lingertime) {
                /* Check zero linger _after_ checking for unread data. */
                sk->sk_prot->disconnect(sk, 0);
-               NET_INC_STATS_USER(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
        } else if (tcp_close_state(sk)) {
                /* We FIN if the application ate all the data before
                 * zapping the connection.
@@ -2148,7 +2155,7 @@ adjudge_to_death:
                if (tp->linger2 < 0) {
                        tcp_set_state(sk, TCP_CLOSE);
                        tcp_send_active_reset(sk, GFP_ATOMIC);
-                       NET_INC_STATS_BH(sock_net(sk),
+                       __NET_INC_STATS(sock_net(sk),
                                        LINUX_MIB_TCPABORTONLINGER);
                } else {
                        const int tmo = tcp_fin_time(sk);
@@ -2167,7 +2174,7 @@ adjudge_to_death:
                if (tcp_check_oom(sk, 0)) {
                        tcp_set_state(sk, TCP_CLOSE);
                        tcp_send_active_reset(sk, GFP_ATOMIC);
-                       NET_INC_STATS_BH(sock_net(sk),
+                       __NET_INC_STATS(sock_net(sk),
                                        LINUX_MIB_TCPABORTONMEMORY);
                }
        }
@@ -3091,7 +3098,7 @@ void tcp_done(struct sock *sk)
        struct request_sock *req = tcp_sk(sk)->fastopen_rsk;
 
        if (sk->sk_state == TCP_SYN_SENT || sk->sk_state == TCP_SYN_RECV)
-               TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_ATTEMPTFAILS);
+               TCP_INC_STATS(sock_net(sk), TCP_MIB_ATTEMPTFAILS);
 
        tcp_set_state(sk, TCP_CLOSE);
        tcp_clear_xmit_timers(sk);
index 167b6a3..ccce8a5 100644
@@ -155,11 +155,11 @@ static void tcp_cdg_hystart_update(struct sock *sk)
 
                        ca->last_ack = now_us;
                        if (after(now_us, ca->round_start + base_owd)) {
-                               NET_INC_STATS_BH(sock_net(sk),
-                                                LINUX_MIB_TCPHYSTARTTRAINDETECT);
-                               NET_ADD_STATS_BH(sock_net(sk),
-                                                LINUX_MIB_TCPHYSTARTTRAINCWND,
-                                                tp->snd_cwnd);
+                               NET_INC_STATS(sock_net(sk),
+                                             LINUX_MIB_TCPHYSTARTTRAINDETECT);
+                               NET_ADD_STATS(sock_net(sk),
+                                             LINUX_MIB_TCPHYSTARTTRAINCWND,
+                                             tp->snd_cwnd);
                                tp->snd_ssthresh = tp->snd_cwnd;
                                return;
                        }
@@ -174,11 +174,11 @@ static void tcp_cdg_hystart_update(struct sock *sk)
                                         125U);
 
                        if (ca->rtt.min > thresh) {
-                               NET_INC_STATS_BH(sock_net(sk),
-                                                LINUX_MIB_TCPHYSTARTDELAYDETECT);
-                               NET_ADD_STATS_BH(sock_net(sk),
-                                                LINUX_MIB_TCPHYSTARTDELAYCWND,
-                                                tp->snd_cwnd);
+                               NET_INC_STATS(sock_net(sk),
+                                             LINUX_MIB_TCPHYSTARTDELAYDETECT);
+                               NET_ADD_STATS(sock_net(sk),
+                                             LINUX_MIB_TCPHYSTARTDELAYCWND,
+                                             tp->snd_cwnd);
                                tp->snd_ssthresh = tp->snd_cwnd;
                        }
                }
diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
index 448c261..0ce946e 100644
@@ -402,11 +402,11 @@ static void hystart_update(struct sock *sk, u32 delay)
                        ca->last_ack = now;
                        if ((s32)(now - ca->round_start) > ca->delay_min >> 4) {
                                ca->found |= HYSTART_ACK_TRAIN;
-                               NET_INC_STATS_BH(sock_net(sk),
-                                                LINUX_MIB_TCPHYSTARTTRAINDETECT);
-                               NET_ADD_STATS_BH(sock_net(sk),
-                                                LINUX_MIB_TCPHYSTARTTRAINCWND,
-                                                tp->snd_cwnd);
+                               NET_INC_STATS(sock_net(sk),
+                                             LINUX_MIB_TCPHYSTARTTRAINDETECT);
+                               NET_ADD_STATS(sock_net(sk),
+                                             LINUX_MIB_TCPHYSTARTTRAINCWND,
+                                             tp->snd_cwnd);
                                tp->snd_ssthresh = tp->snd_cwnd;
                        }
                }
@@ -423,11 +423,11 @@ static void hystart_update(struct sock *sk, u32 delay)
                        if (ca->curr_rtt > ca->delay_min +
                            HYSTART_DELAY_THRESH(ca->delay_min >> 3)) {
                                ca->found |= HYSTART_DELAY;
-                               NET_INC_STATS_BH(sock_net(sk),
-                                                LINUX_MIB_TCPHYSTARTDELAYDETECT);
-                               NET_ADD_STATS_BH(sock_net(sk),
-                                                LINUX_MIB_TCPHYSTARTDELAYCWND,
-                                                tp->snd_cwnd);
+                               NET_INC_STATS(sock_net(sk),
+                                             LINUX_MIB_TCPHYSTARTDELAYDETECT);
+                               NET_ADD_STATS(sock_net(sk),
+                                             LINUX_MIB_TCPHYSTARTDELAYCWND,
+                                             tp->snd_cwnd);
                                tp->snd_ssthresh = tp->snd_cwnd;
                        }
                }
diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
index cffd8f9..54d9f9b 100644
@@ -255,9 +255,9 @@ static bool tcp_fastopen_queue_check(struct sock *sk)
                spin_lock(&fastopenq->lock);
                req1 = fastopenq->rskq_rst_head;
                if (!req1 || time_after(req1->rsk_timer.expires, jiffies)) {
+                       __NET_INC_STATS(sock_net(sk),
+                                       LINUX_MIB_TCPFASTOPENLISTENOVERFLOW);
                        spin_unlock(&fastopenq->lock);
-                       NET_INC_STATS_BH(sock_net(sk),
-                                        LINUX_MIB_TCPFASTOPENLISTENOVERFLOW);
                        return false;
                }
                fastopenq->rskq_rst_head = req1->dl_next;
@@ -282,7 +282,7 @@ struct sock *tcp_try_fastopen(struct sock *sk, struct sk_buff *skb,
        struct sock *child;
 
        if (foc->len == 0) /* Client requests a cookie */
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPFASTOPENCOOKIEREQD);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPFASTOPENCOOKIEREQD);
 
        if (!((sysctl_tcp_fastopen & TFO_SERVER_ENABLE) &&
              (syn_data || foc->len >= 0) &&
@@ -311,13 +311,13 @@ fastopen:
                child = tcp_fastopen_create_child(sk, skb, dst, req);
                if (child) {
                        foc->len = -1;
-                       NET_INC_STATS_BH(sock_net(sk),
-                                        LINUX_MIB_TCPFASTOPENPASSIVE);
+                       NET_INC_STATS(sock_net(sk),
+                                     LINUX_MIB_TCPFASTOPENPASSIVE);
                        return child;
                }
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPFASTOPENPASSIVEFAIL);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPFASTOPENPASSIVEFAIL);
        } else if (foc->len > 0) /* Client presents an invalid cookie */
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPFASTOPENPASSIVEFAIL);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPFASTOPENPASSIVEFAIL);
 
        valid_foc.exp = foc->exp;
        *foc = valid_foc;
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index dcad8f9..a914e06 100644
@@ -869,7 +869,7 @@ static void tcp_update_reordering(struct sock *sk, const int metric,
                else
                        mib_idx = LINUX_MIB_TCPSACKREORDER;
 
-               NET_INC_STATS_BH(sock_net(sk), mib_idx);
+               NET_INC_STATS(sock_net(sk), mib_idx);
 #if FASTRETRANS_DEBUG > 1
                pr_debug("Disorder%d %d %u f%u s%u rr%d\n",
                         tp->rx_opt.sack_ok, inet_csk(sk)->icsk_ca_state,
@@ -1062,7 +1062,7 @@ static bool tcp_check_dsack(struct sock *sk, const struct sk_buff *ack_skb,
        if (before(start_seq_0, TCP_SKB_CB(ack_skb)->ack_seq)) {
                dup_sack = true;
                tcp_dsack_seen(tp);
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPDSACKRECV);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPDSACKRECV);
        } else if (num_sacks > 1) {
                u32 end_seq_1 = get_unaligned_be32(&sp[1].end_seq);
                u32 start_seq_1 = get_unaligned_be32(&sp[1].start_seq);
@@ -1071,7 +1071,7 @@ static bool tcp_check_dsack(struct sock *sk, const struct sk_buff *ack_skb,
                    !before(start_seq_0, start_seq_1)) {
                        dup_sack = true;
                        tcp_dsack_seen(tp);
-                       NET_INC_STATS_BH(sock_net(sk),
+                       NET_INC_STATS(sock_net(sk),
                                        LINUX_MIB_TCPDSACKOFORECV);
                }
        }
@@ -1289,7 +1289,7 @@ static bool tcp_shifted_skb(struct sock *sk, struct sk_buff *skb,
 
        if (skb->len > 0) {
                BUG_ON(!tcp_skb_pcount(skb));
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SACKSHIFTED);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_SACKSHIFTED);
                return false;
        }
 
@@ -1303,6 +1303,7 @@ static bool tcp_shifted_skb(struct sock *sk, struct sk_buff *skb,
        }
 
        TCP_SKB_CB(prev)->tcp_flags |= TCP_SKB_CB(skb)->tcp_flags;
+       TCP_SKB_CB(prev)->eor = TCP_SKB_CB(skb)->eor;
        if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
                TCP_SKB_CB(prev)->end_seq++;
 
@@ -1313,7 +1314,7 @@ static bool tcp_shifted_skb(struct sock *sk, struct sk_buff *skb,
        tcp_unlink_write_queue(skb, sk);
        sk_wmem_free_skb(sk, skb);
 
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SACKMERGED);
+       NET_INC_STATS(sock_net(sk), LINUX_MIB_SACKMERGED);
 
        return true;
 }
@@ -1368,6 +1369,9 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
        if ((TCP_SKB_CB(prev)->sacked & TCPCB_TAGBITS) != TCPCB_SACKED_ACKED)
                goto fallback;
 
+       if (!tcp_skb_can_collapse_to(prev))
+               goto fallback;
+
        in_sack = !after(start_seq, TCP_SKB_CB(skb)->seq) &&
                  !before(end_seq, TCP_SKB_CB(skb)->end_seq);
 
@@ -1469,7 +1473,7 @@ noop:
        return skb;
 
 fallback:
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SACKSHIFTFALLBACK);
+       NET_INC_STATS(sock_net(sk), LINUX_MIB_SACKSHIFTFALLBACK);
        return NULL;
 }
 
@@ -1657,7 +1661,7 @@ tcp_sacktag_write_queue(struct sock *sk, const struct sk_buff *ack_skb,
                                mib_idx = LINUX_MIB_TCPSACKDISCARD;
                        }
 
-                       NET_INC_STATS_BH(sock_net(sk), mib_idx);
+                       NET_INC_STATS(sock_net(sk), mib_idx);
                        if (i == 0)
                                first_sack_index = -1;
                        continue;
@@ -1909,7 +1913,7 @@ void tcp_enter_loss(struct sock *sk)
        skb = tcp_write_queue_head(sk);
        is_reneg = skb && (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED);
        if (is_reneg) {
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPSACKRENEGING);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSACKRENEGING);
                tp->sacked_out = 0;
                tp->fackets_out = 0;
        }
@@ -2395,7 +2399,7 @@ static bool tcp_try_undo_recovery(struct sock *sk)
                else
                        mib_idx = LINUX_MIB_TCPFULLUNDO;
 
-               NET_INC_STATS_BH(sock_net(sk), mib_idx);
+               NET_INC_STATS(sock_net(sk), mib_idx);
        }
        if (tp->snd_una == tp->high_seq && tcp_is_reno(tp)) {
                /* Hold old state until something *above* high_seq
@@ -2417,7 +2421,7 @@ static bool tcp_try_undo_dsack(struct sock *sk)
        if (tp->undo_marker && !tp->undo_retrans) {
                DBGUNDO(sk, "D-SACK");
                tcp_undo_cwnd_reduction(sk, false);
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPDSACKUNDO);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPDSACKUNDO);
                return true;
        }
        return false;
@@ -2432,10 +2436,10 @@ static bool tcp_try_undo_loss(struct sock *sk, bool frto_undo)
                tcp_undo_cwnd_reduction(sk, true);
 
                DBGUNDO(sk, "partial loss");
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPLOSSUNDO);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPLOSSUNDO);
                if (frto_undo)
-                       NET_INC_STATS_BH(sock_net(sk),
-                                        LINUX_MIB_TCPSPURIOUSRTOS);
+                       NET_INC_STATS(sock_net(sk),
+                                       LINUX_MIB_TCPSPURIOUSRTOS);
                inet_csk(sk)->icsk_retransmits = 0;
                if (frto_undo || tcp_is_sack(tp))
                        tcp_set_ca_state(sk, TCP_CA_Open);
@@ -2559,7 +2563,7 @@ static void tcp_mtup_probe_failed(struct sock *sk)
 
        icsk->icsk_mtup.search_high = icsk->icsk_mtup.probe_size - 1;
        icsk->icsk_mtup.probe_size = 0;
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPMTUPFAIL);
+       NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMTUPFAIL);
 }
 
 static void tcp_mtup_probe_success(struct sock *sk)
@@ -2579,7 +2583,7 @@ static void tcp_mtup_probe_success(struct sock *sk)
        icsk->icsk_mtup.search_low = icsk->icsk_mtup.probe_size;
        icsk->icsk_mtup.probe_size = 0;
        tcp_sync_mss(sk, icsk->icsk_pmtu_cookie);
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPMTUPSUCCESS);
+       NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMTUPSUCCESS);
 }
 
 /* Do a simple retransmit without using the backoff mechanisms in
@@ -2643,7 +2647,7 @@ static void tcp_enter_recovery(struct sock *sk, bool ece_ack)
        else
                mib_idx = LINUX_MIB_TCPSACKRECOVERY;
 
-       NET_INC_STATS_BH(sock_net(sk), mib_idx);
+       NET_INC_STATS(sock_net(sk), mib_idx);
 
        tp->prior_ssthresh = 0;
        tcp_init_undo(tp);
@@ -2736,7 +2740,7 @@ static bool tcp_try_undo_partial(struct sock *sk, const int acked)
 
                DBGUNDO(sk, "partial recovery");
                tcp_undo_cwnd_reduction(sk, true);
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPPARTIALUNDO);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPPARTIALUNDO);
                tcp_try_keep_open(sk);
                return true;
        }
@@ -3087,8 +3091,7 @@ static void tcp_ack_tstamp(struct sock *sk, struct sk_buff *skb,
                return;
 
        shinfo = skb_shinfo(skb);
-       if ((shinfo->tx_flags & SKBTX_ACK_TSTAMP) &&
-           !before(shinfo->tskey, prior_snd_una) &&
+       if (!before(shinfo->tskey, prior_snd_una) &&
            before(shinfo->tskey, tcp_sk(sk)->snd_una))
                __skb_tstamp_tx(skb, NULL, sk, SCM_TSTAMP_ACK);
 }
@@ -3352,9 +3355,10 @@ static void tcp_snd_una_update(struct tcp_sock *tp, u32 ack)
 {
        u32 delta = ack - tp->snd_una;
 
-       u64_stats_update_begin(&tp->syncp);
+       sock_owned_by_me((struct sock *)tp);
+       u64_stats_update_begin_raw(&tp->syncp);
        tp->bytes_acked += delta;
-       u64_stats_update_end(&tp->syncp);
+       u64_stats_update_end_raw(&tp->syncp);
        tp->snd_una = ack;
 }
 
@@ -3363,9 +3367,10 @@ static void tcp_rcv_nxt_update(struct tcp_sock *tp, u32 seq)
 {
        u32 delta = seq - tp->rcv_nxt;
 
-       u64_stats_update_begin(&tp->syncp);
+       sock_owned_by_me((struct sock *)tp);
+       u64_stats_update_begin_raw(&tp->syncp);
        tp->bytes_received += delta;
-       u64_stats_update_end(&tp->syncp);
+       u64_stats_update_end_raw(&tp->syncp);
        tp->rcv_nxt = seq;
 }
 
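
tp->bytes_acked and tp->bytes_received are only ever written with the socket owned, so the seqcount write side no longer needs the preemption protection of u64_stats_update_begin(); the _raw variants plus the new sock_owned_by_me() assertion document that the socket lock is the real serializer. The read side is unchanged and keeps the usual fetch loop, roughly what tcp_get_info() does:

        unsigned int start;
        u64 bytes;

        do {
                start = u64_stats_fetch_begin_irq(&tp->syncp);
                bytes = tp->bytes_acked;
        } while (u64_stats_fetch_retry_irq(&tp->syncp, start));
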
@@ -3431,7 +3436,7 @@ bool tcp_oow_rate_limited(struct net *net, const struct sk_buff *skb,
                s32 elapsed = (s32)(tcp_time_stamp - *last_oow_ack_time);
 
                if (0 <= elapsed && elapsed < sysctl_tcp_invalid_ratelimit) {
-                       NET_INC_STATS_BH(net, mib_idx);
+                       NET_INC_STATS(net, mib_idx);
                        return true;    /* rate-limited: don't send yet! */
                }
        }
@@ -3464,7 +3469,7 @@ static void tcp_send_challenge_ack(struct sock *sk, const struct sk_buff *skb)
                challenge_count = 0;
        }
        if (++challenge_count <= sysctl_tcp_challenge_ack_limit) {
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPCHALLENGEACK);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPCHALLENGEACK);
                tcp_send_ack(sk);
        }
 }
@@ -3513,8 +3518,8 @@ static void tcp_process_tlp_ack(struct sock *sk, u32 ack, int flag)
                tcp_set_ca_state(sk, TCP_CA_CWR);
                tcp_end_cwnd_reduction(sk);
                tcp_try_keep_open(sk);
-               NET_INC_STATS_BH(sock_net(sk),
-                                LINUX_MIB_TCPLOSSPROBERECOVERY);
+               NET_INC_STATS(sock_net(sk),
+                               LINUX_MIB_TCPLOSSPROBERECOVERY);
        } else if (!(flag & (FLAG_SND_UNA_ADVANCED |
                             FLAG_NOT_DUP | FLAG_DATA_SACKED))) {
                /* Pure dupack: original and TLP probe arrived; no loss */
@@ -3618,14 +3623,14 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 
                tcp_in_ack_event(sk, CA_ACK_WIN_UPDATE);
 
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPHPACKS);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPHPACKS);
        } else {
                u32 ack_ev_flags = CA_ACK_SLOWPATH;
 
                if (ack_seq != TCP_SKB_CB(skb)->end_seq)
                        flag |= FLAG_DATA;
                else
-                       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPPUREACKS);
+                       NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPPUREACKS);
 
                flag |= tcp_ack_update_window(sk, skb, ack, ack_seq);
 
@@ -4128,7 +4133,7 @@ static void tcp_dsack_set(struct sock *sk, u32 seq, u32 end_seq)
                else
                        mib_idx = LINUX_MIB_TCPDSACKOFOSENT;
 
-               NET_INC_STATS_BH(sock_net(sk), mib_idx);
+               NET_INC_STATS(sock_net(sk), mib_idx);
 
                tp->rx_opt.dsack = 1;
                tp->duplicate_sack[0].start_seq = seq;
@@ -4152,7 +4157,7 @@ static void tcp_send_dupack(struct sock *sk, const struct sk_buff *skb)
 
        if (TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq &&
            before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) {
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_DELAYEDACKLOST);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKLOST);
                tcp_enter_quickack_mode(sk);
 
                if (tcp_is_sack(tp) && sysctl_tcp_dsack) {
@@ -4302,7 +4307,7 @@ static bool tcp_try_coalesce(struct sock *sk,
 
        atomic_add(delta, &sk->sk_rmem_alloc);
        sk_mem_charge(sk, delta);
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPRCVCOALESCE);
+       NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRCVCOALESCE);
        TCP_SKB_CB(to)->end_seq = TCP_SKB_CB(from)->end_seq;
        TCP_SKB_CB(to)->ack_seq = TCP_SKB_CB(from)->ack_seq;
        TCP_SKB_CB(to)->tcp_flags |= TCP_SKB_CB(from)->tcp_flags;
@@ -4390,7 +4395,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
        tcp_ecn_check_ce(tp, skb);
 
        if (unlikely(tcp_try_rmem_schedule(sk, skb, skb->truesize))) {
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPOFODROP);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFODROP);
                tcp_drop(sk, skb);
                return;
        }
@@ -4399,7 +4404,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
        tp->pred_flags = 0;
        inet_csk_schedule_ack(sk);
 
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPOFOQUEUE);
+       NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFOQUEUE);
        SOCK_DEBUG(sk, "out of order segment: rcv_next %X seq %X - %X\n",
                   tp->rcv_nxt, TCP_SKB_CB(skb)->seq, TCP_SKB_CB(skb)->end_seq);
 
@@ -4454,7 +4459,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
        if (skb1 && before(seq, TCP_SKB_CB(skb1)->end_seq)) {
                if (!after(end_seq, TCP_SKB_CB(skb1)->end_seq)) {
                        /* All the bits are present. Drop. */
-                       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPOFOMERGE);
+                       NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFOMERGE);
                        tcp_drop(sk, skb);
                        skb = NULL;
                        tcp_dsack_set(sk, seq, end_seq);
@@ -4493,7 +4498,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
                __skb_unlink(skb1, &tp->out_of_order_queue);
                tcp_dsack_extend(sk, TCP_SKB_CB(skb1)->seq,
                                 TCP_SKB_CB(skb1)->end_seq);
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPOFOMERGE);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFOMERGE);
                tcp_drop(sk, skb1);
        }
 
@@ -4608,14 +4613,12 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
 
                        __set_current_state(TASK_RUNNING);
 
-                       local_bh_enable();
                        if (!skb_copy_datagram_msg(skb, 0, tp->ucopy.msg, chunk)) {
                                tp->ucopy.len -= chunk;
                                tp->copied_seq += chunk;
                                eaten = (chunk == skb->len);
                                tcp_rcv_space_adjust(sk);
                        }
-                       local_bh_disable();
                }
 
                if (eaten <= 0) {
@@ -4658,7 +4661,7 @@ queue_and_out:
 
        if (!after(TCP_SKB_CB(skb)->end_seq, tp->rcv_nxt)) {
                /* A retransmit, 2nd most common case.  Force an immediate ack. */
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_DELAYEDACKLOST);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKLOST);
                tcp_dsack_set(sk, TCP_SKB_CB(skb)->seq, TCP_SKB_CB(skb)->end_seq);
 
 out_of_window:
@@ -4704,7 +4707,7 @@ static struct sk_buff *tcp_collapse_one(struct sock *sk, struct sk_buff *skb,
 
        __skb_unlink(skb, list);
        __kfree_skb(skb);
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPRCVCOLLAPSED);
+       NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRCVCOLLAPSED);
 
        return next;
 }
@@ -4863,7 +4866,7 @@ static bool tcp_prune_ofo_queue(struct sock *sk)
        bool res = false;
 
        if (!skb_queue_empty(&tp->out_of_order_queue)) {
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_OFOPRUNED);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_OFOPRUNED);
                __skb_queue_purge(&tp->out_of_order_queue);
 
                /* Reset SACK state.  A conforming SACK implementation will
@@ -4892,7 +4895,7 @@ static int tcp_prune_queue(struct sock *sk)
 
        SOCK_DEBUG(sk, "prune_queue: c=%x\n", tp->copied_seq);
 
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_PRUNECALLED);
+       NET_INC_STATS(sock_net(sk), LINUX_MIB_PRUNECALLED);
 
        if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf)
                tcp_clamp_window(sk);
@@ -4922,7 +4925,7 @@ static int tcp_prune_queue(struct sock *sk)
         * drop receive data on the floor.  It will get retransmitted
         * and hopefully then we'll have sufficient space.
         */
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_RCVPRUNED);
+       NET_INC_STATS(sock_net(sk), LINUX_MIB_RCVPRUNED);
 
        /* Massive buffer overcommit. */
        tp->pred_flags = 0;
@@ -5131,7 +5134,6 @@ static int tcp_copy_to_iovec(struct sock *sk, struct sk_buff *skb, int hlen)
        int chunk = skb->len - hlen;
        int err;
 
-       local_bh_enable();
        if (skb_csum_unnecessary(skb))
                err = skb_copy_datagram_msg(skb, hlen, tp->ucopy.msg, chunk);
        else
@@ -5143,32 +5145,9 @@ static int tcp_copy_to_iovec(struct sock *sk, struct sk_buff *skb, int hlen)
                tcp_rcv_space_adjust(sk);
        }
 
-       local_bh_disable();
        return err;
 }
 
-static __sum16 __tcp_checksum_complete_user(struct sock *sk,
-                                           struct sk_buff *skb)
-{
-       __sum16 result;
-
-       if (sock_owned_by_user(sk)) {
-               local_bh_enable();
-               result = __tcp_checksum_complete(skb);
-               local_bh_disable();
-       } else {
-               result = __tcp_checksum_complete(skb);
-       }
-       return result;
-}
-
-static inline bool tcp_checksum_complete_user(struct sock *sk,
-                                            struct sk_buff *skb)
-{
-       return !skb_csum_unnecessary(skb) &&
-              __tcp_checksum_complete_user(sk, skb);
-}
-
 /* Does PAWS and seqno based validation of an incoming segment, flags will
  * play significant role here.
  */
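
The deleted wrapper existed only to re-enable BH around a potentially expensive checksum while the socket was owned by the user. With the BH on/off toggling around the user-space copies removed (see the hunks above), the distinction disappears and the plain helper from include/net/tcp.h is used directly; it is essentially:

        static inline bool tcp_checksum_complete(struct sk_buff *skb)
        {
                return !skb_csum_unnecessary(skb) &&
                       __tcp_checksum_complete(skb);
        }
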
@@ -5181,7 +5160,7 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb,
        if (tcp_fast_parse_options(skb, th, tp) && tp->rx_opt.saw_tstamp &&
            tcp_paws_discard(sk, skb)) {
                if (!th->rst) {
-                       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
+                       NET_INC_STATS(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
                        if (!tcp_oow_rate_limited(sock_net(sk), skb,
                                                  LINUX_MIB_TCPACKSKIPPEDPAWS,
                                                  &tp->last_oow_ack_time))
@@ -5233,8 +5212,8 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb,
        if (th->syn) {
 syn_challenge:
                if (syn_inerr)
-                       TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_INERRS);
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPSYNCHALLENGE);
+                       TCP_INC_STATS(sock_net(sk), TCP_MIB_INERRS);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNCHALLENGE);
                tcp_send_challenge_ack(sk, skb);
                goto discard;
        }
@@ -5349,7 +5328,7 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb,
                                tcp_data_snd_check(sk);
                                return;
                        } else { /* Header too small */
-                               TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_INERRS);
+                               TCP_INC_STATS(sock_net(sk), TCP_MIB_INERRS);
                                goto discard;
                        }
                } else {
@@ -5377,12 +5356,13 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb,
 
                                        __skb_pull(skb, tcp_header_len);
                                        tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq);
-                                       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPHPHITSTOUSER);
+                                       NET_INC_STATS(sock_net(sk),
+                                                       LINUX_MIB_TCPHPHITSTOUSER);
                                        eaten = 1;
                                }
                        }
                        if (!eaten) {
-                               if (tcp_checksum_complete_user(sk, skb))
+                               if (tcp_checksum_complete(skb))
                                        goto csum_error;
 
                                if ((int)skb->truesize > sk->sk_forward_alloc)
@@ -5399,7 +5379,7 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb,
 
                                tcp_rcv_rtt_measure_ts(sk, skb);
 
-                               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPHPHITS);
+                               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPHPHITS);
 
                                /* Bulk data transfer: receiver */
                                eaten = tcp_queue_rcv(sk, skb, tcp_header_len,
@@ -5426,7 +5406,7 @@ no_ack:
        }
 
 slow_path:
-       if (len < (th->doff << 2) || tcp_checksum_complete_user(sk, skb))
+       if (len < (th->doff << 2) || tcp_checksum_complete(skb))
                goto csum_error;
 
        if (!th->ack && !th->rst && !th->syn)
@@ -5456,8 +5436,8 @@ step5:
        return;
 
 csum_error:
-       TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_CSUMERRORS);
-       TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_INERRS);
+       TCP_INC_STATS(sock_net(sk), TCP_MIB_CSUMERRORS);
+       TCP_INC_STATS(sock_net(sk), TCP_MIB_INERRS);
 
 discard:
        tcp_drop(sk, skb);
@@ -5549,12 +5529,14 @@ static bool tcp_rcv_fastopen_synack(struct sock *sk, struct sk_buff *synack,
                                break;
                }
                tcp_rearm_rto(sk);
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPFASTOPENACTIVEFAIL);
+               NET_INC_STATS(sock_net(sk),
+                               LINUX_MIB_TCPFASTOPENACTIVEFAIL);
                return true;
        }
        tp->syn_data_acked = tp->syn_data;
        if (tp->syn_data_acked)
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPFASTOPENACTIVE);
+               NET_INC_STATS(sock_net(sk),
+                               LINUX_MIB_TCPFASTOPENACTIVE);
 
        tcp_fastopen_add_skb(sk, synack);
 
@@ -5589,7 +5571,8 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb,
                if (tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr &&
                    !between(tp->rx_opt.rcv_tsecr, tp->retrans_stamp,
                             tcp_time_stamp)) {
-                       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_PAWSACTIVEREJECTED);
+                       NET_INC_STATS(sock_net(sk),
+                                       LINUX_MIB_PAWSACTIVEREJECTED);
                        goto reset_and_undo;
                }
 
@@ -5815,24 +5798,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
                        if (icsk->icsk_af_ops->conn_request(sk, skb) < 0)
                                return 1;
 
-                       /* Now we have several options: In theory there is
-                        * nothing else in the frame. KA9Q has an option to
-                        * send data with the syn, BSD accepts data with the
-                        * syn up to the [to be] advertised window and
-                        * Solaris 2.1 gives you a protocol error. For now
-                        * we just ignore it, that fits the spec precisely
-                        * and avoids incompatibilities. It would be nice in
-                        * future to drop through and process the data.
-                        *
-                        * Now that TTCP is starting to be used we ought to
-                        * queue this data.
-                        * But, this leaves one open to an easy denial of
-                        * service attack, and SYN cookies can't defend
-                        * against this problem. So, we drop the data
-                        * in the interest of security over speed unless
-                        * it's still in use.
-                        */
-                       kfree_skb(skb);
+                       consume_skb(skb);
                        return 0;
                }
                goto discard;
@@ -5975,7 +5941,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
                    (TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq &&
                     after(TCP_SKB_CB(skb)->end_seq - th->fin, tp->rcv_nxt))) {
                        tcp_done(sk);
-                       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
+                       NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
                        return 1;
                }
 
@@ -6032,7 +5998,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
                if (sk->sk_shutdown & RCV_SHUTDOWN) {
                        if (TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq &&
                            after(TCP_SKB_CB(skb)->end_seq - th->fin, tp->rcv_nxt)) {
-                               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
+                               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
                                tcp_reset(sk);
                                return 1;
                        }
@@ -6170,10 +6136,10 @@ static bool tcp_syn_flood_action(const struct sock *sk,
        if (net->ipv4.sysctl_tcp_syncookies) {
                msg = "Sending cookies";
                want_cookie = true;
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPREQQFULLDOCOOKIES);
+               __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPREQQFULLDOCOOKIES);
        } else
 #endif
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPREQQFULLDROP);
+               __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPREQQFULLDROP);
 
        if (!queue->synflood_warned &&
            net->ipv4.sysctl_tcp_syncookies != 2 &&
@@ -6234,7 +6200,7 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
         * timeout.
         */
        if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1) {
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
                goto drop;
        }
 
@@ -6281,7 +6247,7 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
                        if (dst && strict &&
                            !tcp_peer_is_proven(req, dst, true,
                                                tmp_opt.saw_tstamp)) {
-                               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_PAWSPASSIVEREJECTED);
+                               NET_INC_STATS(sock_net(sk), LINUX_MIB_PAWSPASSIVEREJECTED);
                                goto drop_and_release;
                        }
                }
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index d2a5763..8219d0d 100644
@@ -320,7 +320,7 @@ void tcp_req_err(struct sock *sk, u32 seq, bool abort)
         * an established socket here.
         */
        if (seq != tcp_rsk(req)->snt_isn) {
-               NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS);
+               __NET_INC_STATS(net, LINUX_MIB_OUTOFWINDOWICMPS);
        } else if (abort) {
                /*
                 * Still in SYN_RECV, just remove it silently.
@@ -372,7 +372,7 @@ void tcp_v4_err(struct sk_buff *icmp_skb, u32 info)
                                       th->dest, iph->saddr, ntohs(th->source),
                                       inet_iif(icmp_skb));
        if (!sk) {
-               ICMP_INC_STATS_BH(net, ICMP_MIB_INERRORS);
+               __ICMP_INC_STATS(net, ICMP_MIB_INERRORS);
                return;
        }
        if (sk->sk_state == TCP_TIME_WAIT) {
@@ -396,13 +396,13 @@ void tcp_v4_err(struct sk_buff *icmp_skb, u32 info)
         */
        if (sock_owned_by_user(sk)) {
                if (!(type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED))
-                       NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS);
+                       __NET_INC_STATS(net, LINUX_MIB_LOCKDROPPEDICMPS);
        }
        if (sk->sk_state == TCP_CLOSE)
                goto out;
 
        if (unlikely(iph->ttl < inet_sk(sk)->min_ttl)) {
-               NET_INC_STATS_BH(net, LINUX_MIB_TCPMINTTLDROP);
+               __NET_INC_STATS(net, LINUX_MIB_TCPMINTTLDROP);
                goto out;
        }
 
@@ -413,7 +413,7 @@ void tcp_v4_err(struct sk_buff *icmp_skb, u32 info)
        snd_una = fastopen ? tcp_rsk(fastopen)->snt_isn : tp->snd_una;
        if (sk->sk_state != TCP_LISTEN &&
            !between(seq, snd_una, tp->snd_nxt)) {
-               NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS);
+               __NET_INC_STATS(net, LINUX_MIB_OUTOFWINDOWICMPS);
                goto out;
        }
 
@@ -692,13 +692,15 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
                     offsetof(struct inet_timewait_sock, tw_bound_dev_if));
 
        arg.tos = ip_hdr(skb)->tos;
+       local_bh_disable();
        ip_send_unicast_reply(*this_cpu_ptr(net->ipv4.tcp_sk),
                              skb, &TCP_SKB_CB(skb)->header.h4.opt,
                              ip_hdr(skb)->saddr, ip_hdr(skb)->daddr,
                              &arg, arg.iov[0].iov_len);
 
-       TCP_INC_STATS_BH(net, TCP_MIB_OUTSEGS);
-       TCP_INC_STATS_BH(net, TCP_MIB_OUTRSTS);
+       __TCP_INC_STATS(net, TCP_MIB_OUTSEGS);
+       __TCP_INC_STATS(net, TCP_MIB_OUTRSTS);
+       local_bh_enable();
 
 #ifdef CONFIG_TCP_MD5SIG
 out:
@@ -774,12 +776,14 @@ static void tcp_v4_send_ack(struct net *net,
        if (oif)
                arg.bound_dev_if = oif;
        arg.tos = tos;
+       local_bh_disable();
        ip_send_unicast_reply(*this_cpu_ptr(net->ipv4.tcp_sk),
                              skb, &TCP_SKB_CB(skb)->header.h4.opt,
                              ip_hdr(skb)->saddr, ip_hdr(skb)->daddr,
                              &arg, arg.iov[0].iov_len);
 
-       TCP_INC_STATS_BH(net, TCP_MIB_OUTSEGS);
+       __TCP_INC_STATS(net, TCP_MIB_OUTSEGS);
+       local_bh_enable();
 }
 
 static void tcp_v4_timewait_ack(struct sock *sk, struct sk_buff *skb)
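
Both reply paths above can now be reached from process context, but ip_send_unicast_reply() works on a per-cpu control socket and the stats bumps use the __ variants, so an explicit bracket is required. A sketch of the hazard it closes (ctl_sk is a local name used here for illustration, not taken from the patch):

        local_bh_disable();     /* no cpu migration, no softirq preemption */
        ctl_sk = *this_cpu_ptr(net->ipv4.tcp_sk);
        /* transmit via ctl_sk and bump per-cpu MIBs with __TCP_INC_STATS();
         * without the bracket, a softirq on this cpu could grab the same
         * control socket, and migration would break the per-cpu accounting
         */
        local_bh_enable();
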
@@ -1151,12 +1155,12 @@ static bool tcp_v4_inbound_md5_hash(const struct sock *sk,
                return false;
 
        if (hash_expected && !hash_location) {
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPMD5NOTFOUND);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMD5NOTFOUND);
                return true;
        }
 
        if (!hash_expected && hash_location) {
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPMD5UNEXPECTED);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMD5UNEXPECTED);
                return true;
        }
 
@@ -1342,7 +1346,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
        return newsk;
 
 exit_overflow:
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
+       NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
 exit_nonewsk:
        dst_release(dst);
 exit:
@@ -1432,8 +1436,8 @@ discard:
        return 0;
 
 csum_err:
-       TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_CSUMERRORS);
-       TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_INERRS);
+       TCP_INC_STATS(sock_net(sk), TCP_MIB_CSUMERRORS);
+       TCP_INC_STATS(sock_net(sk), TCP_MIB_INERRS);
        goto discard;
 }
 EXPORT_SYMBOL(tcp_v4_do_rcv);
@@ -1506,16 +1510,16 @@ bool tcp_prequeue(struct sock *sk, struct sk_buff *skb)
 
        __skb_queue_tail(&tp->ucopy.prequeue, skb);
        tp->ucopy.memory += skb->truesize;
-       if (tp->ucopy.memory > sk->sk_rcvbuf) {
+       if (skb_queue_len(&tp->ucopy.prequeue) >= 32 ||
+           tp->ucopy.memory + atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf) {
                struct sk_buff *skb1;
 
                BUG_ON(sock_owned_by_user(sk));
+               __NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPPREQUEUEDROPPED,
+                               skb_queue_len(&tp->ucopy.prequeue));
 
-               while ((skb1 = __skb_dequeue(&tp->ucopy.prequeue)) != NULL) {
+               while ((skb1 = __skb_dequeue(&tp->ucopy.prequeue)) != NULL)
                        sk_backlog_rcv(sk, skb1);
-                       NET_INC_STATS_BH(sock_net(sk),
-                                        LINUX_MIB_TCPPREQUEUEDROPPED);
-               }
 
                tp->ucopy.memory = 0;
        } else if (skb_queue_len(&tp->ucopy.prequeue) == 1) {
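
Two tweaks hide in this hunk. First, the flush threshold now also counts memory already charged to sk_rmem_alloc and caps the prequeue at 32 skbs. Second, the per-skb counter bump moves out of the flush loop: the skbs are not dropped, they are handed to sk_backlog_rcv(), so counting the whole batch once with __NET_ADD_STATS() is cheaper and no less accurate. Schematically:

        /* before: one increment per flushed skb */
        while ((skb1 = __skb_dequeue(q)) != NULL) {
                sk_backlog_rcv(sk, skb1);
                NET_INC_STATS_BH(net, mib);
        }

        /* after: one batched add, taken before the queue is drained */
        __NET_ADD_STATS(net, mib, skb_queue_len(q));
        while ((skb1 = __skb_dequeue(q)) != NULL)
                sk_backlog_rcv(sk, skb1);
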
@@ -1547,7 +1551,7 @@ int tcp_v4_rcv(struct sk_buff *skb)
                goto discard_it;
 
        /* Count it even if it's bad */
-       TCP_INC_STATS_BH(net, TCP_MIB_INSEGS);
+       __TCP_INC_STATS(net, TCP_MIB_INSEGS);
 
        if (!pskb_may_pull(skb, sizeof(struct tcphdr)))
                goto discard_it;
@@ -1629,7 +1633,7 @@ process:
                }
        }
        if (unlikely(iph->ttl < inet_sk(sk)->min_ttl)) {
-               NET_INC_STATS_BH(net, LINUX_MIB_TCPMINTTLDROP);
+               __NET_INC_STATS(net, LINUX_MIB_TCPMINTTLDROP);
                goto discard_and_relse;
        }
 
@@ -1662,7 +1666,7 @@ process:
        } else if (unlikely(sk_add_backlog(sk, skb,
                                           sk->sk_rcvbuf + sk->sk_sndbuf))) {
                bh_unlock_sock(sk);
-               NET_INC_STATS_BH(net, LINUX_MIB_TCPBACKLOGDROP);
+               __NET_INC_STATS(net, LINUX_MIB_TCPBACKLOGDROP);
                goto discard_and_relse;
        }
        bh_unlock_sock(sk);
@@ -1679,9 +1683,9 @@ no_tcp_socket:
 
        if (tcp_checksum_complete(skb)) {
 csum_error:
-               TCP_INC_STATS_BH(net, TCP_MIB_CSUMERRORS);
+               __TCP_INC_STATS(net, TCP_MIB_CSUMERRORS);
 bad_packet:
-               TCP_INC_STATS_BH(net, TCP_MIB_INERRS);
+               __TCP_INC_STATS(net, TCP_MIB_INERRS);
        } else {
                tcp_v4_send_reset(NULL, skb);
        }
@@ -1835,7 +1839,9 @@ void tcp_v4_destroy_sock(struct sock *sk)
        tcp_free_fastopen_req(tp);
        tcp_saved_syn_free(tp);
 
+       local_bh_disable();
        sk_sockets_allocated_dec(sk);
+       local_bh_enable();
 
        if (mem_cgroup_sockets_enabled && sk->sk_memcg)
                sock_release_memcg(sk);
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index 4c53e7c..4b95ec4 100644
@@ -235,7 +235,7 @@ kill:
        }
 
        if (paws_reject)
-               NET_INC_STATS_BH(twsk_net(tw), LINUX_MIB_PAWSESTABREJECTED);
+               __NET_INC_STATS(twsk_net(tw), LINUX_MIB_PAWSESTABREJECTED);
 
        if (!th->rst) {
                /* In this case we must reset the TIMEWAIT timer.
@@ -337,7 +337,7 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
                 * socket up.  We've got bigger problems than
                 * non-graceful socket closings.
                 */
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPTIMEWAITOVERFLOW);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPTIMEWAITOVERFLOW);
        }
 
        tcp_update_metrics(sk);
@@ -545,7 +545,7 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
                newtp->rack.mstamp.v64 = 0;
                newtp->rack.advanced = 0;
 
-               TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_PASSIVEOPENS);
+               __TCP_INC_STATS(sock_net(sk), TCP_MIB_PASSIVEOPENS);
        }
        return newsk;
 }
@@ -710,7 +710,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
                                          &tcp_rsk(req)->last_oow_ack_time))
                        req->rsk_ops->send_ack(sk, skb, req);
                if (paws_reject)
-                       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
+                       __NET_INC_STATS(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
                return NULL;
        }
 
@@ -729,7 +729,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
         *         "fourth, check the SYN bit"
         */
        if (flg & (TCP_FLAG_RST|TCP_FLAG_SYN)) {
-               TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_ATTEMPTFAILS);
+               __TCP_INC_STATS(sock_net(sk), TCP_MIB_ATTEMPTFAILS);
                goto embryonic_reset;
        }
 
@@ -752,7 +752,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
        if (req->num_timeout < inet_csk(sk)->icsk_accept_queue.rskq_defer_accept &&
            TCP_SKB_CB(skb)->end_seq == tcp_rsk(req)->rcv_isn + 1) {
                inet_rsk(req)->acked = 1;
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPDEFERACCEPTDROP);
+               __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPDEFERACCEPTDROP);
                return NULL;
        }
 
@@ -791,7 +791,7 @@ embryonic_reset:
        }
        if (!fastopen) {
                inet_csk_reqsk_queue_drop(sk, req);
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_EMBRYONICRSTS);
+               __NET_INC_STATS(sock_net(sk), LINUX_MIB_EMBRYONICRSTS);
        }
        return NULL;
 }
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 9d3b4b3..8daefd8 100644
@@ -949,7 +949,7 @@ static int tcp_transmit_skb(struct sock *sk, struct sk_buff *skb, int clone_it,
 
        skb_orphan(skb);
        skb->sk = sk;
-       skb->destructor = skb_is_tcp_pure_ack(skb) ? sock_wfree : tcp_wfree;
+       skb->destructor = skb_is_tcp_pure_ack(skb) ? __sock_wfree : tcp_wfree;
        skb_set_hash_from_sk(skb, sk);
        atomic_add(skb->truesize, &sk->sk_wmem_alloc);
 
@@ -1111,11 +1111,17 @@ static void tcp_adjust_pcount(struct sock *sk, const struct sk_buff *skb, int de
        tcp_verify_left_out(tp);
 }
 
+static bool tcp_has_tx_tstamp(const struct sk_buff *skb)
+{
+       return TCP_SKB_CB(skb)->txstamp_ack ||
+               (skb_shinfo(skb)->tx_flags & SKBTX_ANY_TSTAMP);
+}
+
 static void tcp_fragment_tstamp(struct sk_buff *skb, struct sk_buff *skb2)
 {
        struct skb_shared_info *shinfo = skb_shinfo(skb);
 
-       if (unlikely(shinfo->tx_flags & SKBTX_ANY_TSTAMP) &&
+       if (unlikely(tcp_has_tx_tstamp(skb)) &&
            !before(shinfo->tskey, TCP_SKB_CB(skb2)->seq)) {
                struct skb_shared_info *shinfo2 = skb_shinfo(skb2);
                u8 tsflags = shinfo->tx_flags & SKBTX_ANY_TSTAMP;
@@ -1128,6 +1134,12 @@ static void tcp_fragment_tstamp(struct sk_buff *skb, struct sk_buff *skb2)
        }
 }
 
+static void tcp_skb_fragment_eor(struct sk_buff *skb, struct sk_buff *skb2)
+{
+       TCP_SKB_CB(skb2)->eor = TCP_SKB_CB(skb)->eor;
+       TCP_SKB_CB(skb)->eor = 0;
+}
+
 /* Function to create two new TCP segments.  Shrinks the given segment
  * to the specified size and appends a new segment with the rest of the
  * packet to the list.  This won't be called frequently, I hope.
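
The eor bit marks an skb that ends a MSG_EOR record, and skb surgery has to preserve it: when a segment is split, the mark belongs on the tail half (tcp_skb_fragment_eor() above), and two segments must never be merged across a record boundary, which is what the tcp_skb_can_collapse_to() checks in the SACK-shift and retransmit-collapse paths guard against. The helper is roughly:

        static inline bool tcp_skb_can_collapse_to(const struct sk_buff *skb)
        {
                /* never append more payload to a record-ending skb */
                return likely(!TCP_SKB_CB(skb)->eor);
        }
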
@@ -1173,6 +1185,7 @@ int tcp_fragment(struct sock *sk, struct sk_buff *skb, u32 len,
        TCP_SKB_CB(skb)->tcp_flags = flags & ~(TCPHDR_FIN | TCPHDR_PSH);
        TCP_SKB_CB(buff)->tcp_flags = flags;
        TCP_SKB_CB(buff)->sacked = TCP_SKB_CB(skb)->sacked;
+       tcp_skb_fragment_eor(skb, buff);
 
        if (!skb_shinfo(skb)->nr_frags && skb->ip_summed != CHECKSUM_PARTIAL) {
                /* Copy and checksum data tail into the new buffer. */
@@ -1733,6 +1746,8 @@ static int tso_fragment(struct sock *sk, struct sk_buff *skb, unsigned int len,
        /* This packet was never sent out yet, so no SACK bits. */
        TCP_SKB_CB(buff)->sacked = 0;
 
+       tcp_skb_fragment_eor(skb, buff);
+
        buff->ip_summed = skb->ip_summed = CHECKSUM_PARTIAL;
        skb_split(skb, buff, len);
        tcp_fragment_tstamp(skb, buff);
@@ -2206,14 +2221,13 @@ bool tcp_schedule_loss_probe(struct sock *sk)
 /* Thanks to skb fast clones, we can detect if a prior transmit of
  * a packet is still in a qdisc or driver queue.
  * In this case, there is very little point doing a retransmit !
- * Note: This is called from BH context only.
  */
 static bool skb_still_in_host_queue(const struct sock *sk,
                                    const struct sk_buff *skb)
 {
        if (unlikely(skb_fclone_busy(sk, skb))) {
-               NET_INC_STATS_BH(sock_net(sk),
-                                LINUX_MIB_TCPSPURIOUS_RTX_HOSTQUEUES);
+               NET_INC_STATS(sock_net(sk),
+                             LINUX_MIB_TCPSPURIOUS_RTX_HOSTQUEUES);
                return true;
        }
        return false;
@@ -2275,7 +2289,7 @@ void tcp_send_loss_probe(struct sock *sk)
        tp->tlp_high_seq = tp->snd_nxt;
 
 probe_sent:
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPLOSSPROBES);
+       NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPLOSSPROBES);
        /* Reset s.t. tcp_rearm_rto will restart timer from now */
        inet_csk(sk)->icsk_pending = 0;
 rearm_timer:
@@ -2446,13 +2460,12 @@ u32 __tcp_select_window(struct sock *sk)
 void tcp_skb_collapse_tstamp(struct sk_buff *skb,
                             const struct sk_buff *next_skb)
 {
-       const struct skb_shared_info *next_shinfo = skb_shinfo(next_skb);
-       u8 tsflags = next_shinfo->tx_flags & SKBTX_ANY_TSTAMP;
-
-       if (unlikely(tsflags)) {
+       if (unlikely(tcp_has_tx_tstamp(next_skb))) {
+               const struct skb_shared_info *next_shinfo =
+                       skb_shinfo(next_skb);
                struct skb_shared_info *shinfo = skb_shinfo(skb);
 
-               shinfo->tx_flags |= tsflags;
+               shinfo->tx_flags |= next_shinfo->tx_flags & SKBTX_ANY_TSTAMP;
                shinfo->tskey = next_shinfo->tskey;
                TCP_SKB_CB(skb)->txstamp_ack |=
                        TCP_SKB_CB(next_skb)->txstamp_ack;
@@ -2494,6 +2507,7 @@ static void tcp_collapse_retrans(struct sock *sk, struct sk_buff *skb)
         * packet counting does not break.
         */
        TCP_SKB_CB(skb)->sacked |= TCP_SKB_CB(next_skb)->sacked & TCPCB_EVER_RETRANS;
+       TCP_SKB_CB(skb)->eor = TCP_SKB_CB(next_skb)->eor;
 
        /* changed transmit queue under us so clear hints */
        tcp_clear_retrans_hints_partial(tp);
@@ -2545,6 +2559,9 @@ static void tcp_retrans_try_collapse(struct sock *sk, struct sk_buff *to,
                if (!tcp_can_collapse(sk, skb))
                        break;
 
+               if (!tcp_skb_can_collapse_to(to))
+                       break;
+
                space -= skb->len;
 
                if (first) {
@@ -2656,7 +2673,7 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs)
                /* Update global TCP statistics. */
                TCP_ADD_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS, segs);
                if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN)
-                       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
+                       __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
                tp->total_retrans += segs;
        }
        return err;
@@ -2681,7 +2698,7 @@ int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs)
                        tp->retrans_stamp = tcp_skb_timestamp(skb);
 
        } else if (err != -EBUSY) {
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPRETRANSFAIL);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRETRANSFAIL);
        }
 
        if (tp->undo_retrans < 0)
@@ -2805,7 +2822,7 @@ begin_fwd:
                if (tcp_retransmit_skb(sk, skb, segs))
                        return;
 
-               NET_INC_STATS_BH(sock_net(sk), mib_idx);
+               NET_INC_STATS(sock_net(sk), mib_idx);
 
                if (tcp_in_cwnd_reduction(sk))
                        tp->prr_out += tcp_skb_pcount(skb);
@@ -3042,7 +3059,7 @@ struct sk_buff *tcp_make_synack(const struct sock *sk, struct dst_entry *dst,
        th->window = htons(min(req->rsk_rcv_wnd, 65535U));
        tcp_options_write((__be32 *)(th + 1), NULL, &opts);
        th->doff = (tcp_header_size >> 2);
-       TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_OUTSEGS);
+       __TCP_INC_STATS(sock_net(sk), TCP_MIB_OUTSEGS);
 
 #ifdef CONFIG_TCP_MD5SIG
        /* Okay, we have all we need - do the md5 hash if needed */
@@ -3540,8 +3557,8 @@ int tcp_rtx_synack(const struct sock *sk, struct request_sock *req)
        tcp_rsk(req)->txhash = net_tx_rndhash();
        res = af_ops->send_synack(sk, NULL, &fl, req, NULL, TCP_SYNACK_NORMAL);
        if (!res) {
-               TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_RETRANSSEGS);
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
+               __TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);
+               __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
        }
        return res;
 }
diff --git a/net/ipv4/tcp_recovery.c b/net/ipv4/tcp_recovery.c
index 5353085..e36df4f 100644
@@ -65,8 +65,8 @@ int tcp_rack_mark_lost(struct sock *sk)
                        if (scb->sacked & TCPCB_SACKED_RETRANS) {
                                scb->sacked &= ~TCPCB_SACKED_RETRANS;
                                tp->retrans_out -= tcp_skb_pcount(skb);
-                               NET_INC_STATS_BH(sock_net(sk),
-                                                LINUX_MIB_TCPLOSTRETRANSMIT);
+                               NET_INC_STATS(sock_net(sk),
+                                             LINUX_MIB_TCPLOSTRETRANSMIT);
                        }
                } else if (!(scb->sacked & TCPCB_RETRANS)) {
                        /* Original data are sent sequentially so stop early
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index 373b03e..debdd8b 100644
@@ -30,7 +30,7 @@ static void tcp_write_err(struct sock *sk)
        sk->sk_error_report(sk);
 
        tcp_done(sk);
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONTIMEOUT);
+       __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTONTIMEOUT);
 }
 
 /* Do not allow orphaned sockets to eat all our resources.
@@ -68,7 +68,7 @@ static int tcp_out_of_resources(struct sock *sk, bool do_reset)
                if (do_reset)
                        tcp_send_active_reset(sk, GFP_ATOMIC);
                tcp_done(sk);
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONMEMORY);
+               __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTONMEMORY);
                return 1;
        }
        return 0;
@@ -162,8 +162,8 @@ static int tcp_write_timeout(struct sock *sk)
                        if (tp->syn_fastopen || tp->syn_data)
                                tcp_fastopen_cache_set(sk, 0, NULL, true, 0);
                        if (tp->syn_data && icsk->icsk_retransmits == 1)
-                               NET_INC_STATS_BH(sock_net(sk),
-                                                LINUX_MIB_TCPFASTOPENACTIVEFAIL);
+                               NET_INC_STATS(sock_net(sk),
+                                             LINUX_MIB_TCPFASTOPENACTIVEFAIL);
                }
                retry_until = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_syn_retries;
                syn_set = true;
@@ -178,8 +178,8 @@ static int tcp_write_timeout(struct sock *sk)
                            tp->bytes_acked <= tp->rx_opt.mss_clamp) {
                                tcp_fastopen_cache_set(sk, 0, NULL, true, 0);
                                if (icsk->icsk_retransmits == net->ipv4.sysctl_tcp_retries1)
-                                       NET_INC_STATS_BH(sock_net(sk),
-                                                        LINUX_MIB_TCPFASTOPENACTIVEFAIL);
+                                       NET_INC_STATS(sock_net(sk),
+                                                     LINUX_MIB_TCPFASTOPENACTIVEFAIL);
                        }
                        /* Black hole detection */
                        tcp_mtu_probing(icsk, sk);
@@ -209,6 +209,7 @@ static int tcp_write_timeout(struct sock *sk)
        return 0;
 }
 
+/* Called with BH disabled */
 void tcp_delack_timer_handler(struct sock *sk)
 {
        struct tcp_sock *tp = tcp_sk(sk);
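
The new annotation makes the locking contract explicit: tcp_delack_timer_handler() (and tcp_write_timer_handler() below) run either from the timer softirq or from tcp_release_cb() with BH disabled, which is what licenses the bare __NET_INC_STATS() calls inside them. Under that contract, a debug assertion of the form below would hold, assuming in_softirq() semantics (true while serving a softirq or anywhere BH is disabled):

        WARN_ON_ONCE(!in_softirq());
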
@@ -228,7 +229,7 @@ void tcp_delack_timer_handler(struct sock *sk)
        if (!skb_queue_empty(&tp->ucopy.prequeue)) {
                struct sk_buff *skb;
 
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPSCHEDULERFAILED);
+               __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSCHEDULERFAILED);
 
                while ((skb = __skb_dequeue(&tp->ucopy.prequeue)) != NULL)
                        sk_backlog_rcv(sk, skb);
@@ -248,7 +249,7 @@ void tcp_delack_timer_handler(struct sock *sk)
                        icsk->icsk_ack.ato      = TCP_ATO_MIN;
                }
                tcp_send_ack(sk);
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_DELAYEDACKS);
+               __NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKS);
        }
 
 out:
@@ -265,7 +266,7 @@ static void tcp_delack_timer(unsigned long data)
                tcp_delack_timer_handler(sk);
        } else {
                inet_csk(sk)->icsk_ack.blocked = 1;
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_DELAYEDACKLOCKED);
+               __NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKLOCKED);
                /* delegate our work to tcp_release_cb() */
                if (!test_and_set_bit(TCP_DELACK_TIMER_DEFERRED, &tcp_sk(sk)->tsq_flags))
                        sock_hold(sk);
@@ -431,7 +432,7 @@ void tcp_retransmit_timer(struct sock *sk)
                } else {
                        mib_idx = LINUX_MIB_TCPTIMEOUTS;
                }
-               NET_INC_STATS_BH(sock_net(sk), mib_idx);
+               __NET_INC_STATS(sock_net(sk), mib_idx);
        }
 
        tcp_enter_loss(sk);
@@ -493,6 +494,7 @@ out_reset_timer:
 out:;
 }
 
+/* Called with BH disabled */
 void tcp_write_timer_handler(struct sock *sk)
 {
        struct inet_connection_sock *icsk = inet_csk(sk);
@@ -549,7 +551,7 @@ void tcp_syn_ack_timeout(const struct request_sock *req)
 {
        struct net *net = read_pnet(&inet_rsk(req)->ireq_net);
 
-       NET_INC_STATS_BH(net, LINUX_MIB_TCPTIMEOUTS);
+       __NET_INC_STATS(net, LINUX_MIB_TCPTIMEOUTS);
 }
 EXPORT_SYMBOL(tcp_syn_ack_timeout);
 
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 76ea0a8..f67f52b 100644
@@ -688,7 +688,7 @@ void __udp4_lib_err(struct sk_buff *skb, u32 info, struct udp_table *udptable)
                        iph->saddr, uh->source, skb->dev->ifindex, udptable,
                        NULL);
        if (!sk) {
-               ICMP_INC_STATS_BH(net, ICMP_MIB_INERRORS);
+               __ICMP_INC_STATS(net, ICMP_MIB_INERRORS);
                return; /* No socket for error */
        }
 
@@ -882,13 +882,13 @@ send:
        err = ip_send_skb(sock_net(sk), skb);
        if (err) {
                if (err == -ENOBUFS && !inet->recverr) {
-                       UDP_INC_STATS_USER(sock_net(sk),
-                                          UDP_MIB_SNDBUFERRORS, is_udplite);
+                       UDP_INC_STATS(sock_net(sk),
+                                     UDP_MIB_SNDBUFERRORS, is_udplite);
                        err = 0;
                }
        } else
-               UDP_INC_STATS_USER(sock_net(sk),
-                                  UDP_MIB_OUTDATAGRAMS, is_udplite);
+               UDP_INC_STATS(sock_net(sk),
+                             UDP_MIB_OUTDATAGRAMS, is_udplite);
        return err;
 }
 
@@ -1157,8 +1157,8 @@ out:
         * seems like overkill.
         */
        if (err == -ENOBUFS || test_bit(SOCK_NOSPACE, &sk->sk_socket->flags)) {
-               UDP_INC_STATS_USER(sock_net(sk),
-                               UDP_MIB_SNDBUFERRORS, is_udplite);
+               UDP_INC_STATS(sock_net(sk),
+                             UDP_MIB_SNDBUFERRORS, is_udplite);
        }
        return err;
 
@@ -1242,10 +1242,10 @@ static unsigned int first_packet_length(struct sock *sk)
        spin_lock_bh(&rcvq->lock);
        while ((skb = skb_peek(rcvq)) != NULL &&
                udp_lib_checksum_complete(skb)) {
-               UDP_INC_STATS_BH(sock_net(sk), UDP_MIB_CSUMERRORS,
-                                IS_UDPLITE(sk));
-               UDP_INC_STATS_BH(sock_net(sk), UDP_MIB_INERRORS,
-                                IS_UDPLITE(sk));
+               __UDP_INC_STATS(sock_net(sk), UDP_MIB_CSUMERRORS,
+                               IS_UDPLITE(sk));
+               __UDP_INC_STATS(sock_net(sk), UDP_MIB_INERRORS,
+                               IS_UDPLITE(sk));
                atomic_inc(&sk->sk_drops);
                __skb_unlink(skb, rcvq);
                __skb_queue_tail(&list_kill, skb);
@@ -1352,16 +1352,16 @@ try_again:
                trace_kfree_skb(skb, udp_recvmsg);
                if (!peeked) {
                        atomic_inc(&sk->sk_drops);
-                       UDP_INC_STATS_USER(sock_net(sk),
-                                          UDP_MIB_INERRORS, is_udplite);
+                       UDP_INC_STATS(sock_net(sk),
+                                     UDP_MIB_INERRORS, is_udplite);
                }
                skb_free_datagram_locked(sk, skb);
                return err;
        }
 
        if (!peeked)
-               UDP_INC_STATS_USER(sock_net(sk),
-                               UDP_MIB_INDATAGRAMS, is_udplite);
+               UDP_INC_STATS(sock_net(sk),
+                             UDP_MIB_INDATAGRAMS, is_udplite);
 
        sock_recv_ts_and_drops(msg, sk, skb);
 
@@ -1386,8 +1386,8 @@ try_again:
 csum_copy_err:
        slow = lock_sock_fast(sk);
        if (!skb_kill_datagram(sk, skb, flags)) {
-               UDP_INC_STATS_USER(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite);
-               UDP_INC_STATS_USER(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
+               UDP_INC_STATS(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite);
+               UDP_INC_STATS(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
        }
        unlock_sock_fast(sk, slow);
 
@@ -1514,9 +1514,9 @@ static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 
                /* Note that an ENOMEM error is charged twice */
                if (rc == -ENOMEM)
-                       UDP_INC_STATS_BH(sock_net(sk), UDP_MIB_RCVBUFERRORS,
-                                        is_udplite);
-               UDP_INC_STATS_BH(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
+                       UDP_INC_STATS(sock_net(sk), UDP_MIB_RCVBUFERRORS,
+                                       is_udplite);
+               UDP_INC_STATS(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
                kfree_skb(skb);
                trace_udp_fail_queue_rcv_skb(rc, sk);
                return -1;
@@ -1580,9 +1580,9 @@ int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 
                        ret = encap_rcv(sk, skb);
                        if (ret <= 0) {
-                               UDP_INC_STATS_BH(sock_net(sk),
-                                                UDP_MIB_INDATAGRAMS,
-                                                is_udplite);
+                               __UDP_INC_STATS(sock_net(sk),
+                                               UDP_MIB_INDATAGRAMS,
+                                               is_udplite);
                                return -ret;
                        }
                }
@@ -1633,8 +1633,8 @@ int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 
        udp_csum_pull_header(skb);
        if (sk_rcvqueues_full(sk, sk->sk_rcvbuf)) {
-               UDP_INC_STATS_BH(sock_net(sk), UDP_MIB_RCVBUFERRORS,
-                                is_udplite);
+               __UDP_INC_STATS(sock_net(sk), UDP_MIB_RCVBUFERRORS,
+                               is_udplite);
                goto drop;
        }
 
@@ -1653,9 +1653,9 @@ int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
        return rc;
 
 csum_error:
-       UDP_INC_STATS_BH(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite);
+       __UDP_INC_STATS(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite);
 drop:
-       UDP_INC_STATS_BH(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
+       __UDP_INC_STATS(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
        atomic_inc(&sk->sk_drops);
        kfree_skb(skb);
        return -1;
@@ -1715,10 +1715,10 @@ start_lookup:
 
                if (unlikely(!nskb)) {
                        atomic_inc(&sk->sk_drops);
-                       UDP_INC_STATS_BH(net, UDP_MIB_RCVBUFERRORS,
-                                        IS_UDPLITE(sk));
-                       UDP_INC_STATS_BH(net, UDP_MIB_INERRORS,
-                                        IS_UDPLITE(sk));
+                       __UDP_INC_STATS(net, UDP_MIB_RCVBUFERRORS,
+                                       IS_UDPLITE(sk));
+                       __UDP_INC_STATS(net, UDP_MIB_INERRORS,
+                                       IS_UDPLITE(sk));
                        continue;
                }
                if (udp_queue_rcv_skb(sk, nskb) > 0)
@@ -1736,8 +1736,8 @@ start_lookup:
                        consume_skb(skb);
        } else {
                kfree_skb(skb);
-               UDP_INC_STATS_BH(net, UDP_MIB_IGNOREDMULTI,
-                                proto == IPPROTO_UDPLITE);
+               __UDP_INC_STATS(net, UDP_MIB_IGNOREDMULTI,
+                               proto == IPPROTO_UDPLITE);
        }
        return 0;
 }
@@ -1851,7 +1851,7 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
        if (udp_lib_checksum_complete(skb))
                goto csum_error;
 
-       UDP_INC_STATS_BH(net, UDP_MIB_NOPORTS, proto == IPPROTO_UDPLITE);
+       __UDP_INC_STATS(net, UDP_MIB_NOPORTS, proto == IPPROTO_UDPLITE);
        icmp_send(skb, ICMP_DEST_UNREACH, ICMP_PORT_UNREACH, 0);
 
        /*
@@ -1878,9 +1878,9 @@ csum_error:
                            proto == IPPROTO_UDPLITE ? "Lite" : "",
                            &saddr, ntohs(uh->source), &daddr, ntohs(uh->dest),
                            ulen);
-       UDP_INC_STATS_BH(net, UDP_MIB_CSUMERRORS, proto == IPPROTO_UDPLITE);
+       __UDP_INC_STATS(net, UDP_MIB_CSUMERRORS, proto == IPPROTO_UDPLITE);
 drop:
-       UDP_INC_STATS_BH(net, UDP_MIB_INERRORS, proto == IPPROTO_UDPLITE);
+       __UDP_INC_STATS(net, UDP_MIB_INERRORS, proto == IPPROTO_UDPLITE);
        kfree_skb(skb);
        return 0;
 }
diff --git a/net/ipv6/Kconfig b/net/ipv6/Kconfig
index 11e875f..3f84113 100644
@@ -218,6 +218,7 @@ config IPV6_GRE
        tristate "IPv6: GRE tunnel"
        select IPV6_TUNNEL
        select NET_IP_TUNNEL
+       depends on NET_IPGRE_DEMUX
        ---help---
          Tunneling means encapsulating data of one protocol type within
          another protocol and sending it over a channel that understands the
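
ip6_gre now relies on the shared GRE demultiplexer for header parsing (see
the gre_rcv() rewrite below), hence the new hard dependency. A kernel
configuration enabling the tunnel would therefore carry, for example:

	CONFIG_NET_IPGRE_DEMUX=m
	CONFIG_IPV6_GRE=m
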
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index f5a77a9..47f837a 100644
@@ -3175,35 +3175,9 @@ static void addrconf_gre_config(struct net_device *dev)
 }
 #endif
 
-#if IS_ENABLED(CONFIG_NET_L3_MASTER_DEV)
-/* If the host route is cached on the addr struct make sure it is associated
- * with the proper table. e.g., enslavement can change and if so the cached
- * host route needs to move to the new table.
- */
-static void l3mdev_check_host_rt(struct inet6_dev *idev,
-                                 struct inet6_ifaddr *ifp)
-{
-       if (ifp->rt) {
-               u32 tb_id = l3mdev_fib_table(idev->dev) ? : RT6_TABLE_LOCAL;
-
-               if (tb_id != ifp->rt->rt6i_table->tb6_id) {
-                       ip6_del_rt(ifp->rt);
-                       ifp->rt = NULL;
-               }
-       }
-}
-#else
-static void l3mdev_check_host_rt(struct inet6_dev *idev,
-                                 struct inet6_ifaddr *ifp)
-{
-}
-#endif
-
 static int fixup_permanent_addr(struct inet6_dev *idev,
                                struct inet6_ifaddr *ifp)
 {
-       l3mdev_check_host_rt(idev, ifp);
-
        if (!ifp->rt) {
                struct rt6_info *rt;
 
@@ -3303,6 +3277,9 @@ static int addrconf_notify(struct notifier_block *this, unsigned long event,
                        break;
 
                if (event == NETDEV_UP) {
+                       /* restore routes for permanent addresses */
+                       addrconf_permanent_addr(dev);
+
                        if (!addrconf_qdisc_ok(dev)) {
                                /* device is not ready yet. */
                                pr_info("ADDRCONF(NETDEV_UP): %s: link is not ready\n",
@@ -3336,9 +3313,6 @@ static int addrconf_notify(struct notifier_block *this, unsigned long event,
                        run_pending = 1;
                }
 
-               /* restore routes for permanent addresses */
-               addrconf_permanent_addr(dev);
-
                switch (dev->type) {
 #if IS_ENABLED(CONFIG_IPV6_SIT)
                case ARPHRD_SIT:
@@ -3555,6 +3529,8 @@ restart:
 
        INIT_LIST_HEAD(&del_list);
        list_for_each_entry_safe(ifa, tmp, &idev->addr_list, if_list) {
+               struct rt6_info *rt = NULL;
+
                addrconf_del_dad_work(ifa);
 
                write_unlock_bh(&idev->lock);
@@ -3567,6 +3543,9 @@ restart:
                        ifa->state = 0;
                        if (!(ifa->flags & IFA_F_NODAD))
                                ifa->flags |= IFA_F_TENTATIVE;
+
+                       rt = ifa->rt;
+                       ifa->rt = NULL;
                } else {
                        state = ifa->state;
                        ifa->state = INET6_IFADDR_STATE_DEAD;
@@ -3577,6 +3556,9 @@ restart:
 
                spin_unlock_bh(&ifa->lock);
 
+               if (rt)
+                       ip6_del_rt(rt);
+
                if (state != INET6_IFADDR_STATE_DEAD) {
                        __ipv6_ifa_notify(RTM_DELADDR, ifa);
                        inet6addr_notifier_call_chain(NETDEV_DOWN, ifa);
@@ -5344,10 +5326,10 @@ static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp)
                        if (rt)
                                ip6_del_rt(rt);
                }
-               dst_hold(&ifp->rt->dst);
-
-               ip6_del_rt(ifp->rt);
-
+               if (ifp->rt) {
+                       dst_hold(&ifp->rt->dst);
+                       ip6_del_rt(ifp->rt);
+               }
                rt_genid_bump_ipv6(net);
                break;
        }
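
The addrconf hunks above share one locking pattern: the cached host route
is detached from the address under the per-address spinlock, but dropped
only after the lock is released, since ip6_del_rt() takes routing-table
locks of its own. Schematically (a sketch of the pattern, not the exact
code):

	struct rt6_info *rt = NULL;

	spin_lock_bh(&ifa->lock);
	rt = ifa->rt;		/* steal the cached route under the lock */
	ifa->rt = NULL;
	spin_unlock_bh(&ifa->lock);

	if (rt)
		ip6_del_rt(rt);	/* safe: no address lock held here */
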
diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
index 3962b6c..00d0c29 100644
@@ -450,9 +450,10 @@ int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
                copied = len;
        }
        err = skb_copy_datagram_msg(skb, 0, msg, copied);
-       if (err)
-               goto out_free_skb;
-
+       if (unlikely(err)) {
+               kfree_skb(skb);
+               return err;
+       }
        sock_recv_timestamp(msg, sk, skb);
 
        serr = SKB_EXT_ERR(skb);
@@ -509,8 +510,7 @@ int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
        msg->msg_flags |= MSG_ERRQUEUE;
        err = copied;
 
-out_free_skb:
-       kfree_skb(skb);
+       consume_skb(skb);
 out:
        return err;
 }
@@ -727,14 +727,13 @@ EXPORT_SYMBOL_GPL(ip6_datagram_recv_ctl);
 
 int ip6_datagram_send_ctl(struct net *net, struct sock *sk,
                          struct msghdr *msg, struct flowi6 *fl6,
-                         struct ipv6_txoptions *opt,
-                         int *hlimit, int *tclass, int *dontfrag,
-                         struct sockcm_cookie *sockc)
+                         struct ipcm6_cookie *ipc6, struct sockcm_cookie *sockc)
 {
        struct in6_pktinfo *src_info;
        struct cmsghdr *cmsg;
        struct ipv6_rt_hdr *rthdr;
        struct ipv6_opt_hdr *hdr;
+       struct ipv6_txoptions *opt = ipc6->opt;
        int len;
        int err = 0;
 
@@ -953,8 +952,8 @@ int ip6_datagram_send_ctl(struct net *net, struct sock *sk,
                                goto exit_f;
                        }
 
-                       *hlimit = *(int *)CMSG_DATA(cmsg);
-                       if (*hlimit < -1 || *hlimit > 0xff) {
+                       ipc6->hlimit = *(int *)CMSG_DATA(cmsg);
+                       if (ipc6->hlimit < -1 || ipc6->hlimit > 0xff) {
                                err = -EINVAL;
                                goto exit_f;
                        }
@@ -974,7 +973,7 @@ int ip6_datagram_send_ctl(struct net *net, struct sock *sk,
                                goto exit_f;
 
                        err = 0;
-                       *tclass = tc;
+                       ipc6->tclass = tc;
 
                        break;
                    }
@@ -992,7 +991,7 @@ int ip6_datagram_send_ctl(struct net *net, struct sock *sk,
                                goto exit_f;
 
                        err = 0;
-                       *dontfrag = df;
+                       ipc6->dontfrag = df;
 
                        break;
                    }
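
The hlimit/tclass/dontfrag/opt quadruplet that used to travel through
ip6_datagram_send_ctl() and ip6_append_data() as separate arguments is now
carried in a single struct ipcm6_cookie, which at this point looks roughly
like this (paraphrased from include/net/ipv6.h):

	struct ipcm6_cookie {
		__s16 hlimit;
		__s16 tclass;
		__s8  dontfrag;
		struct ipv6_txoptions *opt;
	};

A typical sender initializes it once and passes its address down, e.g.:

	struct ipcm6_cookie ipc6;

	ipc6.hlimit = -1;	/* -1 means: derive from route/socket */
	ipc6.tclass = -1;
	ipc6.dontfrag = -1;
	ipc6.opt = NULL;
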
diff --git a/net/ipv6/exthdrs.c b/net/ipv6/exthdrs.c
index ea7c4d6..8de5dd7 100644
@@ -258,8 +258,8 @@ static int ipv6_destopt_rcv(struct sk_buff *skb)
        if (!pskb_may_pull(skb, skb_transport_offset(skb) + 8) ||
            !pskb_may_pull(skb, (skb_transport_offset(skb) +
                                 ((skb_transport_header(skb)[1] + 1) << 3)))) {
-               IP6_INC_STATS_BH(dev_net(dst->dev), ip6_dst_idev(dst),
-                                IPSTATS_MIB_INHDRERRORS);
+               __IP6_INC_STATS(dev_net(dst->dev), ip6_dst_idev(dst),
+                               IPSTATS_MIB_INHDRERRORS);
                kfree_skb(skb);
                return -1;
        }
@@ -280,8 +280,8 @@ static int ipv6_destopt_rcv(struct sk_buff *skb)
                return 1;
        }
 
-       IP6_INC_STATS_BH(dev_net(dst->dev),
-                        ip6_dst_idev(dst), IPSTATS_MIB_INHDRERRORS);
+       __IP6_INC_STATS(dev_net(dst->dev),
+                       ip6_dst_idev(dst), IPSTATS_MIB_INHDRERRORS);
        return -1;
 }
 
@@ -309,8 +309,8 @@ static int ipv6_rthdr_rcv(struct sk_buff *skb)
        if (!pskb_may_pull(skb, skb_transport_offset(skb) + 8) ||
            !pskb_may_pull(skb, (skb_transport_offset(skb) +
                                 ((skb_transport_header(skb)[1] + 1) << 3)))) {
-               IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                                IPSTATS_MIB_INHDRERRORS);
+               __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                               IPSTATS_MIB_INHDRERRORS);
                kfree_skb(skb);
                return -1;
        }
@@ -319,8 +319,8 @@ static int ipv6_rthdr_rcv(struct sk_buff *skb)
 
        if (ipv6_addr_is_multicast(&ipv6_hdr(skb)->daddr) ||
            skb->pkt_type != PACKET_HOST) {
-               IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                                IPSTATS_MIB_INADDRERRORS);
+               __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                               IPSTATS_MIB_INADDRERRORS);
                kfree_skb(skb);
                return -1;
        }
@@ -334,8 +334,8 @@ looped_back:
                         * processed by own
                         */
                        if (!addr) {
-                               IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                                                IPSTATS_MIB_INADDRERRORS);
+                               __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                                               IPSTATS_MIB_INADDRERRORS);
                                kfree_skb(skb);
                                return -1;
                        }
@@ -360,8 +360,8 @@ looped_back:
                        goto unknown_rh;
                /* Silently discard invalid RTH type 2 */
                if (hdr->hdrlen != 2 || hdr->segments_left != 1) {
-                       IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                                        IPSTATS_MIB_INHDRERRORS);
+                       __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                                       IPSTATS_MIB_INHDRERRORS);
                        kfree_skb(skb);
                        return -1;
                }
@@ -379,8 +379,8 @@ looped_back:
        n = hdr->hdrlen >> 1;
 
        if (hdr->segments_left > n) {
-               IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                                IPSTATS_MIB_INHDRERRORS);
+               __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                               IPSTATS_MIB_INHDRERRORS);
                icmpv6_param_prob(skb, ICMPV6_HDR_FIELD,
                                  ((&hdr->segments_left) -
                                   skb_network_header(skb)));
@@ -393,8 +393,8 @@ looped_back:
        if (skb_cloned(skb)) {
                /* the copy is a forwarded packet */
                if (pskb_expand_head(skb, 0, 0, GFP_ATOMIC)) {
-                       IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                                        IPSTATS_MIB_OUTDISCARDS);
+                       __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                                       IPSTATS_MIB_OUTDISCARDS);
                        kfree_skb(skb);
                        return -1;
                }
@@ -416,14 +416,14 @@ looped_back:
                if (xfrm6_input_addr(skb, (xfrm_address_t *)addr,
                                     (xfrm_address_t *)&ipv6_hdr(skb)->saddr,
                                     IPPROTO_ROUTING) < 0) {
-                       IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                                        IPSTATS_MIB_INADDRERRORS);
+                       __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                                       IPSTATS_MIB_INADDRERRORS);
                        kfree_skb(skb);
                        return -1;
                }
                if (!ipv6_chk_home_addr(dev_net(skb_dst(skb)->dev), addr)) {
-                       IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                                        IPSTATS_MIB_INADDRERRORS);
+                       __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                                       IPSTATS_MIB_INADDRERRORS);
                        kfree_skb(skb);
                        return -1;
                }
@@ -434,8 +434,8 @@ looped_back:
        }
 
        if (ipv6_addr_is_multicast(addr)) {
-               IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                                IPSTATS_MIB_INADDRERRORS);
+               __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                               IPSTATS_MIB_INADDRERRORS);
                kfree_skb(skb);
                return -1;
        }
@@ -454,8 +454,8 @@ looped_back:
 
        if (skb_dst(skb)->dev->flags&IFF_LOOPBACK) {
                if (ipv6_hdr(skb)->hop_limit <= 1) {
-                       IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                                        IPSTATS_MIB_INHDRERRORS);
+                       __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                                       IPSTATS_MIB_INHDRERRORS);
                        icmpv6_send(skb, ICMPV6_TIME_EXCEED, ICMPV6_EXC_HOPLIMIT,
                                    0);
                        kfree_skb(skb);
@@ -470,7 +470,7 @@ looped_back:
        return -1;
 
 unknown_rh:
-       IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_INHDRERRORS);
+       __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_INHDRERRORS);
        icmpv6_param_prob(skb, ICMPV6_HDR_FIELD,
                          (&hdr->type) - skb_network_header(skb));
        return -1;
@@ -568,28 +568,28 @@ static bool ipv6_hop_jumbo(struct sk_buff *skb, int optoff)
        if (nh[optoff + 1] != 4 || (optoff & 3) != 2) {
                net_dbg_ratelimited("ipv6_hop_jumbo: wrong jumbo opt length/alignment %d\n",
                                    nh[optoff+1]);
-               IP6_INC_STATS_BH(net, ipv6_skb_idev(skb),
-                                IPSTATS_MIB_INHDRERRORS);
+               __IP6_INC_STATS(net, ipv6_skb_idev(skb),
+                               IPSTATS_MIB_INHDRERRORS);
                goto drop;
        }
 
        pkt_len = ntohl(*(__be32 *)(nh + optoff + 2));
        if (pkt_len <= IPV6_MAXPLEN) {
-               IP6_INC_STATS_BH(net, ipv6_skb_idev(skb),
-                                IPSTATS_MIB_INHDRERRORS);
+               __IP6_INC_STATS(net, ipv6_skb_idev(skb),
+                               IPSTATS_MIB_INHDRERRORS);
                icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, optoff+2);
                return false;
        }
        if (ipv6_hdr(skb)->payload_len) {
-               IP6_INC_STATS_BH(net, ipv6_skb_idev(skb),
-                                IPSTATS_MIB_INHDRERRORS);
+               __IP6_INC_STATS(net, ipv6_skb_idev(skb),
+                               IPSTATS_MIB_INHDRERRORS);
                icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, optoff);
                return false;
        }
 
        if (pkt_len > skb->len - sizeof(struct ipv6hdr)) {
-               IP6_INC_STATS_BH(net, ipv6_skb_idev(skb),
-                                IPSTATS_MIB_INTRUNCATEDPKTS);
+               __IP6_INC_STATS(net, ipv6_skb_idev(skb),
+                               IPSTATS_MIB_INTRUNCATEDPKTS);
                goto drop;
        }
 
diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
index 6b573eb..9554b99 100644
@@ -401,10 +401,10 @@ static void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info)
        struct flowi6 fl6;
        struct icmpv6_msg msg;
        struct sockcm_cookie sockc_unused = {0};
+       struct ipcm6_cookie ipc6;
        int iif = 0;
        int addr_type = 0;
        int len;
-       int hlimit;
        int err = 0;
        u32 mark = IP6_REPLY_MARK(net, skb->mark);
 
@@ -507,7 +507,10 @@ static void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info)
        if (IS_ERR(dst))
                goto out;
 
-       hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
+       ipc6.hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
+       ipc6.tclass = np->tclass;
+       ipc6.dontfrag = np->dontfrag;
+       ipc6.opt = NULL;
 
        msg.skb = skb;
        msg.offset = skb_network_offset(skb);
@@ -526,9 +529,9 @@ static void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info)
 
        err = ip6_append_data(sk, icmpv6_getfrag, &msg,
                              len + sizeof(struct icmp6hdr),
-                             sizeof(struct icmp6hdr), hlimit,
-                             np->tclass, NULL, &fl6, (struct rt6_info *)dst,
-                             MSG_DONTWAIT, np->dontfrag, &sockc_unused);
+                             sizeof(struct icmp6hdr),
+                             &ipc6, &fl6, (struct rt6_info *)dst,
+                             MSG_DONTWAIT, &sockc_unused);
        if (err) {
                ICMP6_INC_STATS(net, idev, ICMP6_MIB_OUTERRORS);
                ip6_flush_pending_frames(sk);
@@ -563,9 +566,8 @@ static void icmpv6_echo_reply(struct sk_buff *skb)
        struct flowi6 fl6;
        struct icmpv6_msg msg;
        struct dst_entry *dst;
+       struct ipcm6_cookie ipc6;
        int err = 0;
-       int hlimit;
-       u8 tclass;
        u32 mark = IP6_REPLY_MARK(net, skb->mark);
        struct sockcm_cookie sockc_unused = {0};
 
@@ -607,22 +609,24 @@ static void icmpv6_echo_reply(struct sk_buff *skb)
        if (IS_ERR(dst))
                goto out;
 
-       hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
-
        idev = __in6_dev_get(skb->dev);
 
        msg.skb = skb;
        msg.offset = 0;
        msg.type = ICMPV6_ECHO_REPLY;
 
-       tclass = ipv6_get_dsfield(ipv6_hdr(skb));
+       ipc6.hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
+       ipc6.tclass = ipv6_get_dsfield(ipv6_hdr(skb));
+       ipc6.dontfrag = np->dontfrag;
+       ipc6.opt = NULL;
+
        err = ip6_append_data(sk, icmpv6_getfrag, &msg, skb->len + sizeof(struct icmp6hdr),
-                               sizeof(struct icmp6hdr), hlimit, tclass, NULL, &fl6,
+                               sizeof(struct icmp6hdr), &ipc6, &fl6,
                                (struct rt6_info *)dst, MSG_DONTWAIT,
-                               np->dontfrag, &sockc_unused);
+                               &sockc_unused);
 
        if (err) {
-               ICMP6_INC_STATS_BH(net, idev, ICMP6_MIB_OUTERRORS);
+               __ICMP6_INC_STATS(net, idev, ICMP6_MIB_OUTERRORS);
                ip6_flush_pending_frames(sk);
        } else {
                err = icmpv6_push_pending_frames(sk, &fl6, &tmp_hdr,
@@ -674,7 +678,7 @@ void icmpv6_notify(struct sk_buff *skb, u8 type, u8 code, __be32 info)
        return;
 
 out:
-       ICMP6_INC_STATS_BH(net, __in6_dev_get(skb->dev), ICMP6_MIB_INERRORS);
+       __ICMP6_INC_STATS(net, __in6_dev_get(skb->dev), ICMP6_MIB_INERRORS);
 }
 
 /*
@@ -710,7 +714,7 @@ static int icmpv6_rcv(struct sk_buff *skb)
                skb_set_network_header(skb, nh);
        }
 
-       ICMP6_INC_STATS_BH(dev_net(dev), idev, ICMP6_MIB_INMSGS);
+       __ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_INMSGS);
 
        saddr = &ipv6_hdr(skb)->saddr;
        daddr = &ipv6_hdr(skb)->daddr;
@@ -728,7 +732,7 @@ static int icmpv6_rcv(struct sk_buff *skb)
 
        type = hdr->icmp6_type;
 
-       ICMP6MSGIN_INC_STATS_BH(dev_net(dev), idev, type);
+       ICMP6MSGIN_INC_STATS(dev_net(dev), idev, type);
 
        switch (type) {
        case ICMPV6_ECHO_REQUEST:
@@ -812,9 +816,9 @@ static int icmpv6_rcv(struct sk_buff *skb)
        return 0;
 
 csum_error:
-       ICMP6_INC_STATS_BH(dev_net(dev), idev, ICMP6_MIB_CSUMERRORS);
+       __ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_CSUMERRORS);
 discard_it:
-       ICMP6_INC_STATS_BH(dev_net(dev), idev, ICMP6_MIB_INERRORS);
+       __ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_INERRORS);
 drop_no_count:
        kfree_skb(skb);
        return 0;
diff --git a/net/ipv6/ila/ila.h b/net/ipv6/ila/ila.h
index 28542cb..d08fd2d 100644
 #include <net/protocol.h>
 #include <uapi/linux/ila.h>
 
+struct ila_locator {
+       union {
+               __u8            v8[8];
+               __be16          v16[4];
+               __be32          v32[2];
+               __be64          v64;
+       };
+};
+
+struct ila_identifier {
+       union {
+               struct {
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+                       u8 __space:4;
+                       u8 csum_neutral:1;
+                       u8 type:3;
+#elif defined(__BIG_ENDIAN_BITFIELD)
+                       u8 type:3;
+                       u8 csum_neutral:1;
+                       u8 __space:4;
+#else
+#error  "Adjust your <asm/byteorder.h> defines"
+#endif
+                       u8 __space2[7];
+               };
+               __u8            v8[8];
+               __be16          v16[4];
+               __be32          v32[2];
+               __be64          v64;
+       };
+};
+
+enum {
+       ILA_ATYPE_IID = 0,
+       ILA_ATYPE_LUID,
+       ILA_ATYPE_VIRT_V4,
+       ILA_ATYPE_VIRT_UNI_V6,
+       ILA_ATYPE_VIRT_MULTI_V6,
+       ILA_ATYPE_RSVD_1,
+       ILA_ATYPE_RSVD_2,
+       ILA_ATYPE_RSVD_3,
+};
+
+#define CSUM_NEUTRAL_FLAG      htonl(0x10000000)
+
+struct ila_addr {
+       union {
+               struct in6_addr addr;
+               struct {
+                       struct ila_locator loc;
+                       struct ila_identifier ident;
+               };
+       };
+};
+
+static inline struct ila_addr *ila_a2i(struct in6_addr *addr)
+{
+       return (struct ila_addr *)addr;
+}
+
+static inline bool ila_addr_is_ila(struct ila_addr *iaddr)
+{
+       return (iaddr->ident.type != ILA_ATYPE_IID);
+}
+
 struct ila_params {
-       __be64 locator;
-       __be64 locator_match;
+       struct ila_locator locator;
+       struct ila_locator locator_match;
        __wsum csum_diff;
+       u8 csum_mode;
 };
 
 static inline __wsum compute_csum_diff8(const __be32 *from, const __be32 *to)
@@ -38,7 +104,14 @@ static inline __wsum compute_csum_diff8(const __be32 *from, const __be32 *to)
        return csum_partial(diff, sizeof(diff), 0);
 }
 
-void update_ipv6_locator(struct sk_buff *skb, struct ila_params *p);
+static inline bool ila_csum_neutral_set(struct ila_identifier ident)
+{
+       return !!(ident.csum_neutral);
+}
+
+void ila_update_ipv6_locator(struct sk_buff *skb, struct ila_params *p);
+
+void ila_init_saved_csum(struct ila_params *p);
 
 int ila_lwt_init(void);
 void ila_lwt_fini(void);
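
The new types overlay a 128-bit IPv6 address as an 8-byte locator followed
by an 8-byte identifier whose top three bits carry the address type and
whose next bit is the checksum-neutral C-bit. A minimal usage sketch under
these definitions (fragment only, not complete code):

	struct in6_addr *daddr = &ipv6_hdr(skb)->daddr;
	struct ila_addr *iaddr = ila_a2i(daddr);	/* reinterpret in place */

	if (ila_addr_is_ila(iaddr)) {
		/* iaddr->loc is the 64-bit locator to rewrite;
		 * iaddr->ident.csum_neutral is the C-bit.
		 */
	}
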
diff --git a/net/ipv6/ila/ila_common.c b/net/ipv6/ila/ila_common.c
index 3061305..0e94042 100644
 
 static __wsum get_csum_diff(struct ipv6hdr *ip6h, struct ila_params *p)
 {
-       if (*(__be64 *)&ip6h->daddr == p->locator_match)
+       struct ila_addr *iaddr = ila_a2i(&ip6h->daddr);
+
+       if (p->locator_match.v64)
                return p->csum_diff;
        else
-               return compute_csum_diff8((__be32 *)&ip6h->daddr,
+               return compute_csum_diff8((__be32 *)&iaddr->loc,
+                                         (__be32 *)&p->locator);
+}
+
+static void ila_csum_do_neutral(struct ila_addr *iaddr,
+                               struct ila_params *p)
+{
+       __sum16 *adjust = (__force __sum16 *)&iaddr->ident.v16[3];
+       __wsum diff, fval;
+
+       /* Check if checksum adjust value has been cached */
+       if (p->locator_match.v64) {
+               diff = p->csum_diff;
+       } else {
+               diff = compute_csum_diff8((__be32 *)iaddr,
                                          (__be32 *)&p->locator);
+       }
+
+       fval = (__force __wsum)(ila_csum_neutral_set(iaddr->ident) ?
+                       ~CSUM_NEUTRAL_FLAG : CSUM_NEUTRAL_FLAG);
+
+       diff = csum_add(diff, fval);
+
+       *adjust = ~csum_fold(csum_add(diff, csum_unfold(*adjust)));
+
+       /* Flip the csum-neutral bit. Either we are doing a SIR->ILA
+        * translation with ILA_CSUM_NEUTRAL_MAP as the csum_mode
+        * and the C-bit is not set, or we are doing an ILA->SIR
+        * translation and the C-bit is set.
+        */
+       iaddr->ident.csum_neutral ^= 1;
 }
 
-void update_ipv6_locator(struct sk_buff *skb, struct ila_params *p)
+static void ila_csum_adjust_transport(struct sk_buff *skb,
+                                     struct ila_params *p)
 {
        __wsum diff;
        struct ipv6hdr *ip6h = ipv6_hdr(skb);
+       struct ila_addr *iaddr = ila_a2i(&ip6h->daddr);
        size_t nhoff = sizeof(struct ipv6hdr);
 
-       /* First update checksum */
        switch (ip6h->nexthdr) {
        case NEXTHDR_TCP:
                if (likely(pskb_may_pull(skb, nhoff + sizeof(struct tcphdr)))) {
@@ -68,7 +100,46 @@ void update_ipv6_locator(struct sk_buff *skb, struct ila_params *p)
        }
 
        /* Now change destination address */
-       *(__be64 *)&ip6h->daddr = p->locator;
+       iaddr->loc = p->locator;
+}
+
+void ila_update_ipv6_locator(struct sk_buff *skb, struct ila_params *p)
+{
+       struct ipv6hdr *ip6h = ipv6_hdr(skb);
+       struct ila_addr *iaddr = ila_a2i(&ip6h->daddr);
+
+       /* First deal with the transport checksum */
+       if (ila_csum_neutral_set(iaddr->ident)) {
+               /* C-bit is set in the identifier, indicating that
+                * this is an ILA address being translated back to a
+                * SIR address. Perform (receiver) checksum-neutral
+                * translation.
+                */
+               ila_csum_do_neutral(iaddr, p);
+       } else {
+               switch (p->csum_mode) {
+               case ILA_CSUM_ADJUST_TRANSPORT:
+                       ila_csum_adjust_transport(skb, p);
+                       break;
+               case ILA_CSUM_NEUTRAL_MAP:
+                       ila_csum_do_neutral(iaddr, p);
+                       break;
+               case ILA_CSUM_NO_ACTION:
+                       break;
+               }
+       }
+
+       /* Now change destination address */
+       iaddr->loc = p->locator;
+}
+
+void ila_init_saved_csum(struct ila_params *p)
+{
+       if (!p->locator_match.v64)
+               return;
+
+       p->csum_diff = compute_csum_diff8(
+                               (__be32 *)&p->locator_match,
+                               (__be32 *)&p->locator);
 }
 
 static int __init ila_init(void)
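
All of the checksum handling above is incremental one's-complement
arithmetic: rewriting the eight locator bytes shifts any transport
checksum covering them by a constant delta, which is why the delta can be
precomputed once (ila_init_saved_csum) and cached in csum_diff. A
standalone userspace illustration of the arithmetic in RFC 1624 style
(a sketch, not the kernel's csum helpers):

	#include <stdint.h>
	#include <stdio.h>

	static uint16_t fold(uint32_t sum)
	{
		while (sum >> 16)
			sum = (sum & 0xffff) + (sum >> 16);
		return (uint16_t)sum;
	}

	/* One's-complement sum of an even-length buffer (RFC 1071). */
	static uint16_t ones_sum(const uint8_t *buf, size_t len)
	{
		uint32_t sum = 0;
		size_t i;

		for (i = 0; i + 1 < len; i += 2)
			sum += ((uint32_t)buf[i] << 8) | buf[i + 1];
		return fold(sum);
	}

	/* RFC 1624 incremental update: HC' = ~(~HC + ~m + m'),
	 * where m is the old locator and m' the new one.
	 */
	static uint16_t csum_rewrite_locator(uint16_t old_csum,
					     const uint8_t old_loc[8],
					     const uint8_t new_loc[8])
	{
		uint32_t sum = (uint16_t)~old_csum;

		sum += (uint16_t)~ones_sum(old_loc, 8);	/* ~m  */
		sum += ones_sum(new_loc, 8);		/* +m' */
		return (uint16_t)~fold(sum);
	}

	int main(void)
	{
		/* Hypothetical SIR prefix and ILA locator, for illustration. */
		const uint8_t sir[8] = { 0x20, 0x01, 0x0d, 0xb8, 0, 0, 0, 1 };
		const uint8_t loc[8] = { 0x20, 0x01, 0x0d, 0xb8, 0, 0, 0, 2 };

		printf("adjusted csum: 0x%04x\n",
		       csum_rewrite_locator(0x1234, sir, loc));
		return 0;
	}
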
diff --git a/net/ipv6/ila/ila_lwt.c b/net/ipv6/ila/ila_lwt.c
index 2ae3c4f..17038e1 100644
@@ -26,7 +26,7 @@ static int ila_output(struct net *net, struct sock *sk, struct sk_buff *skb)
        if (skb->protocol != htons(ETH_P_IPV6))
                goto drop;
 
-       update_ipv6_locator(skb, ila_params_lwtunnel(dst->lwtstate));
+       ila_update_ipv6_locator(skb, ila_params_lwtunnel(dst->lwtstate));
 
        return dst->lwtstate->orig_output(net, sk, skb);
 
@@ -42,7 +42,7 @@ static int ila_input(struct sk_buff *skb)
        if (skb->protocol != htons(ETH_P_IPV6))
                goto drop;
 
-       update_ipv6_locator(skb, ila_params_lwtunnel(dst->lwtstate));
+       ila_update_ipv6_locator(skb, ila_params_lwtunnel(dst->lwtstate));
 
        return dst->lwtstate->orig_input(skb);
 
@@ -53,6 +53,7 @@ drop:
 
 static struct nla_policy ila_nl_policy[ILA_ATTR_MAX + 1] = {
        [ILA_ATTR_LOCATOR] = { .type = NLA_U64, },
+       [ILA_ATTR_CSUM_MODE] = { .type = NLA_U8, },
 };
 
 static int ila_build_state(struct net_device *dev, struct nlattr *nla,
@@ -64,11 +65,28 @@ static int ila_build_state(struct net_device *dev, struct nlattr *nla,
        size_t encap_len = sizeof(*p);
        struct lwtunnel_state *newts;
        const struct fib6_config *cfg6 = cfg;
+       struct ila_addr *iaddr;
        int ret;
 
        if (family != AF_INET6)
                return -EINVAL;
 
+       if (cfg6->fc_dst_len < 8 * sizeof(struct ila_locator) + 3) {
+               /* Need to have full locator and at least type field
+                * included in destination (fc_dst_len is in bits)
+                */
+                */
+               return -EINVAL;
+       }
+
+       iaddr = (struct ila_addr *)&cfg6->fc_dst;
+
+       if (!ila_addr_is_ila(iaddr) || ila_csum_neutral_set(iaddr->ident)) {
+               /* Don't allow translation for a non-ILA address or for an
+                * address that already has the checksum-neutral flag set.
+                */
+               return -EINVAL;
+       }
+
        ret = nla_parse_nested(tb, ILA_ATTR_MAX, nla,
                               ila_nl_policy);
        if (ret < 0)
@@ -84,16 +102,19 @@ static int ila_build_state(struct net_device *dev, struct nlattr *nla,
        newts->len = encap_len;
        p = ila_params_lwtunnel(newts);
 
-       p->locator = (__force __be64)nla_get_u64(tb[ILA_ATTR_LOCATOR]);
+       p->locator.v64 = (__force __be64)nla_get_u64(tb[ILA_ATTR_LOCATOR]);
 
-       if (cfg6->fc_dst_len > sizeof(__be64)) {
-               /* Precompute checksum difference for translation since we
-                * know both the old locator and the new one.
-                */
-               p->locator_match = *(__be64 *)&cfg6->fc_dst;
-               p->csum_diff = compute_csum_diff8(
-                       (__be32 *)&p->locator_match, (__be32 *)&p->locator);
-       }
+       /* Precompute checksum difference for translation since we
+        * know both the old locator and the new one.
+        */
+       p->locator_match = iaddr->loc;
+       p->csum_diff = compute_csum_diff8(
+               (__be32 *)&p->locator_match, (__be32 *)&p->locator);
+
+       if (tb[ILA_ATTR_CSUM_MODE])
+               p->csum_mode = nla_get_u8(tb[ILA_ATTR_CSUM_MODE]);
+
+       ila_init_saved_csum(p);
 
        newts->type = LWTUNNEL_ENCAP_ILA;
        newts->flags |= LWTUNNEL_STATE_OUTPUT_REDIRECT |
@@ -109,7 +130,10 @@ static int ila_fill_encap_info(struct sk_buff *skb,
 {
        struct ila_params *p = ila_params_lwtunnel(lwtstate);
 
-       if (nla_put_u64(skb, ILA_ATTR_LOCATOR, (__force u64)p->locator))
+       if (nla_put_u64_64bit(skb, ILA_ATTR_LOCATOR, (__force u64)p->locator.v64,
+                             ILA_ATTR_PAD))
+               goto nla_put_failure;
+       if (nla_put_u8(skb, ILA_ATTR_CSUM_MODE, (__force u8)p->csum_mode))
                goto nla_put_failure;
 
        return 0;
@@ -120,8 +144,7 @@ nla_put_failure:
 
 static int ila_encap_nlsize(struct lwtunnel_state *lwtstate)
 {
-       /* No encapsulation overhead */
-       return 0;
+       return nla_total_size_64bit(sizeof(u64)) + /* ILA_ATTR_LOCATOR */
+              nla_total_size(sizeof(u8));         /* ILA_ATTR_CSUM_MODE */
 }
 
 static int ila_encap_cmp(struct lwtunnel_state *a, struct lwtunnel_state *b)
@@ -129,7 +152,7 @@ static int ila_encap_cmp(struct lwtunnel_state *a, struct lwtunnel_state *b)
        struct ila_params *a_p = ila_params_lwtunnel(a);
        struct ila_params *b_p = ila_params_lwtunnel(b);
 
-       return (a_p->locator != b_p->locator);
+       return (a_p->locator.v64 != b_p->locator.v64);
 }
 
 static const struct lwtunnel_encap_ops ila_encap_ops = {
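
With the csum-mode attribute wired into the encap state, a route using ILA
translation would be configured along these lines (illustrative iproute2
invocation; the exact keywords depend on the iproute2 version, so treat
this purely as a sketch):

	ip -6 route add 2001:db8:2::/64 \
		encap ila 2001:db8:9:0 csum-mode neutral-map dev eth0
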
diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
index 0b03533..a90e572 100644
 
 struct ila_xlat_params {
        struct ila_params ip;
-       __be64 identifier;
        int ifindex;
-       unsigned int dir;
 };
 
 struct ila_map {
-       struct ila_xlat_params p;
+       struct ila_xlat_params xp;
        struct rhash_head node;
        struct ila_map __rcu *next;
        struct rcu_head rcu;
@@ -66,31 +64,29 @@ static __always_inline void __ila_hash_secret_init(void)
        net_get_random_once(&hashrnd, sizeof(hashrnd));
 }
 
-static inline u32 ila_identifier_hash(__be64 identifier)
+static inline u32 ila_locator_hash(struct ila_locator loc)
 {
-       u32 *v = (u32 *)&identifier;
+       u32 *v = (u32 *)loc.v32;
 
        return jhash_2words(v[0], v[1], hashrnd);
 }
 
-static inline spinlock_t *ila_get_lock(struct ila_net *ilan, __be64 identifier)
+static inline spinlock_t *ila_get_lock(struct ila_net *ilan,
+                                      struct ila_locator loc)
 {
-       return &ilan->locks[ila_identifier_hash(identifier) & ilan->locks_mask];
+       return &ilan->locks[ila_locator_hash(loc) & ilan->locks_mask];
 }
 
-static inline int ila_cmp_wildcards(struct ila_map *ila, __be64 loc,
-                                   int ifindex, unsigned int dir)
+static inline int ila_cmp_wildcards(struct ila_map *ila,
+                                   struct ila_addr *iaddr, int ifindex)
 {
-       return (ila->p.ip.locator_match && ila->p.ip.locator_match != loc) ||
-              (ila->p.ifindex && ila->p.ifindex != ifindex) ||
-              !(ila->p.dir & dir);
+       return (ila->xp.ifindex && ila->xp.ifindex != ifindex);
 }
 
-static inline int ila_cmp_params(struct ila_map *ila, struct ila_xlat_params *p)
+static inline int ila_cmp_params(struct ila_map *ila,
+                                struct ila_xlat_params *xp)
 {
-       return (ila->p.ip.locator_match != p->ip.locator_match) ||
-              (ila->p.ifindex != p->ifindex) ||
-              (ila->p.dir != p->dir);
+       return (ila->xp.ifindex != xp->ifindex);
 }
 
 static int ila_cmpfn(struct rhashtable_compare_arg *arg,
@@ -98,17 +94,14 @@ static int ila_cmpfn(struct rhashtable_compare_arg *arg,
 {
        const struct ila_map *ila = obj;
 
-       return (ila->p.identifier != *(__be64 *)arg->key);
+       return (ila->xp.ip.locator_match.v64 != *(__be64 *)arg->key);
 }
 
 static inline int ila_order(struct ila_map *ila)
 {
        int score = 0;
 
-       if (ila->p.ip.locator_match)
-               score += 1 << 0;
-
-       if (ila->p.ifindex)
+       if (ila->xp.ifindex)
                score += 1 << 1;
 
        return score;
@@ -117,7 +110,7 @@ static inline int ila_order(struct ila_map *ila)
 static const struct rhashtable_params rht_params = {
        .nelem_hint = 1024,
        .head_offset = offsetof(struct ila_map, node),
-       .key_offset = offsetof(struct ila_map, p.identifier),
+       .key_offset = offsetof(struct ila_map, xp.ip.locator_match),
        .key_len = sizeof(u64), /* locator */
        .max_size = 1048576,
        .min_size = 256,
@@ -136,50 +129,45 @@ static struct genl_family ila_nl_family = {
 };
 
 static struct nla_policy ila_nl_policy[ILA_ATTR_MAX + 1] = {
-       [ILA_ATTR_IDENTIFIER] = { .type = NLA_U64, },
        [ILA_ATTR_LOCATOR] = { .type = NLA_U64, },
        [ILA_ATTR_LOCATOR_MATCH] = { .type = NLA_U64, },
        [ILA_ATTR_IFINDEX] = { .type = NLA_U32, },
-       [ILA_ATTR_DIR] = { .type = NLA_U32, },
+       [ILA_ATTR_CSUM_MODE] = { .type = NLA_U8, },
 };
 
 static int parse_nl_config(struct genl_info *info,
-                          struct ila_xlat_params *p)
+                          struct ila_xlat_params *xp)
 {
-       memset(p, 0, sizeof(*p));
-
-       if (info->attrs[ILA_ATTR_IDENTIFIER])
-               p->identifier = (__force __be64)nla_get_u64(
-                       info->attrs[ILA_ATTR_IDENTIFIER]);
+       memset(xp, 0, sizeof(*xp));
 
        if (info->attrs[ILA_ATTR_LOCATOR])
-               p->ip.locator = (__force __be64)nla_get_u64(
+               xp->ip.locator.v64 = (__force __be64)nla_get_u64(
                        info->attrs[ILA_ATTR_LOCATOR]);
 
        if (info->attrs[ILA_ATTR_LOCATOR_MATCH])
-               p->ip.locator_match = (__force __be64)nla_get_u64(
+               xp->ip.locator_match.v64 = (__force __be64)nla_get_u64(
                        info->attrs[ILA_ATTR_LOCATOR_MATCH]);
 
-       if (info->attrs[ILA_ATTR_IFINDEX])
-               p->ifindex = nla_get_s32(info->attrs[ILA_ATTR_IFINDEX]);
+       if (info->attrs[ILA_ATTR_CSUM_MODE])
+               xp->ip.csum_mode = nla_get_u8(info->attrs[ILA_ATTR_CSUM_MODE]);
 
-       if (info->attrs[ILA_ATTR_DIR])
-               p->dir = nla_get_u32(info->attrs[ILA_ATTR_DIR]);
+       if (info->attrs[ILA_ATTR_IFINDEX])
+               xp->ifindex = nla_get_s32(info->attrs[ILA_ATTR_IFINDEX]);
 
        return 0;
 }
 
 /* Must be called with rcu readlock */
-static inline struct ila_map *ila_lookup_wildcards(__be64 id, __be64 loc,
+static inline struct ila_map *ila_lookup_wildcards(struct ila_addr *iaddr,
                                                   int ifindex,
-                                                  unsigned int dir,
                                                   struct ila_net *ilan)
 {
        struct ila_map *ila;
 
-       ila = rhashtable_lookup_fast(&ilan->rhash_table, &id, rht_params);
+       ila = rhashtable_lookup_fast(&ilan->rhash_table, &iaddr->loc,
+                                    rht_params);
        while (ila) {
-               if (!ila_cmp_wildcards(ila, loc, ifindex, dir))
+               if (!ila_cmp_wildcards(ila, iaddr, ifindex))
                        return ila;
                ila = rcu_access_pointer(ila->next);
        }
@@ -188,15 +176,16 @@ static inline struct ila_map *ila_lookup_wildcards(__be64 id, __be64 loc,
 }
 
 /* Must be called with rcu readlock */
-static inline struct ila_map *ila_lookup_by_params(struct ila_xlat_params *p,
+static inline struct ila_map *ila_lookup_by_params(struct ila_xlat_params *xp,
                                                   struct ila_net *ilan)
 {
        struct ila_map *ila;
 
-       ila = rhashtable_lookup_fast(&ilan->rhash_table, &p->identifier,
+       ila = rhashtable_lookup_fast(&ilan->rhash_table,
+                                    &xp->ip.locator_match,
                                     rht_params);
        while (ila) {
-               if (!ila_cmp_params(ila, p))
+               if (!ila_cmp_params(ila, xp))
                        return ila;
                ila = rcu_access_pointer(ila->next);
        }
@@ -221,14 +210,14 @@ static void ila_free_cb(void *ptr, void *arg)
        }
 }
 
-static int ila_xlat_addr(struct sk_buff *skb, int dir);
+static int ila_xlat_addr(struct sk_buff *skb);
 
 static unsigned int
 ila_nf_input(void *priv,
             struct sk_buff *skb,
             const struct nf_hook_state *state)
 {
-       ila_xlat_addr(skb, ILA_DIR_IN);
+       ila_xlat_addr(skb);
        return NF_ACCEPT;
 }
 
@@ -241,11 +230,11 @@ static struct nf_hook_ops ila_nf_hook_ops[] __read_mostly = {
        },
 };
 
-static int ila_add_mapping(struct net *net, struct ila_xlat_params *p)
+static int ila_add_mapping(struct net *net, struct ila_xlat_params *xp)
 {
        struct ila_net *ilan = net_generic(net, ila_net_id);
        struct ila_map *ila, *head;
-       spinlock_t *lock = ila_get_lock(ilan, p->identifier);
+       spinlock_t *lock = ila_get_lock(ilan, xp->ip.locator_match);
        int err = 0, order;
 
        if (!ilan->hooks_registered) {
@@ -264,22 +253,16 @@ static int ila_add_mapping(struct net *net, struct ila_xlat_params *p)
        if (!ila)
                return -ENOMEM;
 
-       ila->p = *p;
+       ila_init_saved_csum(&xp->ip);
 
-       if (p->ip.locator_match) {
-               /* Precompute checksum difference for translation since we
-                * know both the old identifier and the new one.
-                */
-               ila->p.ip.csum_diff = compute_csum_diff8(
-                       (__be32 *)&p->ip.locator_match,
-                       (__be32 *)&p->ip.locator);
-       }
+       ila->xp = *xp;
 
        order = ila_order(ila);
 
        spin_lock(lock);
 
-       head = rhashtable_lookup_fast(&ilan->rhash_table, &p->identifier,
+       head = rhashtable_lookup_fast(&ilan->rhash_table,
+                                     &xp->ip.locator_match,
                                      rht_params);
        if (!head) {
                /* New entry for the rhash_table */
@@ -289,7 +272,7 @@ static int ila_add_mapping(struct net *net, struct ila_xlat_params *p)
                struct ila_map *tila = head, *prev = NULL;
 
                do {
-                       if (!ila_cmp_params(tila, p)) {
+                       if (!ila_cmp_params(tila, xp)) {
                                err = -EEXIST;
                                goto out;
                        }
@@ -326,23 +309,23 @@ out:
        return err;
 }
 
-static int ila_del_mapping(struct net *net, struct ila_xlat_params *p)
+static int ila_del_mapping(struct net *net, struct ila_xlat_params *xp)
 {
        struct ila_net *ilan = net_generic(net, ila_net_id);
        struct ila_map *ila, *head, *prev;
-       spinlock_t *lock = ila_get_lock(ilan, p->identifier);
+       spinlock_t *lock = ila_get_lock(ilan, xp->ip.locator_match);
        int err = -ENOENT;
 
        spin_lock(lock);
 
        head = rhashtable_lookup_fast(&ilan->rhash_table,
-                                     &p->identifier, rht_params);
+                                     &xp->ip.locator_match, rht_params);
        ila = head;
 
        prev = NULL;
 
        while (ila) {
-               if (ila_cmp_params(ila, p)) {
+               if (ila_cmp_params(ila, xp)) {
                        prev = ila;
                        ila = rcu_dereference_protected(ila->next,
                                                        lockdep_is_held(lock));
@@ -404,28 +387,28 @@ static int ila_nl_cmd_add_mapping(struct sk_buff *skb, struct genl_info *info)
 static int ila_nl_cmd_del_mapping(struct sk_buff *skb, struct genl_info *info)
 {
        struct net *net = genl_info_net(info);
-       struct ila_xlat_params p;
+       struct ila_xlat_params xp;
        int err;
 
-       err = parse_nl_config(info, &p);
+       err = parse_nl_config(info, &xp);
        if (err)
                return err;
 
-       ila_del_mapping(net, &p);
+       ila_del_mapping(net, &xp);
 
        return 0;
 }
 
 static int ila_fill_info(struct ila_map *ila, struct sk_buff *msg)
 {
-       if (nla_put_u64(msg, ILA_ATTR_IDENTIFIER,
-                       (__force u64)ila->p.identifier) ||
-           nla_put_u64(msg, ILA_ATTR_LOCATOR,
-                       (__force u64)ila->p.ip.locator) ||
-           nla_put_u64(msg, ILA_ATTR_LOCATOR_MATCH,
-                       (__force u64)ila->p.ip.locator_match) ||
-           nla_put_s32(msg, ILA_ATTR_IFINDEX, ila->p.ifindex) ||
-           nla_put_u32(msg, ILA_ATTR_DIR, ila->p.dir))
+       if (nla_put_u64_64bit(msg, ILA_ATTR_LOCATOR,
+                             (__force u64)ila->xp.ip.locator.v64,
+                             ILA_ATTR_PAD) ||
+           nla_put_u64_64bit(msg, ILA_ATTR_LOCATOR_MATCH,
+                             (__force u64)ila->xp.ip.locator_match.v64,
+                             ILA_ATTR_PAD) ||
+           nla_put_s32(msg, ILA_ATTR_IFINDEX, ila->xp.ifindex) ||
+           nla_put_u8(msg, ILA_ATTR_CSUM_MODE, ila->xp.ip.csum_mode))
                return -1;
 
        return 0;
@@ -457,11 +440,11 @@ static int ila_nl_cmd_get_mapping(struct sk_buff *skb, struct genl_info *info)
        struct net *net = genl_info_net(info);
        struct ila_net *ilan = net_generic(net, ila_net_id);
        struct sk_buff *msg;
-       struct ila_xlat_params p;
+       struct ila_xlat_params xp;
        struct ila_map *ila;
        int ret;
 
-       ret = parse_nl_config(info, &p);
+       ret = parse_nl_config(info, &xp);
        if (ret)
                return ret;
 
@@ -471,7 +454,7 @@ static int ila_nl_cmd_get_mapping(struct sk_buff *skb, struct genl_info *info)
 
        rcu_read_lock();
 
-       ila = ila_lookup_by_params(&p, ilan);
+       ila = ila_lookup_by_params(&xp, ilan);
        if (ila) {
                ret = ila_dump_info(ila,
                                    info->snd_portid,
@@ -614,45 +597,32 @@ static struct pernet_operations ila_net_ops = {
        .size = sizeof(struct ila_net),
 };
 
-static int ila_xlat_addr(struct sk_buff *skb, int dir)
+static int ila_xlat_addr(struct sk_buff *skb)
 {
        struct ila_map *ila;
        struct ipv6hdr *ip6h = ipv6_hdr(skb);
        struct net *net = dev_net(skb->dev);
        struct ila_net *ilan = net_generic(net, ila_net_id);
-       __be64 identifier, locator_match;
-       size_t nhoff;
+       struct ila_addr *iaddr = ila_a2i(&ip6h->daddr);
 
        /* Assumes skb contains a valid IPv6 header that is pulled */
 
-       identifier = *(__be64 *)&ip6h->daddr.in6_u.u6_addr8[8];
-       locator_match = *(__be64 *)&ip6h->daddr.in6_u.u6_addr8[0];
-       nhoff = sizeof(struct ipv6hdr);
+       if (!ila_addr_is_ila(iaddr)) {
+               /* Type indicates this is not an ILA address */
+               return 0;
+       }
 
        rcu_read_lock();
 
-       ila = ila_lookup_wildcards(identifier, locator_match,
-                                  skb->dev->ifindex, dir, ilan);
+       ila = ila_lookup_wildcards(iaddr, skb->dev->ifindex, ilan);
        if (ila)
-               update_ipv6_locator(skb, &ila->p.ip);
+               ila_update_ipv6_locator(skb, &ila->xp.ip);
 
        rcu_read_unlock();
 
        return 0;
 }
 
-int ila_xlat_incoming(struct sk_buff *skb)
-{
-       return ila_xlat_addr(skb, ILA_DIR_IN);
-}
-EXPORT_SYMBOL(ila_xlat_incoming);
-
-int ila_xlat_outgoing(struct sk_buff *skb)
-{
-       return ila_xlat_addr(skb, ILA_DIR_OUT);
-}
-EXPORT_SYMBOL(ila_xlat_outgoing);
-
 int ila_xlat_init(void)
 {
        int ret;
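
Since the xlat table is now keyed and matched on the locator rather than a
separate identifier, userspace expresses a mapping purely as a locator
rewrite. An illustrative invocation of iproute2's ip-ila subcommand
(assumed syntax, shown only as a sketch):

	ip ila add loc_match 2001:db8:9:0 loc 3333:0:0:1 dev eth0
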
diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
index f167838..00cf28a 100644
@@ -222,7 +222,7 @@ static int __inet6_check_established(struct inet_timewait_death_row *death_row,
        __sk_nulls_add_node_rcu(sk, &head->chain);
        if (tw) {
                sk_nulls_del_node_init_rcu((struct sock *)tw);
-               NET_INC_STATS_BH(net, LINUX_MIB_TIMEWAITRECYCLED);
+               __NET_INC_STATS(net, LINUX_MIB_TIMEWAITRECYCLED);
        }
        spin_unlock(lock);
        sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
index ea071fa..1bcef23 100644
@@ -240,6 +240,7 @@ struct fib6_table *fib6_new_table(struct net *net, u32 id)
 
        return tb;
 }
+EXPORT_SYMBOL_GPL(fib6_new_table);
 
 struct fib6_table *fib6_get_table(struct net *net, u32 id)
 {
diff --git a/net/ipv6/ip6_flowlabel.c b/net/ipv6/ip6_flowlabel.c
index 35d3ddc..b912f0d 100644
@@ -373,7 +373,7 @@ fl_create(struct net *net, struct sock *sk, struct in6_flowlabel_req *freq,
                struct msghdr msg;
                struct flowi6 flowi6;
                struct sockcm_cookie sockc_junk;
-               int junk;
+               struct ipcm6_cookie ipc6;
 
                err = -ENOMEM;
                fl->opt = kmalloc(sizeof(*fl->opt) + olen, GFP_KERNEL);
@@ -390,8 +390,8 @@ fl_create(struct net *net, struct sock *sk, struct in6_flowlabel_req *freq,
                msg.msg_control = (void *)(fl->opt+1);
                memset(&flowi6, 0, sizeof(flowi6));
 
-               err = ip6_datagram_send_ctl(net, sk, &msg, &flowi6, fl->opt,
-                                           &junk, &junk, &junk, &sockc_junk);
+               ipc6.opt = fl->opt;
+               err = ip6_datagram_send_ctl(net, sk, &msg, &flowi6, &ipc6, &sockc_junk);
                if (err)
                        goto done;
                err = -EINVAL;
diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
index ca5a2c5..47b671a 100644
@@ -54,6 +54,7 @@
 #include <net/ip6_fib.h>
 #include <net/ip6_route.h>
 #include <net/ip6_tunnel.h>
+#include <net/gre.h>
 
 
 static bool log_ecn_error = true;
@@ -443,137 +444,41 @@ static void ip6gre_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
        t->err_time = jiffies;
 }
 
-static int ip6gre_rcv(struct sk_buff *skb)
+static int ip6gre_rcv(struct sk_buff *skb, const struct tnl_ptk_info *tpi)
 {
        const struct ipv6hdr *ipv6h;
-       u8     *h;
-       __be16    flags;
-       __sum16   csum = 0;
-       __be32 key = 0;
-       u32    seqno = 0;
        struct ip6_tnl *tunnel;
-       int    offset = 4;
-       __be16 gre_proto;
-       int err;
-
-       if (!pskb_may_pull(skb, sizeof(struct in6_addr)))
-               goto drop;
 
        ipv6h = ipv6_hdr(skb);
-       h = skb->data;
-       flags = *(__be16 *)h;
-
-       if (flags&(GRE_CSUM|GRE_KEY|GRE_ROUTING|GRE_SEQ|GRE_VERSION)) {
-               /* - Version must be 0.
-                  - We do not support routing headers.
-                */
-               if (flags&(GRE_VERSION|GRE_ROUTING))
-                       goto drop;
-
-               if (flags&GRE_CSUM) {
-                       csum = skb_checksum_simple_validate(skb);
-                       offset += 4;
-               }
-               if (flags&GRE_KEY) {
-                       key = *(__be32 *)(h + offset);
-                       offset += 4;
-               }
-               if (flags&GRE_SEQ) {
-                       seqno = ntohl(*(__be32 *)(h + offset));
-                       offset += 4;
-               }
-       }
-
-       gre_proto = *(__be16 *)(h + 2);
-
        tunnel = ip6gre_tunnel_lookup(skb->dev,
-                                         &ipv6h->saddr, &ipv6h->daddr, key,
-                                         gre_proto);
+                                     &ipv6h->saddr, &ipv6h->daddr, tpi->key,
+                                     tpi->proto);
        if (tunnel) {
-               struct pcpu_sw_netstats *tstats;
-
-               if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb))
-                       goto drop;
-
-               if (!ip6_tnl_rcv_ctl(tunnel, &ipv6h->daddr, &ipv6h->saddr)) {
-                       tunnel->dev->stats.rx_dropped++;
-                       goto drop;
-               }
-
-               skb->protocol = gre_proto;
-               /* WCCP version 1 and 2 protocol decoding.
-                * - Change protocol to IPv6
-                * - When dealing with WCCPv2, Skip extra 4 bytes in GRE header
-                */
-               if (flags == 0 && gre_proto == htons(ETH_P_WCCP)) {
-                       skb->protocol = htons(ETH_P_IPV6);
-                       if ((*(h + offset) & 0xF0) != 0x40)
-                               offset += 4;
-               }
-
-               skb->mac_header = skb->network_header;
-               __pskb_pull(skb, offset);
-               skb_postpull_rcsum(skb, skb_transport_header(skb), offset);
-
-               if (((flags&GRE_CSUM) && csum) ||
-                   (!(flags&GRE_CSUM) && tunnel->parms.i_flags&GRE_CSUM)) {
-                       tunnel->dev->stats.rx_crc_errors++;
-                       tunnel->dev->stats.rx_errors++;
-                       goto drop;
-               }
-               if (tunnel->parms.i_flags&GRE_SEQ) {
-                       if (!(flags&GRE_SEQ) ||
-                           (tunnel->i_seqno &&
-                                       (s32)(seqno - tunnel->i_seqno) < 0)) {
-                               tunnel->dev->stats.rx_fifo_errors++;
-                               tunnel->dev->stats.rx_errors++;
-                               goto drop;
-                       }
-                       tunnel->i_seqno = seqno + 1;
-               }
-
-               /* Warning: All skb pointers will be invalidated! */
-               if (tunnel->dev->type == ARPHRD_ETHER) {
-                       if (!pskb_may_pull(skb, ETH_HLEN)) {
-                               tunnel->dev->stats.rx_length_errors++;
-                               tunnel->dev->stats.rx_errors++;
-                               goto drop;
-                       }
+               ip6_tnl_rcv(tunnel, skb, tpi, NULL, false);
 
-                       ipv6h = ipv6_hdr(skb);
-                       skb->protocol = eth_type_trans(skb, tunnel->dev);
-                       skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
-               }
-
-               __skb_tunnel_rx(skb, tunnel->dev, tunnel->net);
+               return PACKET_RCVD;
+       }
 
-               skb_reset_network_header(skb);
+       return PACKET_REJECT;
+}
 
-               err = IP6_ECN_decapsulate(ipv6h, skb);
-               if (unlikely(err)) {
-                       if (log_ecn_error)
-                               net_info_ratelimited("non-ECT from %pI6 with dsfield=%#x\n",
-                                                    &ipv6h->saddr,
-                                                    ipv6_get_dsfield(ipv6h));
-                       if (err > 1) {
-                               ++tunnel->dev->stats.rx_frame_errors;
-                               ++tunnel->dev->stats.rx_errors;
-                               goto drop;
-                       }
-               }
+static int gre_rcv(struct sk_buff *skb)
+{
+       struct tnl_ptk_info tpi;
+       bool csum_err = false;
+       int hdr_len;
 
-               tstats = this_cpu_ptr(tunnel->dev->tstats);
-               u64_stats_update_begin(&tstats->syncp);
-               tstats->rx_packets++;
-               tstats->rx_bytes += skb->len;
-               u64_stats_update_end(&tstats->syncp);
+       hdr_len = gre_parse_header(skb, &tpi, &csum_err);
+       if (hdr_len < 0)
+               goto drop;
 
-               netif_rx(skb);
+       if (iptunnel_pull_header(skb, hdr_len, tpi.proto, false))
+               goto drop;
 
+       if (ip6gre_rcv(skb, &tpi) == PACKET_RCVD)
                return 0;
-       }
-       icmpv6_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_PORT_UNREACH, 0);
 
+       icmpv6_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_PORT_UNREACH, 0);
 drop:
        kfree_skb(skb);
        return 0;
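
The rewritten gre_rcv() above reduces receive to three steps: gre_parse_header() fills a struct tnl_ptk_info, iptunnel_pull_header() strips the outer bytes, and ip6gre_rcv() dispatches on the parsed fields. As a hedged illustration of the parse step only, here is a standalone sketch assuming the RFC 2784/2890 layout (flags/version word, protocol, then optional checksum, key and sequence words); gre_hdr_demo and gre_parse_demo are illustrative names, not kernel API:

    #include <stdint.h>
    #include <string.h>

    #define GRE_F_CSUM 0x8000u
    #define GRE_F_KEY  0x2000u
    #define GRE_F_SEQ  0x1000u

    struct gre_hdr_demo {            /* rough analogue of struct tnl_ptk_info */
            uint16_t flags;
            uint16_t proto;          /* inner EtherType, kept in network order */
            uint32_t key, seq;       /* meaningful only when the flag is set */
    };

    static uint32_t be32_at(const uint8_t *p)
    {
            return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                   ((uint32_t)p[2] << 8) | p[3];
    }

    /* Returns the total header length, or -1 if the buffer is too short. */
    static int gre_parse_demo(const uint8_t *p, size_t len,
                              struct gre_hdr_demo *h)
    {
            size_t off = 4;

            if (len < 4)
                    return -1;
            h->flags = ((uint16_t)p[0] << 8) | p[1];
            memcpy(&h->proto, p + 2, 2);
            if (h->flags & GRE_F_CSUM)
                    off += 4;                /* checksum + reserved word */
            if (h->flags & GRE_F_KEY) {
                    if (len < off + 4)
                            return -1;
                    h->key = be32_at(p + off);
                    off += 4;
            }
            if (h->flags & GRE_F_SEQ) {
                    if (len < off + 4)
                            return -1;
                    h->seq = be32_at(p + off);
                    off += 4;
            }
            return len < off ? -1 : (int)off;   /* caller pulls 'off' bytes */
    }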
@@ -584,199 +489,40 @@ struct ipv6_tel_txoption {
        __u8 dst_opt[8];
 };
 
-static void init_tel_txopt(struct ipv6_tel_txoption *opt, __u8 encap_limit)
+static int gre_handle_offloads(struct sk_buff *skb, bool csum)
 {
-       memset(opt, 0, sizeof(struct ipv6_tel_txoption));
-
-       opt->dst_opt[2] = IPV6_TLV_TNL_ENCAP_LIMIT;
-       opt->dst_opt[3] = 1;
-       opt->dst_opt[4] = encap_limit;
-       opt->dst_opt[5] = IPV6_TLV_PADN;
-       opt->dst_opt[6] = 1;
-
-       opt->ops.dst0opt = (struct ipv6_opt_hdr *) opt->dst_opt;
-       opt->ops.opt_nflen = 8;
+       return iptunnel_handle_offloads(skb,
+                                       csum ? SKB_GSO_GRE_CSUM : SKB_GSO_GRE);
 }
 
-static __sum16 gre6_checksum(struct sk_buff *skb)
-{
-       __wsum csum;
-
-       if (skb->ip_summed == CHECKSUM_PARTIAL)
-               csum = lco_csum(skb);
-       else
-               csum = skb_checksum(skb, sizeof(struct ipv6hdr),
-                                   skb->len - sizeof(struct ipv6hdr), 0);
-       return csum_fold(csum);
-}
-
-static netdev_tx_t ip6gre_xmit2(struct sk_buff *skb,
-                        struct net_device *dev,
-                        __u8 dsfield,
-                        struct flowi6 *fl6,
-                        int encap_limit,
-                        __u32 *pmtu)
+static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+                              struct net_device *dev, __u8 dsfield,
+                              struct flowi6 *fl6, int encap_limit,
+                              __u32 *pmtu, __be16 proto)
 {
        struct ip6_tnl *tunnel = netdev_priv(dev);
-       struct net *net = tunnel->net;
-       struct net_device *tdev;    /* Device to other host */
-       struct ipv6hdr  *ipv6h;     /* Our new IP header */
-       unsigned int min_headroom = 0; /* The extra header space needed */
-       int    gre_hlen;
-       struct ipv6_tel_txoption opt;
-       int    mtu;
-       struct dst_entry *dst = NULL, *ndst = NULL;
-       struct net_device_stats *stats = &tunnel->dev->stats;
-       int err = -1;
-       u8 proto;
-       __be16 protocol;
+       __be16 protocol = (dev->type == ARPHRD_ETHER) ?
+                         htons(ETH_P_TEB) : proto;
 
        if (dev->type == ARPHRD_ETHER)
                IPCB(skb)->flags = 0;
 
-       if (dev->header_ops && dev->type == ARPHRD_IP6GRE) {
-               gre_hlen = 0;
-               ipv6h = (struct ipv6hdr *)skb->data;
-               fl6->daddr = ipv6h->daddr;
-       } else {
-               gre_hlen = tunnel->hlen;
+       if (dev->header_ops && dev->type == ARPHRD_IP6GRE)
+               fl6->daddr = ((struct ipv6hdr *)skb->data)->daddr;
+       else
                fl6->daddr = tunnel->parms.raddr;
-       }
-
-       if (!fl6->flowi6_mark)
-               dst = dst_cache_get(&tunnel->dst_cache);
-
-       if (!dst) {
-               dst = ip6_route_output(net, NULL, fl6);
 
-               if (dst->error)
-                       goto tx_err_link_failure;
-               dst = xfrm_lookup(net, dst, flowi6_to_flowi(fl6), NULL, 0);
-               if (IS_ERR(dst)) {
-                       err = PTR_ERR(dst);
-                       dst = NULL;
-                       goto tx_err_link_failure;
-               }
-               ndst = dst;
-       }
+       if (tunnel->parms.o_flags & TUNNEL_SEQ)
+               tunnel->o_seqno++;
 
-       tdev = dst->dev;
+       /* Push GRE header. */
+       gre_build_header(skb, tunnel->tun_hlen, tunnel->parms.o_flags,
+                        protocol, tunnel->parms.o_key, htonl(tunnel->o_seqno));
 
-       if (tdev == dev) {
-               stats->collisions++;
-               net_warn_ratelimited("%s: Local routing loop detected!\n",
-                                    tunnel->parms.name);
-               goto tx_err_dst_release;
-       }
+       skb_set_inner_protocol(skb, proto);
 
-       mtu = dst_mtu(dst) - sizeof(*ipv6h);
-       if (encap_limit >= 0) {
-               min_headroom += 8;
-               mtu -= 8;
-       }
-       if (mtu < IPV6_MIN_MTU)
-               mtu = IPV6_MIN_MTU;
-       if (skb_dst(skb))
-               skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu);
-       if (skb->len > mtu && !skb_is_gso(skb)) {
-               *pmtu = mtu;
-               err = -EMSGSIZE;
-               goto tx_err_dst_release;
-       }
-
-       if (tunnel->err_count > 0) {
-               if (time_before(jiffies,
-                               tunnel->err_time + IP6TUNNEL_ERR_TIMEO)) {
-                       tunnel->err_count--;
-
-                       dst_link_failure(skb);
-               } else
-                       tunnel->err_count = 0;
-       }
-
-       skb_scrub_packet(skb, !net_eq(tunnel->net, dev_net(dev)));
-
-       min_headroom += LL_RESERVED_SPACE(tdev) + gre_hlen + dst->header_len;
-
-       if (skb_headroom(skb) < min_headroom || skb_header_cloned(skb)) {
-               int head_delta = SKB_DATA_ALIGN(min_headroom -
-                                               skb_headroom(skb) +
-                                               16);
-
-               err = pskb_expand_head(skb, max_t(int, head_delta, 0),
-                                      0, GFP_ATOMIC);
-               if (min_headroom > dev->needed_headroom)
-                       dev->needed_headroom = min_headroom;
-               if (unlikely(err))
-                       goto tx_err_dst_release;
-       }
-
-       if (!fl6->flowi6_mark && ndst)
-               dst_cache_set_ip6(&tunnel->dst_cache, ndst, &fl6->saddr);
-       skb_dst_set(skb, dst);
-
-       proto = NEXTHDR_GRE;
-       if (encap_limit >= 0) {
-               init_tel_txopt(&opt, encap_limit);
-               ipv6_push_nfrag_opts(skb, &opt.ops, &proto, NULL);
-       }
-
-       err = iptunnel_handle_offloads(skb,
-                                      (tunnel->parms.o_flags & GRE_CSUM) ?
-                                      SKB_GSO_GRE_CSUM : SKB_GSO_GRE);
-       if (err)
-               goto tx_err_dst_release;
-
-       skb_push(skb, gre_hlen);
-       skb_reset_network_header(skb);
-       skb_set_transport_header(skb, sizeof(*ipv6h));
-
-       /*
-        *      Push down and install the IP header.
-        */
-       ipv6h = ipv6_hdr(skb);
-       ip6_flow_hdr(ipv6h, INET_ECN_encapsulate(0, dsfield),
-                    ip6_make_flowlabel(net, skb, fl6->flowlabel, true, fl6));
-       ipv6h->hop_limit = tunnel->parms.hop_limit;
-       ipv6h->nexthdr = proto;
-       ipv6h->saddr = fl6->saddr;
-       ipv6h->daddr = fl6->daddr;
-
-       ((__be16 *)(ipv6h + 1))[0] = tunnel->parms.o_flags;
-       protocol = (dev->type == ARPHRD_ETHER) ?
-                   htons(ETH_P_TEB) : skb->protocol;
-       ((__be16 *)(ipv6h + 1))[1] = protocol;
-
-       if (tunnel->parms.o_flags&(GRE_KEY|GRE_CSUM|GRE_SEQ)) {
-               __be32 *ptr = (__be32 *)(((u8 *)ipv6h) + tunnel->hlen - 4);
-
-               if (tunnel->parms.o_flags&GRE_SEQ) {
-                       ++tunnel->o_seqno;
-                       *ptr = htonl(tunnel->o_seqno);
-                       ptr--;
-               }
-               if (tunnel->parms.o_flags&GRE_KEY) {
-                       *ptr = tunnel->parms.o_key;
-                       ptr--;
-               }
-               if ((tunnel->parms.o_flags & GRE_CSUM) &&
-                   !(skb_shinfo(skb)->gso_type &
-                     (SKB_GSO_GRE | SKB_GSO_GRE_CSUM))) {
-                       *ptr = 0;
-                       *(__sum16 *)ptr = gre6_checksum(skb);
-               }
-       }
-
-       skb_set_inner_protocol(skb, protocol);
-
-       ip6tunnel_xmit(NULL, skb, dev);
-       return 0;
-tx_err_link_failure:
-       stats->tx_carrier_errors++;
-       dst_link_failure(skb);
-tx_err_dst_release:
-       dst_release(dst);
-       return err;
+       return ip6_tnl_xmit(skb, dev, dsfield, fl6, encap_limit, pmtu,
+                           NEXTHDR_GRE);
 }
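
__gre6_xmit() is now a thin front end: bump o_seqno when TUNNEL_SEQ is configured, let gre_build_header() push the GRE words onto the packet, and hand the whole IPv6 encapsulation to the shared ip6_tnl_xmit(). As a hedged sketch of what that push produces, a standalone builder for the same field order (flags, protocol, then key and sequence; the checksum word is omitted here for brevity) — these names mirror the parser sketch above and are not kernel API:

    #include <stdint.h>
    #include <stddef.h>

    #define GRE_F_KEY 0x2000u
    #define GRE_F_SEQ 0x1000u

    static void put_be16(uint8_t *p, uint16_t v)
    {
            p[0] = (uint8_t)(v >> 8);
            p[1] = (uint8_t)v;
    }

    static void put_be32(uint8_t *p, uint32_t v)
    {
            put_be16(p, (uint16_t)(v >> 16));
            put_be16(p + 2, (uint16_t)v);
    }

    /* Writes the GRE header at p; returns its length in bytes. */
    static size_t gre_build_demo(uint8_t *p, uint16_t flags, uint16_t proto,
                                 uint32_t key, uint32_t seq)
    {
            size_t off = 4;

            put_be16(p, flags);
            put_be16(p + 2, proto);
            if (flags & GRE_F_KEY) {        /* key precedes seq, as on rx */
                    put_be32(p + off, key);
                    off += 4;
            }
            if (flags & GRE_F_SEQ) {
                    put_be32(p + off, seq);
                    off += 4;
            }
            return off;
    }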
 
 static inline int ip6gre_xmit_ipv4(struct sk_buff *skb, struct net_device *dev)
@@ -795,7 +541,6 @@ static inline int ip6gre_xmit_ipv4(struct sk_buff *skb, struct net_device *dev)
                encap_limit = t->parms.encap_limit;
 
        memcpy(&fl6, &t->fl.u.ip6, sizeof(fl6));
-       fl6.flowi6_proto = IPPROTO_GRE;
 
        dsfield = ipv4_get_dsfield(iph);
 
@@ -805,7 +550,12 @@ static inline int ip6gre_xmit_ipv4(struct sk_buff *skb, struct net_device *dev)
        if (t->parms.flags & IP6_TNL_F_USE_ORIG_FWMARK)
                fl6.flowi6_mark = skb->mark;
 
-       err = ip6gre_xmit2(skb, dev, dsfield, &fl6, encap_limit, &mtu);
+       err = gre_handle_offloads(skb, !!(t->parms.o_flags & TUNNEL_CSUM));
+       if (err)
+               return -1;
+
+       err = __gre6_xmit(skb, dev, dsfield, &fl6, encap_limit, &mtu,
+                         skb->protocol);
        if (err != 0) {
                /* XXX: send ICMP error even if DF is not set. */
                if (err == -EMSGSIZE)
@@ -845,7 +595,6 @@ static inline int ip6gre_xmit_ipv6(struct sk_buff *skb, struct net_device *dev)
                encap_limit = t->parms.encap_limit;
 
        memcpy(&fl6, &t->fl.u.ip6, sizeof(fl6));
-       fl6.flowi6_proto = IPPROTO_GRE;
 
        dsfield = ipv6_get_dsfield(ipv6h);
        if (t->parms.flags & IP6_TNL_F_USE_ORIG_TCLASS)
@@ -855,7 +604,11 @@ static inline int ip6gre_xmit_ipv6(struct sk_buff *skb, struct net_device *dev)
        if (t->parms.flags & IP6_TNL_F_USE_ORIG_FWMARK)
                fl6.flowi6_mark = skb->mark;
 
-       err = ip6gre_xmit2(skb, dev, dsfield, &fl6, encap_limit, &mtu);
+       if (gre_handle_offloads(skb, !!(t->parms.o_flags & TUNNEL_CSUM)))
+               return -1;
+
+       err = __gre6_xmit(skb, dev, dsfield, &fl6, encap_limit,
+                         &mtu, skb->protocol);
        if (err != 0) {
                if (err == -EMSGSIZE)
                        icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
@@ -899,7 +652,11 @@ static int ip6gre_xmit_other(struct sk_buff *skb, struct net_device *dev)
        memcpy(&fl6, &t->fl.u.ip6, sizeof(fl6));
        fl6.flowi6_proto = skb->protocol;
 
-       err = ip6gre_xmit2(skb, dev, 0, &fl6, encap_limit, &mtu);
+       err = gre_handle_offloads(skb, !!(t->parms.o_flags & TUNNEL_CSUM));
+       if (err)
+               return err;
+
+       err = __gre6_xmit(skb, dev, 0, &fl6, encap_limit, &mtu, skb->protocol);
 
        return err;
 }
@@ -1075,6 +832,8 @@ static int ip6gre_tunnel_ioctl(struct net_device *dev,
        struct net *net = t->net;
        struct ip6gre_net *ign = net_generic(net, ip6gre_net_id);
 
+       memset(&p1, 0, sizeof(p1));
+
        switch (cmd) {
        case SIOCGETTUNNEL:
                if (dev == ign->fb_tunnel_dev) {
@@ -1174,15 +933,6 @@ done:
        return err;
 }
 
-static int ip6gre_tunnel_change_mtu(struct net_device *dev, int new_mtu)
-{
-       if (new_mtu < 68 ||
-           new_mtu > 0xFFF8 - dev->hard_header_len)
-               return -EINVAL;
-       dev->mtu = new_mtu;
-       return 0;
-}
-
 static int ip6gre_header(struct sk_buff *skb, struct net_device *dev,
                        unsigned short type,
                        const void *daddr, const void *saddr, unsigned int len)
@@ -1226,7 +976,7 @@ static const struct net_device_ops ip6gre_netdev_ops = {
        .ndo_uninit             = ip6gre_tunnel_uninit,
        .ndo_start_xmit         = ip6gre_tunnel_xmit,
        .ndo_do_ioctl           = ip6gre_tunnel_ioctl,
-       .ndo_change_mtu         = ip6gre_tunnel_change_mtu,
+       .ndo_change_mtu         = ip6_tnl_change_mtu,
        .ndo_get_stats64        = ip_tunnel_get_stats64,
        .ndo_get_iflink         = ip6_tnl_get_iflink,
 };
@@ -1242,17 +992,11 @@ static void ip6gre_dev_free(struct net_device *dev)
 
 static void ip6gre_tunnel_setup(struct net_device *dev)
 {
-       struct ip6_tnl *t;
-
        dev->netdev_ops = &ip6gre_netdev_ops;
        dev->destructor = ip6gre_dev_free;
 
        dev->type = ARPHRD_IP6GRE;
-       dev->hard_header_len = LL_MAX_HEADER + sizeof(struct ipv6hdr) + 4;
-       dev->mtu = ETH_DATA_LEN - sizeof(struct ipv6hdr) - 4;
-       t = netdev_priv(dev);
-       if (!(t->parms.flags & IP6_TNL_F_IGN_ENCAP_LIMIT))
-               dev->mtu -= 8;
+
        dev->flags |= IFF_NOARP;
        dev->addr_len = sizeof(struct in6_addr);
        netif_keep_dst(dev);
@@ -1262,6 +1006,7 @@ static int ip6gre_tunnel_init_common(struct net_device *dev)
 {
        struct ip6_tnl *tunnel;
        int ret;
+       int t_hlen;
 
        tunnel = netdev_priv(dev);
 
@@ -1280,6 +1025,16 @@ static int ip6gre_tunnel_init_common(struct net_device *dev)
                return ret;
        }
 
+       tunnel->tun_hlen = gre_calc_hlen(tunnel->parms.o_flags);
+
+       t_hlen = tunnel->hlen + sizeof(struct ipv6hdr);
+
+       dev->needed_headroom    = LL_MAX_HEADER + t_hlen + 4;
+       dev->mtu                = ETH_DATA_LEN - t_hlen - 4;
+
+       if (!(tunnel->parms.flags & IP6_TNL_F_IGN_ENCAP_LIMIT))
+               dev->mtu -= 8;
+
        return 0;
 }
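
The init hunk above derives headroom and MTU from the configured options instead of the old constants: gre_calc_hlen() is, roughly, 4 bytes for a bare GRE header plus 4 per enabled option (checksum, key, sequence); t_hlen adds the 40-byte IPv6 header; and 8 more bytes go to the tunnel-encapsulation-limit destination option when it is not suppressed. A worked check of that arithmetic, under those assumptions:

    #include <stdio.h>

    int main(void)
    {
            int tun_hlen = 4 + 4;             /* base GRE + key option, say */
            int t_hlen   = tun_hlen + 40;     /* + sizeof(struct ipv6hdr)   */
            int mtu      = 1500 - t_hlen - 4; /* ETH_DATA_LEN - t_hlen - 4  */

            mtu -= 8;                         /* encap-limit option in use  */
            printf("t_hlen=%d mtu=%d\n", t_hlen, mtu); /* t_hlen=48 mtu=1440 */
            return 0;
    }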
 
@@ -1318,7 +1073,7 @@ static void ip6gre_fb_tunnel_init(struct net_device *dev)
 
 
 static struct inet6_protocol ip6gre_protocol __read_mostly = {
-       .handler     = ip6gre_rcv,
+       .handler     = gre_rcv,
        .err_handler = ip6gre_err,
        .flags       = INET6_PROTO_NOPOLICY|INET6_PROTO_FINAL,
 };
@@ -1514,7 +1269,7 @@ static const struct net_device_ops ip6gre_tap_netdev_ops = {
        .ndo_start_xmit = ip6gre_tunnel_xmit,
        .ndo_set_mac_address = eth_mac_addr,
        .ndo_validate_addr = eth_validate_addr,
-       .ndo_change_mtu = ip6gre_tunnel_change_mtu,
+       .ndo_change_mtu = ip6_tnl_change_mtu,
        .ndo_get_stats64 = ip_tunnel_get_stats64,
        .ndo_get_iflink = ip6_tnl_get_iflink,
 };
index c05c425..6ed5601 100644
@@ -78,11 +78,11 @@ int ipv6_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt
 
        idev = __in6_dev_get(skb->dev);
 
-       IP6_UPD_PO_STATS_BH(net, idev, IPSTATS_MIB_IN, skb->len);
+       __IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_IN, skb->len);
 
        if ((skb = skb_share_check(skb, GFP_ATOMIC)) == NULL ||
            !idev || unlikely(idev->cnf.disable_ipv6)) {
-               IP6_INC_STATS_BH(net, idev, IPSTATS_MIB_INDISCARDS);
+               __IP6_INC_STATS(net, idev, IPSTATS_MIB_INDISCARDS);
                goto drop;
        }
 
@@ -109,10 +109,10 @@ int ipv6_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt
        if (hdr->version != 6)
                goto err;
 
-       IP6_ADD_STATS_BH(net, idev,
-                        IPSTATS_MIB_NOECTPKTS +
+       __IP6_ADD_STATS(net, idev,
+                       IPSTATS_MIB_NOECTPKTS +
                                (ipv6_get_dsfield(hdr) & INET_ECN_MASK),
-                        max_t(unsigned short, 1, skb_shinfo(skb)->gso_segs));
+                       max_t(unsigned short, 1, skb_shinfo(skb)->gso_segs));
        /*
         * RFC4291 2.5.3
         * A packet received on an interface with a destination address
@@ -169,12 +169,12 @@ int ipv6_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt
        /* pkt_len may be zero if Jumbo payload option is present */
        if (pkt_len || hdr->nexthdr != NEXTHDR_HOP) {
                if (pkt_len + sizeof(struct ipv6hdr) > skb->len) {
-                       IP6_INC_STATS_BH(net,
-                                        idev, IPSTATS_MIB_INTRUNCATEDPKTS);
+                       __IP6_INC_STATS(net,
+                                       idev, IPSTATS_MIB_INTRUNCATEDPKTS);
                        goto drop;
                }
                if (pskb_trim_rcsum(skb, pkt_len + sizeof(struct ipv6hdr))) {
-                       IP6_INC_STATS_BH(net, idev, IPSTATS_MIB_INHDRERRORS);
+                       __IP6_INC_STATS(net, idev, IPSTATS_MIB_INHDRERRORS);
                        goto drop;
                }
                hdr = ipv6_hdr(skb);
@@ -182,7 +182,7 @@ int ipv6_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt
 
        if (hdr->nexthdr == NEXTHDR_HOP) {
                if (ipv6_parse_hopopts(skb) < 0) {
-                       IP6_INC_STATS_BH(net, idev, IPSTATS_MIB_INHDRERRORS);
+                       __IP6_INC_STATS(net, idev, IPSTATS_MIB_INHDRERRORS);
                        rcu_read_unlock();
                        return NET_RX_DROP;
                }
@@ -197,7 +197,7 @@ int ipv6_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt
                       net, NULL, skb, dev, NULL,
                       ip6_rcv_finish);
 err:
-       IP6_INC_STATS_BH(net, idev, IPSTATS_MIB_INHDRERRORS);
+       __IP6_INC_STATS(net, idev, IPSTATS_MIB_INHDRERRORS);
 drop:
        rcu_read_unlock();
        kfree_skb(skb);
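
From this hunk on, most of the ip6_input.c changes (and, further down, those in ip6mr.c, reassembly.c, syncookies.c and tcp_ipv6.c) are one mechanical rename: the _BH-suffixed SNMP macros become double-underscore variants (IP6_INC_STATS_BH to __IP6_INC_STATS, NET_INC_STATS_BH to __NET_INC_STATS, and so on). The convention after the rename is that the __ form is the cheaper variant assuming the caller already runs where the per-cpu counter cannot be preempted (softirq context), while the plain form is safe anywhere. A hedged userspace analogue of that two-variant split:

    #include <stdatomic.h>

    struct mib_demo { _Atomic unsigned long in_discards; };

    /* Safe from any context: a real atomic read-modify-write. */
    static void mib_inc(struct mib_demo *m)
    {
            atomic_fetch_add_explicit(&m->in_discards, 1,
                                      memory_order_relaxed);
    }

    /* The "__" flavour: assumes the caller already excludes concurrent
     * updaters (in the kernel, softirq on this CPU), so a plain
     * load/store pair is enough and cheaper. */
    static void __mib_inc(struct mib_demo *m)
    {
            unsigned long v = atomic_load_explicit(&m->in_discards,
                                                   memory_order_relaxed);
            atomic_store_explicit(&m->in_discards, v + 1,
                                  memory_order_relaxed);
    }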
@@ -259,18 +259,18 @@ resubmit:
                if (ret > 0)
                        goto resubmit;
                else if (ret == 0)
-                       IP6_INC_STATS_BH(net, idev, IPSTATS_MIB_INDELIVERS);
+                       __IP6_INC_STATS(net, idev, IPSTATS_MIB_INDELIVERS);
        } else {
                if (!raw) {
                        if (xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb)) {
-                               IP6_INC_STATS_BH(net, idev,
-                                                IPSTATS_MIB_INUNKNOWNPROTOS);
+                               __IP6_INC_STATS(net, idev,
+                                               IPSTATS_MIB_INUNKNOWNPROTOS);
                                icmpv6_send(skb, ICMPV6_PARAMPROB,
                                            ICMPV6_UNK_NEXTHDR, nhoff);
                        }
                        kfree_skb(skb);
                } else {
-                       IP6_INC_STATS_BH(net, idev, IPSTATS_MIB_INDELIVERS);
+                       __IP6_INC_STATS(net, idev, IPSTATS_MIB_INDELIVERS);
                        consume_skb(skb);
                }
        }
@@ -278,7 +278,7 @@ resubmit:
        return 0;
 
 discard:
-       IP6_INC_STATS_BH(net, idev, IPSTATS_MIB_INDISCARDS);
+       __IP6_INC_STATS(net, idev, IPSTATS_MIB_INDISCARDS);
        rcu_read_unlock();
        kfree_skb(skb);
        return 0;
@@ -297,7 +297,7 @@ int ip6_mc_input(struct sk_buff *skb)
        const struct ipv6hdr *hdr;
        bool deliver;
 
-       IP6_UPD_PO_STATS_BH(dev_net(skb_dst(skb)->dev),
+       __IP6_UPD_PO_STATS(dev_net(skb_dst(skb)->dev),
                         ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_INMCAST,
                         skb->len);
 
index 171518e..cbf127a 100644
@@ -395,8 +395,8 @@ int ip6_forward(struct sk_buff *skb)
                goto drop;
 
        if (!xfrm6_policy_check(NULL, XFRM_POLICY_FWD, skb)) {
-               IP6_INC_STATS_BH(net, ip6_dst_idev(dst),
-                                IPSTATS_MIB_INDISCARDS);
+               __IP6_INC_STATS(net, ip6_dst_idev(dst),
+                               IPSTATS_MIB_INDISCARDS);
                goto drop;
        }
 
@@ -427,8 +427,8 @@ int ip6_forward(struct sk_buff *skb)
                /* Force OUTPUT device used as source address */
                skb->dev = dst->dev;
                icmpv6_send(skb, ICMPV6_TIME_EXCEED, ICMPV6_EXC_HOPLIMIT, 0);
-               IP6_INC_STATS_BH(net, ip6_dst_idev(dst),
-                                IPSTATS_MIB_INHDRERRORS);
+               __IP6_INC_STATS(net, ip6_dst_idev(dst),
+                               IPSTATS_MIB_INHDRERRORS);
 
                kfree_skb(skb);
                return -ETIMEDOUT;
@@ -441,15 +441,15 @@ int ip6_forward(struct sk_buff *skb)
                if (proxied > 0)
                        return ip6_input(skb);
                else if (proxied < 0) {
-                       IP6_INC_STATS_BH(net, ip6_dst_idev(dst),
-                                        IPSTATS_MIB_INDISCARDS);
+                       __IP6_INC_STATS(net, ip6_dst_idev(dst),
+                                       IPSTATS_MIB_INDISCARDS);
                        goto drop;
                }
        }
 
        if (!xfrm6_route_forward(skb)) {
-               IP6_INC_STATS_BH(net, ip6_dst_idev(dst),
-                                IPSTATS_MIB_INDISCARDS);
+               __IP6_INC_STATS(net, ip6_dst_idev(dst),
+                               IPSTATS_MIB_INDISCARDS);
                goto drop;
        }
        dst = skb_dst(skb);
@@ -505,17 +505,17 @@ int ip6_forward(struct sk_buff *skb)
                /* Again, force OUTPUT device used as source address */
                skb->dev = dst->dev;
                icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
-               IP6_INC_STATS_BH(net, ip6_dst_idev(dst),
-                                IPSTATS_MIB_INTOOBIGERRORS);
-               IP6_INC_STATS_BH(net, ip6_dst_idev(dst),
-                                IPSTATS_MIB_FRAGFAILS);
+               __IP6_INC_STATS(net, ip6_dst_idev(dst),
+                               IPSTATS_MIB_INTOOBIGERRORS);
+               __IP6_INC_STATS(net, ip6_dst_idev(dst),
+                               IPSTATS_MIB_FRAGFAILS);
                kfree_skb(skb);
                return -EMSGSIZE;
        }
 
        if (skb_cow(skb, dst->dev->hard_header_len)) {
-               IP6_INC_STATS_BH(net, ip6_dst_idev(dst),
-                                IPSTATS_MIB_OUTDISCARDS);
+               __IP6_INC_STATS(net, ip6_dst_idev(dst),
+                               IPSTATS_MIB_OUTDISCARDS);
                goto drop;
        }
 
@@ -525,14 +525,14 @@ int ip6_forward(struct sk_buff *skb)
 
        hdr->hop_limit--;
 
-       IP6_INC_STATS_BH(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTFORWDATAGRAMS);
-       IP6_ADD_STATS_BH(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTOCTETS, skb->len);
+       __IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTFORWDATAGRAMS);
+       __IP6_ADD_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTOCTETS, skb->len);
        return NF_HOOK(NFPROTO_IPV6, NF_INET_FORWARD,
                       net, NULL, skb, skb->dev, dst->dev,
                       ip6_forward_finish);
 
 error:
-       IP6_INC_STATS_BH(net, ip6_dst_idev(dst), IPSTATS_MIB_INADDRERRORS);
+       __IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_INADDRERRORS);
 drop:
        kfree_skb(skb);
        return -EINVAL;
@@ -1182,12 +1182,12 @@ static void ip6_append_data_mtu(unsigned int *mtu,
 }
 
 static int ip6_setup_cork(struct sock *sk, struct inet_cork_full *cork,
-                         struct inet6_cork *v6_cork,
-                         int hlimit, int tclass, struct ipv6_txoptions *opt,
+                         struct inet6_cork *v6_cork, struct ipcm6_cookie *ipc6,
                          struct rt6_info *rt, struct flowi6 *fl6)
 {
        struct ipv6_pinfo *np = inet6_sk(sk);
        unsigned int mtu;
+       struct ipv6_txoptions *opt = ipc6->opt;
 
        /*
         * setup for corking
@@ -1229,8 +1229,8 @@ static int ip6_setup_cork(struct sock *sk, struct inet_cork_full *cork,
        dst_hold(&rt->dst);
        cork->base.dst = &rt->dst;
        cork->fl.u.ip6 = *fl6;
-       v6_cork->hop_limit = hlimit;
-       v6_cork->tclass = tclass;
+       v6_cork->hop_limit = ipc6->hlimit;
+       v6_cork->tclass = ipc6->tclass;
        if (rt->dst.flags & DST_XFRM_TUNNEL)
                mtu = np->pmtudisc >= IPV6_PMTUDISC_PROBE ?
                      rt->dst.dev->mtu : dst_mtu(&rt->dst);
@@ -1258,7 +1258,7 @@ static int __ip6_append_data(struct sock *sk,
                             int getfrag(void *from, char *to, int offset,
                                         int len, int odd, struct sk_buff *skb),
                             void *from, int length, int transhdrlen,
-                            unsigned int flags, int dontfrag,
+                            unsigned int flags, struct ipcm6_cookie *ipc6,
                             const struct sockcm_cookie *sockc)
 {
        struct sk_buff *skb, *skb_prev = NULL;
@@ -1298,7 +1298,7 @@ static int __ip6_append_data(struct sock *sk,
                      sizeof(struct frag_hdr) : 0) +
                     rt->rt6i_nfheader_len;
 
-       if (cork->length + length > mtu - headersize && dontfrag &&
+       if (cork->length + length > mtu - headersize && ipc6->dontfrag &&
            (sk->sk_protocol == IPPROTO_UDP ||
             sk->sk_protocol == IPPROTO_RAW)) {
                ipv6_local_rxpmtu(sk, fl6, mtu - headersize +
@@ -1564,9 +1564,9 @@ error:
 int ip6_append_data(struct sock *sk,
                    int getfrag(void *from, char *to, int offset, int len,
                                int odd, struct sk_buff *skb),
-                   void *from, int length, int transhdrlen, int hlimit,
-                   int tclass, struct ipv6_txoptions *opt, struct flowi6 *fl6,
-                   struct rt6_info *rt, unsigned int flags, int dontfrag,
+                   void *from, int length, int transhdrlen,
+                   struct ipcm6_cookie *ipc6, struct flowi6 *fl6,
+                   struct rt6_info *rt, unsigned int flags,
                    const struct sockcm_cookie *sockc)
 {
        struct inet_sock *inet = inet_sk(sk);
@@ -1580,12 +1580,12 @@ int ip6_append_data(struct sock *sk,
                /*
                 * setup for corking
                 */
-               err = ip6_setup_cork(sk, &inet->cork, &np->cork, hlimit,
-                                    tclass, opt, rt, fl6);
+               err = ip6_setup_cork(sk, &inet->cork, &np->cork,
+                                    ipc6, rt, fl6);
                if (err)
                        return err;
 
-               exthdrlen = (opt ? opt->opt_flen : 0);
+               exthdrlen = (ipc6->opt ? ipc6->opt->opt_flen : 0);
                length += exthdrlen;
                transhdrlen += exthdrlen;
        } else {
@@ -1595,8 +1595,7 @@ int ip6_append_data(struct sock *sk,
 
        return __ip6_append_data(sk, fl6, &sk->sk_write_queue, &inet->cork.base,
                                 &np->cork, sk_page_frag(sk), getfrag,
-                                from, length, transhdrlen, flags, dontfrag,
-                                sockc);
+                                from, length, transhdrlen, flags, ipc6, sockc);
 }
 EXPORT_SYMBOL_GPL(ip6_append_data);
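
The ip6_setup_cork()/ip6_append_data()/ip6_make_skb() hunks above replace the hlimit/tclass/opt/dontfrag parameter tail with one struct ipcm6_cookie, so the per-call IPv6 transmit parameters travel together; the ping.c and raw.c hunks further down show the caller side. A sketch of the resulting caller pattern, condensed from those hunks (-1 means "resolve from the socket or route later"):

    struct ipcm6_cookie ipc6;

    ipc6.hlimit   = -1;   /* filled via ip6_sk_dst_hoplimit() if still -1 */
    ipc6.tclass   = -1;   /* falls back to np->tclass                     */
    ipc6.dontfrag = -1;   /* falls back to np->dontfrag                   */
    ipc6.opt      = NULL; /* tx options attach here when present          */

    err = ip6_append_data(sk, getfrag, from, length, transhdrlen,
                          &ipc6, &fl6, rt, msg->msg_flags, &sockc);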
 
@@ -1752,15 +1751,14 @@ struct sk_buff *ip6_make_skb(struct sock *sk,
                             int getfrag(void *from, char *to, int offset,
                                         int len, int odd, struct sk_buff *skb),
                             void *from, int length, int transhdrlen,
-                            int hlimit, int tclass,
-                            struct ipv6_txoptions *opt, struct flowi6 *fl6,
+                            struct ipcm6_cookie *ipc6, struct flowi6 *fl6,
                             struct rt6_info *rt, unsigned int flags,
-                            int dontfrag, const struct sockcm_cookie *sockc)
+                            const struct sockcm_cookie *sockc)
 {
        struct inet_cork_full cork;
        struct inet6_cork v6_cork;
        struct sk_buff_head queue;
-       int exthdrlen = (opt ? opt->opt_flen : 0);
+       int exthdrlen = (ipc6->opt ? ipc6->opt->opt_flen : 0);
        int err;
 
        if (flags & MSG_PROBE)
@@ -1772,17 +1770,17 @@ struct sk_buff *ip6_make_skb(struct sock *sk,
        cork.base.addr = 0;
        cork.base.opt = NULL;
        v6_cork.opt = NULL;
-       err = ip6_setup_cork(sk, &cork, &v6_cork, hlimit, tclass, opt, rt, fl6);
+       err = ip6_setup_cork(sk, &cork, &v6_cork, ipc6, rt, fl6);
        if (err)
                return ERR_PTR(err);
 
-       if (dontfrag < 0)
-               dontfrag = inet6_sk(sk)->dontfrag;
+       if (ipc6->dontfrag < 0)
+               ipc6->dontfrag = inet6_sk(sk)->dontfrag;
 
        err = __ip6_append_data(sk, fl6, &queue, &cork.base, &v6_cork,
                                &current->task_frag, getfrag, from,
                                length + exthdrlen, transhdrlen + exthdrlen,
-                               flags, dontfrag, sockc);
+                               flags, ipc6, sockc);
        if (err) {
                __ip6_flush_pending_frames(sk, &queue, &cork, &v6_cork);
                return ERR_PTR(err);
index 1f20345..ade55af 100644
@@ -238,6 +238,7 @@ static void ip6_dev_free(struct net_device *dev)
 {
        struct ip6_tnl *t = netdev_priv(dev);
 
+       gro_cells_destroy(&t->gro_cells);
        dst_cache_destroy(&t->dst_cache);
        free_percpu(dev->tstats);
        free_netdev(dev);
@@ -753,97 +754,157 @@ int ip6_tnl_rcv_ctl(struct ip6_tnl *t,
 }
 EXPORT_SYMBOL_GPL(ip6_tnl_rcv_ctl);
 
-/**
- * ip6_tnl_rcv - decapsulate IPv6 packet and retransmit it locally
- *   @skb: received socket buffer
- *   @protocol: ethernet protocol ID
- *   @dscp_ecn_decapsulate: the function to decapsulate DSCP code and ECN
- *
- * Return: 0
- **/
-
-static int ip6_tnl_rcv(struct sk_buff *skb, __u16 protocol,
-                      __u8 ipproto,
-                      int (*dscp_ecn_decapsulate)(const struct ip6_tnl *t,
-                                                  const struct ipv6hdr *ipv6h,
-                                                  struct sk_buff *skb))
+static int __ip6_tnl_rcv(struct ip6_tnl *tunnel, struct sk_buff *skb,
+                        const struct tnl_ptk_info *tpi,
+                        struct metadata_dst *tun_dst,
+                        int (*dscp_ecn_decapsulate)(const struct ip6_tnl *t,
+                                               const struct ipv6hdr *ipv6h,
+                                               struct sk_buff *skb),
+                        bool log_ecn_err)
 {
-       struct ip6_tnl *t;
+       struct pcpu_sw_netstats *tstats;
        const struct ipv6hdr *ipv6h = ipv6_hdr(skb);
-       u8 tproto;
        int err;
 
-       rcu_read_lock();
-       t = ip6_tnl_lookup(dev_net(skb->dev), &ipv6h->saddr, &ipv6h->daddr);
-       if (t) {
-               struct pcpu_sw_netstats *tstats;
+       if ((!(tpi->flags & TUNNEL_CSUM) &&
+            (tunnel->parms.i_flags & TUNNEL_CSUM)) ||
+           ((tpi->flags & TUNNEL_CSUM) &&
+            !(tunnel->parms.i_flags & TUNNEL_CSUM))) {
+               tunnel->dev->stats.rx_crc_errors++;
+               tunnel->dev->stats.rx_errors++;
+               goto drop;
+       }
 
-               tproto = ACCESS_ONCE(t->parms.proto);
-               if (tproto != ipproto && tproto != 0) {
-                       rcu_read_unlock();
-                       goto discard;
+       if (tunnel->parms.i_flags & TUNNEL_SEQ) {
+               if (!(tpi->flags & TUNNEL_SEQ) ||
+                   (tunnel->i_seqno &&
+                    (s32)(ntohl(tpi->seq) - tunnel->i_seqno) < 0)) {
+                       tunnel->dev->stats.rx_fifo_errors++;
+                       tunnel->dev->stats.rx_errors++;
+                       goto drop;
                }
+               tunnel->i_seqno = ntohl(tpi->seq) + 1;
+       }
 
-               if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb)) {
-                       rcu_read_unlock();
-                       goto discard;
-               }
+       skb->protocol = tpi->proto;
 
-               if (!ip6_tnl_rcv_ctl(t, &ipv6h->daddr, &ipv6h->saddr)) {
-                       t->dev->stats.rx_dropped++;
-                       rcu_read_unlock();
-                       goto discard;
+       /* Warning: All skb pointers will be invalidated! */
+       if (tunnel->dev->type == ARPHRD_ETHER) {
+               if (!pskb_may_pull(skb, ETH_HLEN)) {
+                       tunnel->dev->stats.rx_length_errors++;
+                       tunnel->dev->stats.rx_errors++;
+                       goto drop;
                }
-               skb->mac_header = skb->network_header;
-               skb_reset_network_header(skb);
-               skb->protocol = htons(protocol);
-               memset(skb->cb, 0, sizeof(struct inet6_skb_parm));
-
-               __skb_tunnel_rx(skb, t->dev, t->net);
-
-               err = dscp_ecn_decapsulate(t, ipv6h, skb);
-               if (unlikely(err)) {
-                       if (log_ecn_error)
-                               net_info_ratelimited("non-ECT from %pI6 with dsfield=%#x\n",
-                                                    &ipv6h->saddr,
-                                                    ipv6_get_dsfield(ipv6h));
-                       if (err > 1) {
-                               ++t->dev->stats.rx_frame_errors;
-                               ++t->dev->stats.rx_errors;
-                               rcu_read_unlock();
-                               goto discard;
-                       }
+
+               ipv6h = ipv6_hdr(skb);
+               skb->protocol = eth_type_trans(skb, tunnel->dev);
+               skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
+       } else {
+               skb->dev = tunnel->dev;
+       }
+
+       skb_reset_network_header(skb);
+       memset(skb->cb, 0, sizeof(struct inet6_skb_parm));
+
+       __skb_tunnel_rx(skb, tunnel->dev, tunnel->net);
+
+       err = dscp_ecn_decapsulate(tunnel, ipv6h, skb);
+       if (unlikely(err)) {
+               if (log_ecn_err)
+                       net_info_ratelimited("non-ECT from %pI6 with DS=%#x\n",
+                                            &ipv6h->saddr,
+                                            ipv6_get_dsfield(ipv6h));
+               if (err > 1) {
+                       ++tunnel->dev->stats.rx_frame_errors;
+                       ++tunnel->dev->stats.rx_errors;
+                       goto drop;
                }
+       }
 
-               tstats = this_cpu_ptr(t->dev->tstats);
-               u64_stats_update_begin(&tstats->syncp);
-               tstats->rx_packets++;
-               tstats->rx_bytes += skb->len;
-               u64_stats_update_end(&tstats->syncp);
+       tstats = this_cpu_ptr(tunnel->dev->tstats);
+       u64_stats_update_begin(&tstats->syncp);
+       tstats->rx_packets++;
+       tstats->rx_bytes += skb->len;
+       u64_stats_update_end(&tstats->syncp);
 
-               netif_rx(skb);
+       skb_scrub_packet(skb, !net_eq(tunnel->net, dev_net(tunnel->dev)));
 
-               rcu_read_unlock();
-               return 0;
+       gro_cells_receive(&tunnel->gro_cells, skb);
+       return 0;
+
+drop:
+       kfree_skb(skb);
+       return 0;
+}
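+
Two checks in __ip6_tnl_rcv() above are worth spelling out. The checksum test drops the packet when exactly one side of the TUNNEL_CSUM negotiation is set, i.e. an XOR of the packet's flag and the tunnel's configured i_flags. The sequence test uses the classic serial-number idiom: casting the unsigned difference to s32 keeps "older than expected" well defined across 2^32 wraparound. A runnable check of that idiom, with illustrative names:

    #include <stdint.h>
    #include <stdio.h>

    /* Mirrors (s32)(ntohl(tpi->seq) - tunnel->i_seqno) < 0. */
    static int seq_before(uint32_t seq, uint32_t expected)
    {
            return (int32_t)(seq - expected) < 0;
    }

    int main(void)
    {
            printf("%d\n", seq_before(5, 10));  /* 1: stale, would drop */
            printf("%d\n", seq_before(10, 5));  /* 0: in order          */
            /* 0: wrapped around 2^32 but still newer than expected */
            printf("%d\n", seq_before(2, 0xfffffffeu));
            return 0;
    }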
+
+int ip6_tnl_rcv(struct ip6_tnl *t, struct sk_buff *skb,
+               const struct tnl_ptk_info *tpi,
+               struct metadata_dst *tun_dst,
+               bool log_ecn_err)
+{
+       return __ip6_tnl_rcv(t, skb, tpi, NULL, ip6ip6_dscp_ecn_decapsulate,
+                            log_ecn_err);
+}
+EXPORT_SYMBOL(ip6_tnl_rcv);
+
+static const struct tnl_ptk_info tpi_v6 = {
+       /* no tunnel info required for ipxip6. */
+       .proto = htons(ETH_P_IPV6),
+};
+
+static const struct tnl_ptk_info tpi_v4 = {
+       /* no tunnel info required for ipxip6. */
+       .proto = htons(ETH_P_IP),
+};
+
+static int ipxip6_rcv(struct sk_buff *skb, u8 ipproto,
+                     const struct tnl_ptk_info *tpi,
+                     int (*dscp_ecn_decapsulate)(const struct ip6_tnl *t,
+                                                 const struct ipv6hdr *ipv6h,
+                                                 struct sk_buff *skb))
+{
+       struct ip6_tnl *t;
+       const struct ipv6hdr *ipv6h = ipv6_hdr(skb);
+       int ret = -1;
+
+       rcu_read_lock();
+       t = ip6_tnl_lookup(dev_net(skb->dev), &ipv6h->saddr, &ipv6h->daddr);
+
+       if (t) {
+               u8 tproto = ACCESS_ONCE(t->parms.proto);
+
+               if (tproto != ipproto && tproto != 0)
+                       goto drop;
+               if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb))
+                       goto drop;
+               if (!ip6_tnl_rcv_ctl(t, &ipv6h->daddr, &ipv6h->saddr))
+                       goto drop;
+               if (iptunnel_pull_header(skb, 0, tpi->proto, false))
+                       goto drop;
+               ret = __ip6_tnl_rcv(t, skb, tpi, NULL, dscp_ecn_decapsulate,
+                                   log_ecn_error);
        }
+
        rcu_read_unlock();
-       return 1;
 
-discard:
+       return ret;
+
+drop:
+       rcu_read_unlock();
        kfree_skb(skb);
        return 0;
 }
 
 static int ip4ip6_rcv(struct sk_buff *skb)
 {
-       return ip6_tnl_rcv(skb, ETH_P_IP, IPPROTO_IPIP,
-                          ip4ip6_dscp_ecn_decapsulate);
+       return ipxip6_rcv(skb, IPPROTO_IP, &tpi_v4,
+                         ip4ip6_dscp_ecn_decapsulate);
 }
 
 static int ip6ip6_rcv(struct sk_buff *skb)
 {
-       return ip6_tnl_rcv(skb, ETH_P_IPV6, IPPROTO_IPV6,
-                          ip6ip6_dscp_ecn_decapsulate);
+       return ipxip6_rcv(skb, IPPROTO_IPV6, &tpi_v6,
+                         ip6ip6_dscp_ecn_decapsulate);
 }
 
 struct ipv6_tel_txoption {
@@ -918,13 +979,14 @@ int ip6_tnl_xmit_ctl(struct ip6_tnl *t,
 EXPORT_SYMBOL_GPL(ip6_tnl_xmit_ctl);
 
 /**
- * ip6_tnl_xmit2 - encapsulate packet and send
+ * ip6_tnl_xmit - encapsulate packet and send
  *   @skb: the outgoing socket buffer
  *   @dev: the outgoing tunnel device
  *   @dsfield: dscp code for outer header
- *   @fl: flow of tunneled packet
+ *   @fl6: flow of tunneled packet
  *   @encap_limit: encapsulation limit
  *   @pmtu: Path MTU is stored if packet is too big
+ *   @proto: next header value
  *
  * Description:
  *   Build new header and do some sanity checks on the packet before sending
@@ -936,12 +998,9 @@ EXPORT_SYMBOL_GPL(ip6_tnl_xmit_ctl);
  *   %-EMSGSIZE message too big. return mtu in this case.
  **/
 
-static int ip6_tnl_xmit2(struct sk_buff *skb,
-                        struct net_device *dev,
-                        __u8 dsfield,
-                        struct flowi6 *fl6,
-                        int encap_limit,
-                        __u32 *pmtu)
+int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield,
+                struct flowi6 *fl6, int encap_limit, __u32 *pmtu,
+                __u8 proto)
 {
        struct ip6_tnl *t = netdev_priv(dev);
        struct net *net = t->net;
@@ -952,7 +1011,6 @@ static int ip6_tnl_xmit2(struct sk_buff *skb,
        struct net_device *tdev;
        int mtu;
        unsigned int max_headroom = sizeof(struct ipv6hdr);
-       u8 proto;
        int err = -1;
 
        /* NBMA tunnel */
@@ -1014,12 +1072,23 @@ static int ip6_tnl_xmit2(struct sk_buff *skb,
                mtu = IPV6_MIN_MTU;
        if (skb_dst(skb))
                skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu);
-       if (skb->len > mtu) {
+       if (skb->len > mtu && !skb_is_gso(skb)) {
                *pmtu = mtu;
                err = -EMSGSIZE;
                goto tx_err_dst_release;
        }
 
+       if (t->err_count > 0) {
+               if (time_before(jiffies,
+                               t->err_time + IP6TUNNEL_ERR_TIMEO)) {
+                       t->err_count--;
+
+                       dst_link_failure(skb);
+               } else {
+                       t->err_count = 0;
+               }
+       }
+
        skb_scrub_packet(skb, !net_eq(t->net, dev_net(dev)));
 
        /*
@@ -1047,7 +1116,6 @@ static int ip6_tnl_xmit2(struct sk_buff *skb,
 
        skb->transport_header = skb->network_header;
 
-       proto = fl6->flowi6_proto;
        if (encap_limit >= 0) {
                init_tel_txopt(&opt, encap_limit);
                ipv6_push_nfrag_opts(skb, &opt.ops, &proto, NULL);
@@ -1058,6 +1126,11 @@ static int ip6_tnl_xmit2(struct sk_buff *skb,
                skb->encapsulation = 1;
        }
 
+       max_headroom = LL_RESERVED_SPACE(dst->dev) + sizeof(struct ipv6hdr)
+                       + dst->header_len;
+       if (max_headroom > dev->needed_headroom)
+               dev->needed_headroom = max_headroom;
+
        skb_push(skb, sizeof(struct ipv6hdr));
        skb_reset_network_header(skb);
        ipv6h = ipv6_hdr(skb);
@@ -1076,6 +1149,7 @@ tx_err_dst_release:
        dst_release(dst);
        return err;
 }
+EXPORT_SYMBOL(ip6_tnl_xmit);
 
 static inline int
 ip4ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -1099,7 +1173,6 @@ ip4ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
                encap_limit = t->parms.encap_limit;
 
        memcpy(&fl6, &t->fl.u.ip6, sizeof(fl6));
-       fl6.flowi6_proto = IPPROTO_IPIP;
 
        dsfield = ipv4_get_dsfield(iph);
 
@@ -1109,7 +1182,8 @@ ip4ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
        if (t->parms.flags & IP6_TNL_F_USE_ORIG_FWMARK)
                fl6.flowi6_mark = skb->mark;
 
-       err = ip6_tnl_xmit2(skb, dev, dsfield, &fl6, encap_limit, &mtu);
+       err = ip6_tnl_xmit(skb, dev, dsfield, &fl6, encap_limit, &mtu,
+                          IPPROTO_IPIP);
        if (err != 0) {
                /* XXX: send ICMP error even if DF is not set. */
                if (err == -EMSGSIZE)
@@ -1153,7 +1227,6 @@ ip6ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
                encap_limit = t->parms.encap_limit;
 
        memcpy(&fl6, &t->fl.u.ip6, sizeof(fl6));
-       fl6.flowi6_proto = IPPROTO_IPV6;
 
        dsfield = ipv6_get_dsfield(ipv6h);
        if (t->parms.flags & IP6_TNL_F_USE_ORIG_TCLASS)
@@ -1163,7 +1236,8 @@ ip6ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
        if (t->parms.flags & IP6_TNL_F_USE_ORIG_FWMARK)
                fl6.flowi6_mark = skb->mark;
 
-       err = ip6_tnl_xmit2(skb, dev, dsfield, &fl6, encap_limit, &mtu);
+       err = ip6_tnl_xmit(skb, dev, dsfield, &fl6, encap_limit, &mtu,
+                          IPPROTO_IPV6);
        if (err != 0) {
                if (err == -EMSGSIZE)
                        icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
@@ -1174,7 +1248,7 @@ ip6ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
 }
 
 static netdev_tx_t
-ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
+ip6_tnl_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
        struct ip6_tnl *t = netdev_priv(dev);
        struct net_device_stats *stats = &t->dev->stats;
@@ -1370,6 +1444,8 @@ ip6_tnl_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
        struct net *net = t->net;
        struct ip6_tnl_net *ip6n = net_generic(net, ip6_tnl_net_id);
 
+       memset(&p1, 0, sizeof(p1));
+
        switch (cmd) {
        case SIOCGETTUNNEL:
                if (dev == ip6n->fb_tnl_dev) {
@@ -1464,8 +1540,7 @@ ip6_tnl_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
  *   %-EINVAL if mtu too small
  **/
 
-static int
-ip6_tnl_change_mtu(struct net_device *dev, int new_mtu)
+int ip6_tnl_change_mtu(struct net_device *dev, int new_mtu)
 {
        struct ip6_tnl *tnl = netdev_priv(dev);
 
@@ -1481,6 +1556,7 @@ ip6_tnl_change_mtu(struct net_device *dev, int new_mtu)
        dev->mtu = new_mtu;
        return 0;
 }
+EXPORT_SYMBOL(ip6_tnl_change_mtu);
 
 int ip6_tnl_get_iflink(const struct net_device *dev)
 {
@@ -1493,7 +1569,7 @@ EXPORT_SYMBOL(ip6_tnl_get_iflink);
 static const struct net_device_ops ip6_tnl_netdev_ops = {
        .ndo_init       = ip6_tnl_dev_init,
        .ndo_uninit     = ip6_tnl_dev_uninit,
-       .ndo_start_xmit = ip6_tnl_xmit,
+       .ndo_start_xmit = ip6_tnl_start_xmit,
        .ndo_do_ioctl   = ip6_tnl_ioctl,
        .ndo_change_mtu = ip6_tnl_change_mtu,
        .ndo_get_stats  = ip6_get_stats,
@@ -1549,13 +1625,25 @@ ip6_tnl_dev_init_gen(struct net_device *dev)
                return -ENOMEM;
 
        ret = dst_cache_init(&t->dst_cache, GFP_KERNEL);
-       if (ret) {
-               free_percpu(dev->tstats);
-               dev->tstats = NULL;
-               return ret;
-       }
+       if (ret)
+               goto free_stats;
+
+       ret = gro_cells_init(&t->gro_cells, dev);
+       if (ret)
+               goto destroy_dst;
+
+       t->hlen = 0;
+       t->tun_hlen = 0;
 
        return 0;
+
+destroy_dst:
+       dst_cache_destroy(&t->dst_cache);
+free_stats:
+       free_percpu(dev->tstats);
+       dev->tstats = NULL;
+
+       return ret;
 }
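
With gro_cells added, ip6_tnl_dev_init_gen() manages three resources (tstats, the dst cache, the gro cells), and the hunk above converts its error handling to the usual goto-unwind ladder: acquire in order, release in reverse from labelled exits, so a fourth resource later costs only one more label. A minimal generic sketch of the idiom:

    #include <stdlib.h>

    static int init_two(void **a, void **b)
    {
            *a = malloc(64);
            if (!*a)
                    goto err;
            *b = malloc(64);
            if (!*b)
                    goto free_a;
            return 0;

    free_a:                 /* unwind in reverse acquisition order */
            free(*a);
            *a = NULL;
    err:
            return -1;
    }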
 
 /**
index bf67832..f2e2013 100644
@@ -1984,10 +1984,10 @@ int ip6mr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
 
 static inline int ip6mr_forward2_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
 {
-       IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                        IPSTATS_MIB_OUTFORWDATAGRAMS);
-       IP6_ADD_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                        IPSTATS_MIB_OUTOCTETS, skb->len);
+       __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                       IPSTATS_MIB_OUTFORWDATAGRAMS);
+       __IP6_ADD_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                       IPSTATS_MIB_OUTOCTETS, skb->len);
        return dst_output(net, sk, skb);
 }
 
index 4ff4b29..a9895e1 100644
@@ -473,7 +473,7 @@ sticky_done:
                struct msghdr msg;
                struct flowi6 fl6;
                struct sockcm_cookie sockc_junk;
-               int junk;
+               struct ipcm6_cookie ipc6;
 
                memset(&fl6, 0, sizeof(fl6));
                fl6.flowi6_oif = sk->sk_bound_dev_if;
@@ -503,9 +503,9 @@ sticky_done:
 
                msg.msg_controllen = optlen;
                msg.msg_control = (void *)(opt+1);
+               ipc6.opt = opt;
 
-               retv = ip6_datagram_send_ctl(net, sk, &msg, &fl6, opt, &junk,
-                                            &junk, &junk, &sockc_junk);
+               retv = ip6_datagram_send_ctl(net, sk, &msg, &fl6, &ipc6, &sockc_junk);
                if (retv)
                        goto done;
 update:
index da1cff7..3ee3e44 100644
@@ -58,11 +58,11 @@ static int ping_v6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
        int iif = 0;
        struct flowi6 fl6;
        int err;
-       int hlimit;
        struct dst_entry *dst;
        struct rt6_info *rt;
        struct pingfakehdr pfh;
        struct sockcm_cookie junk = {0};
+       struct ipcm6_cookie ipc6;
 
        pr_debug("ping_v6_sendmsg(sk=%p,sk->num=%u)\n", inet, inet->inet_num);
 
@@ -139,13 +139,15 @@ static int ping_v6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
        pfh.wcheck = 0;
        pfh.family = AF_INET6;
 
-       hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
+       ipc6.hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
+       ipc6.tclass = np->tclass;
+       ipc6.dontfrag = np->dontfrag;
+       ipc6.opt = NULL;
 
        lock_sock(sk);
        err = ip6_append_data(sk, ping_getfrag, &pfh, len,
-                             0, hlimit,
-                             np->tclass, NULL, &fl6, rt,
-                             MSG_DONTWAIT, np->dontfrag, &junk);
+                             0, &ipc6, &fl6, rt,
+                             MSG_DONTWAIT, &junk);
 
        if (err) {
                ICMP6_INC_STATS(sock_net(sk), rt->rt6i_idev,
index b07ce21..896350d 100644
@@ -746,10 +746,8 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
        struct raw6_frag_vec rfv;
        struct flowi6 fl6;
        struct sockcm_cookie sockc;
+       struct ipcm6_cookie ipc6;
        int addr_len = msg->msg_namelen;
-       int hlimit = -1;
-       int tclass = -1;
-       int dontfrag = -1;
        u16 proto;
        int err;
 
@@ -770,6 +768,11 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 
        fl6.flowi6_mark = sk->sk_mark;
 
+       ipc6.hlimit = -1;
+       ipc6.tclass = -1;
+       ipc6.dontfrag = -1;
+       ipc6.opt = NULL;
+
        if (sin6) {
                if (addr_len < SIN6_LEN_RFC2133)
                        return -EINVAL;
@@ -827,10 +830,9 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
                opt = &opt_space;
                memset(opt, 0, sizeof(struct ipv6_txoptions));
                opt->tot_len = sizeof(struct ipv6_txoptions);
+               ipc6.opt = opt;
 
-               err = ip6_datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
-                                           &hlimit, &tclass, &dontfrag,
-                                           &sockc);
+               err = ip6_datagram_send_ctl(sock_net(sk), sk, msg, &fl6, &ipc6, &sockc);
                if (err < 0) {
                        fl6_sock_release(flowlabel);
                        return err;
@@ -846,7 +848,7 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
        if (!opt) {
                opt = txopt_get(np);
                opt_to_free = opt;
-               }
+       }
        if (flowlabel)
                opt = fl6_merge_options(&opt_space, flowlabel, opt);
        opt = ipv6_fixup_options(&opt_space, opt);
@@ -881,14 +883,14 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
                err = PTR_ERR(dst);
                goto out;
        }
-       if (hlimit < 0)
-               hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
+       if (ipc6.hlimit < 0)
+               ipc6.hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
 
-       if (tclass < 0)
-               tclass = np->tclass;
+       if (ipc6.tclass < 0)
+               ipc6.tclass = np->tclass;
 
-       if (dontfrag < 0)
-               dontfrag = np->dontfrag;
+       if (ipc6.dontfrag < 0)
+               ipc6.dontfrag = np->dontfrag;
 
        if (msg->msg_flags&MSG_CONFIRM)
                goto do_confirm;
@@ -897,10 +899,11 @@ back_from_confirm:
        if (inet->hdrincl)
                err = rawv6_send_hdrinc(sk, msg, len, &fl6, &dst, msg->msg_flags);
        else {
+               ipc6.opt = opt;
                lock_sock(sk);
                err = ip6_append_data(sk, raw6_getfrag, &rfv,
-                       len, 0, hlimit, tclass, opt, &fl6, (struct rt6_info *)dst,
-                       msg->msg_flags, dontfrag, &sockc);
+                       len, 0, &ipc6, &fl6, (struct rt6_info *)dst,
+                       msg->msg_flags, &sockc);
 
                if (err)
                        ip6_flush_pending_frames(sk);
index e2ea311..2160d5d 100644
@@ -145,12 +145,12 @@ void ip6_expire_frag_queue(struct net *net, struct frag_queue *fq,
        if (!dev)
                goto out_rcu_unlock;
 
-       IP6_INC_STATS_BH(net, __in6_dev_get(dev), IPSTATS_MIB_REASMFAILS);
+       __IP6_INC_STATS(net, __in6_dev_get(dev), IPSTATS_MIB_REASMFAILS);
 
        if (inet_frag_evicting(&fq->q))
                goto out_rcu_unlock;
 
-       IP6_INC_STATS_BH(net, __in6_dev_get(dev), IPSTATS_MIB_REASMTIMEOUT);
+       __IP6_INC_STATS(net, __in6_dev_get(dev), IPSTATS_MIB_REASMTIMEOUT);
 
        /* Don't send error if the first segment did not arrive. */
        if (!(fq->q.flags & INET_FRAG_FIRST_IN) || !fq->q.fragments)
@@ -223,8 +223,8 @@ static int ip6_frag_queue(struct frag_queue *fq, struct sk_buff *skb,
                        ((u8 *)(fhdr + 1) - (u8 *)(ipv6_hdr(skb) + 1)));
 
        if ((unsigned int)end > IPV6_MAXPLEN) {
-               IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                                IPSTATS_MIB_INHDRERRORS);
+               __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                               IPSTATS_MIB_INHDRERRORS);
                icmpv6_param_prob(skb, ICMPV6_HDR_FIELD,
                                  ((u8 *)&fhdr->frag_off -
                                   skb_network_header(skb)));
@@ -258,8 +258,8 @@ static int ip6_frag_queue(struct frag_queue *fq, struct sk_buff *skb,
                        /* RFC2460 says always send parameter problem in
                         * this case. -DaveM
                         */
-                       IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                                        IPSTATS_MIB_INHDRERRORS);
+                       __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                                       IPSTATS_MIB_INHDRERRORS);
                        icmpv6_param_prob(skb, ICMPV6_HDR_FIELD,
                                          offsetof(struct ipv6hdr, payload_len));
                        return -1;
@@ -361,8 +361,8 @@ found:
 discard_fq:
        inet_frag_kill(&fq->q, &ip6_frags);
 err:
-       IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                        IPSTATS_MIB_REASMFAILS);
+       __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                       IPSTATS_MIB_REASMFAILS);
        kfree_skb(skb);
        return -1;
 }
@@ -500,7 +500,7 @@ static int ip6_frag_reasm(struct frag_queue *fq, struct sk_buff *prev,
                           skb_network_header_len(head));
 
        rcu_read_lock();
-       IP6_INC_STATS_BH(net, __in6_dev_get(dev), IPSTATS_MIB_REASMOKS);
+       __IP6_INC_STATS(net, __in6_dev_get(dev), IPSTATS_MIB_REASMOKS);
        rcu_read_unlock();
        fq->q.fragments = NULL;
        fq->q.fragments_tail = NULL;
@@ -513,7 +513,7 @@ out_oom:
        net_dbg_ratelimited("ip6_frag_reasm: no memory for reassembly\n");
 out_fail:
        rcu_read_lock();
-       IP6_INC_STATS_BH(net, __in6_dev_get(dev), IPSTATS_MIB_REASMFAILS);
+       __IP6_INC_STATS(net, __in6_dev_get(dev), IPSTATS_MIB_REASMFAILS);
        rcu_read_unlock();
        return -1;
 }
@@ -528,7 +528,7 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
        if (IP6CB(skb)->flags & IP6SKB_FRAGMENTED)
                goto fail_hdr;
 
-       IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_REASMREQDS);
+       __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_REASMREQDS);
 
        /* Jumbo payload inhibits frag. header */
        if (hdr->payload_len == 0)
@@ -544,8 +544,8 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
        if (!(fhdr->frag_off & htons(0xFFF9))) {
                /* It is not a fragmented frame */
                skb->transport_header += sizeof(struct frag_hdr);
-               IP6_INC_STATS_BH(net,
-                                ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_REASMOKS);
+               __IP6_INC_STATS(net,
+                               ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_REASMOKS);
 
                IP6CB(skb)->nhoff = (u8 *)fhdr - skb_network_header(skb);
                IP6CB(skb)->flags |= IP6SKB_FRAGMENTED;
@@ -566,13 +566,13 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
                return ret;
        }
 
-       IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_REASMFAILS);
+       __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_REASMFAILS);
        kfree_skb(skb);
        return -1;
 
 fail_hdr:
-       IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
-                        IPSTATS_MIB_INHDRERRORS);
+       __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+                       IPSTATS_MIB_INHDRERRORS);
        icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, skb_network_header_len(skb));
        return -1;
 }
index d916d6a..af46e19 100644
@@ -1769,6 +1769,37 @@ static int ip6_convert_metrics(struct mx6_config *mxc,
        return -EINVAL;
 }
 
+static struct rt6_info *ip6_nh_lookup_table(struct net *net,
+                                           struct fib6_config *cfg,
+                                           const struct in6_addr *gw_addr)
+{
+       struct flowi6 fl6 = {
+               .flowi6_oif = cfg->fc_ifindex,
+               .daddr = *gw_addr,
+               .saddr = cfg->fc_prefsrc,
+       };
+       struct fib6_table *table;
+       struct rt6_info *rt;
+       int flags = 0;
+
+       table = fib6_get_table(net, cfg->fc_table);
+       if (!table)
+               return NULL;
+
+       if (!ipv6_addr_any(&cfg->fc_prefsrc))
+               flags |= RT6_LOOKUP_F_HAS_SADDR;
+
+       rt = ip6_pol_route(net, table, cfg->fc_ifindex, &fl6, flags);
+
+       /* if table lookup failed, fall back to full lookup */
+       if (rt == net->ipv6.ip6_null_entry) {
+               ip6_rt_put(rt);
+               rt = NULL;
+       }
+
+       return rt;
+}
+
 static struct rt6_info *ip6_route_info_create(struct fib6_config *cfg)
 {
        struct net *net = cfg->fc_nlinfo.nl_net;
@@ -1940,7 +1971,7 @@ static struct rt6_info *ip6_route_info_create(struct fib6_config *cfg)
                rt->rt6i_gateway = *gw_addr;
 
                if (gwa_type != (IPV6_ADDR_LINKLOCAL|IPV6_ADDR_UNICAST)) {
-                       struct rt6_info *grt;
+                       struct rt6_info *grt = NULL;
 
                        /* IPv6 strictly inhibits using not link-local
                           addresses as nexthop address.
@@ -1952,7 +1983,12 @@ static struct rt6_info *ip6_route_info_create(struct fib6_config *cfg)
                        if (!(gwa_type & IPV6_ADDR_UNICAST))
                                goto out;
 
-                       grt = rt6_lookup(net, gw_addr, NULL, cfg->fc_ifindex, 1);
+                       if (cfg->fc_table)
+                               grt = ip6_nh_lookup_table(net, cfg, gw_addr);
+
+                       if (!grt)
+                               grt = rt6_lookup(net, gw_addr, NULL,
+                                                cfg->fc_ifindex, 1);
 
                        err = -EHOSTUNREACH;
                        if (!grt)
index aab91fa..59c4839 100644
@@ -155,11 +155,11 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
 
        mss = __cookie_v6_check(ipv6_hdr(skb), th, cookie);
        if (mss == 0) {
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SYNCOOKIESFAILED);
+               __NET_INC_STATS(sock_net(sk), LINUX_MIB_SYNCOOKIESFAILED);
                goto out;
        }
 
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SYNCOOKIESRECV);
+       __NET_INC_STATS(sock_net(sk), LINUX_MIB_SYNCOOKIESRECV);
 
        /* check for timestamp cookie support */
        memset(&tcp_opt, 0, sizeof(tcp_opt));
index 800265c..7bdc9c9 100644
@@ -336,8 +336,8 @@ static void tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
                                        skb->dev->ifindex);
 
        if (!sk) {
-               ICMP6_INC_STATS_BH(net, __in6_dev_get(skb->dev),
-                                  ICMP6_MIB_INERRORS);
+               __ICMP6_INC_STATS(net, __in6_dev_get(skb->dev),
+                                 ICMP6_MIB_INERRORS);
                return;
        }
 
@@ -352,13 +352,13 @@ static void tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
 
        bh_lock_sock(sk);
        if (sock_owned_by_user(sk) && type != ICMPV6_PKT_TOOBIG)
-               NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS);
+               __NET_INC_STATS(net, LINUX_MIB_LOCKDROPPEDICMPS);
 
        if (sk->sk_state == TCP_CLOSE)
                goto out;
 
        if (ipv6_hdr(skb)->hop_limit < inet6_sk(sk)->min_hopcount) {
-               NET_INC_STATS_BH(net, LINUX_MIB_TCPMINTTLDROP);
+               __NET_INC_STATS(net, LINUX_MIB_TCPMINTTLDROP);
                goto out;
        }
 
@@ -368,7 +368,7 @@ static void tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
        snd_una = fastopen ? tcp_rsk(fastopen)->snt_isn : tp->snd_una;
        if (sk->sk_state != TCP_LISTEN &&
            !between(seq, snd_una, tp->snd_nxt)) {
-               NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS);
+               __NET_INC_STATS(net, LINUX_MIB_OUTOFWINDOWICMPS);
                goto out;
        }
 
@@ -649,12 +649,12 @@ static bool tcp_v6_inbound_md5_hash(const struct sock *sk,
                return false;
 
        if (hash_expected && !hash_location) {
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPMD5NOTFOUND);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMD5NOTFOUND);
                return true;
        }
 
        if (!hash_expected && hash_location) {
-               NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPMD5UNEXPECTED);
+               NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMD5UNEXPECTED);
                return true;
        }
 
@@ -825,9 +825,9 @@ static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32
        if (!IS_ERR(dst)) {
                skb_dst_set(buff, dst);
                ip6_xmit(ctl_sk, buff, &fl6, NULL, tclass);
-               TCP_INC_STATS_BH(net, TCP_MIB_OUTSEGS);
+               TCP_INC_STATS(net, TCP_MIB_OUTSEGS);
                if (rst)
-                       TCP_INC_STATS_BH(net, TCP_MIB_OUTRSTS);
+                       TCP_INC_STATS(net, TCP_MIB_OUTRSTS);
                return;
        }
 
@@ -1165,7 +1165,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
        return newsk;
 
 out_overflow:
-       NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
+       __NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
 out_nonewsk:
        dst_release(dst);
 out:
@@ -1276,8 +1276,8 @@ discard:
        kfree_skb(skb);
        return 0;
 csum_err:
-       TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_CSUMERRORS);
-       TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_INERRS);
+       TCP_INC_STATS(sock_net(sk), TCP_MIB_CSUMERRORS);
+       TCP_INC_STATS(sock_net(sk), TCP_MIB_INERRS);
        goto discard;
 
 
@@ -1359,7 +1359,7 @@ static int tcp_v6_rcv(struct sk_buff *skb)
        /*
         *      Count it even if it's bad.
         */
-       TCP_INC_STATS_BH(net, TCP_MIB_INSEGS);
+       __TCP_INC_STATS(net, TCP_MIB_INSEGS);
 
        if (!pskb_may_pull(skb, sizeof(struct tcphdr)))
                goto discard_it;
@@ -1421,7 +1421,7 @@ process:
                }
        }
        if (hdr->hop_limit < inet6_sk(sk)->min_hopcount) {
-               NET_INC_STATS_BH(net, LINUX_MIB_TCPMINTTLDROP);
+               __NET_INC_STATS(net, LINUX_MIB_TCPMINTTLDROP);
                goto discard_and_relse;
        }
 
@@ -1454,7 +1454,7 @@ process:
        } else if (unlikely(sk_add_backlog(sk, skb,
                                           sk->sk_rcvbuf + sk->sk_sndbuf))) {
                bh_unlock_sock(sk);
-               NET_INC_STATS_BH(net, LINUX_MIB_TCPBACKLOGDROP);
+               __NET_INC_STATS(net, LINUX_MIB_TCPBACKLOGDROP);
                goto discard_and_relse;
        }
        bh_unlock_sock(sk);
@@ -1472,9 +1472,9 @@ no_tcp_socket:
 
        if (tcp_checksum_complete(skb)) {
 csum_error:
-               TCP_INC_STATS_BH(net, TCP_MIB_CSUMERRORS);
+               __TCP_INC_STATS(net, TCP_MIB_CSUMERRORS);
 bad_packet:
-               TCP_INC_STATS_BH(net, TCP_MIB_INERRS);
+               __TCP_INC_STATS(net, TCP_MIB_INERRS);
        } else {
                tcp_v6_send_reset(NULL, skb);
        }
index 8d8b2cd..aca0609 100644
@@ -423,24 +423,22 @@ try_again:
                if (!peeked) {
                        atomic_inc(&sk->sk_drops);
                        if (is_udp4)
-                               UDP_INC_STATS_USER(sock_net(sk),
-                                                  UDP_MIB_INERRORS,
-                                                  is_udplite);
+                               UDP_INC_STATS(sock_net(sk), UDP_MIB_INERRORS,
+                                             is_udplite);
                        else
-                               UDP6_INC_STATS_USER(sock_net(sk),
-                                                   UDP_MIB_INERRORS,
-                                                   is_udplite);
+                               UDP6_INC_STATS(sock_net(sk), UDP_MIB_INERRORS,
+                                              is_udplite);
                }
                skb_free_datagram_locked(sk, skb);
                return err;
        }
        if (!peeked) {
                if (is_udp4)
-                       UDP_INC_STATS_USER(sock_net(sk),
-                                       UDP_MIB_INDATAGRAMS, is_udplite);
+                       UDP_INC_STATS(sock_net(sk), UDP_MIB_INDATAGRAMS,
+                                     is_udplite);
                else
-                       UDP6_INC_STATS_USER(sock_net(sk),
-                                       UDP_MIB_INDATAGRAMS, is_udplite);
+                       UDP6_INC_STATS(sock_net(sk), UDP_MIB_INDATAGRAMS,
+                                      is_udplite);
        }
 
        sock_recv_ts_and_drops(msg, sk, skb);
@@ -487,15 +485,15 @@ csum_copy_err:
        slow = lock_sock_fast(sk);
        if (!skb_kill_datagram(sk, skb, flags)) {
                if (is_udp4) {
-                       UDP_INC_STATS_USER(sock_net(sk),
-                                       UDP_MIB_CSUMERRORS, is_udplite);
-                       UDP_INC_STATS_USER(sock_net(sk),
-                                       UDP_MIB_INERRORS, is_udplite);
+                       UDP_INC_STATS(sock_net(sk),
+                                     UDP_MIB_CSUMERRORS, is_udplite);
+                       UDP_INC_STATS(sock_net(sk),
+                                     UDP_MIB_INERRORS, is_udplite);
                } else {
-                       UDP6_INC_STATS_USER(sock_net(sk),
-                                       UDP_MIB_CSUMERRORS, is_udplite);
-                       UDP6_INC_STATS_USER(sock_net(sk),
-                                       UDP_MIB_INERRORS, is_udplite);
+                       UDP6_INC_STATS(sock_net(sk),
+                                      UDP_MIB_CSUMERRORS, is_udplite);
+                       UDP6_INC_STATS(sock_net(sk),
+                                      UDP_MIB_INERRORS, is_udplite);
                }
        }
        unlock_sock_fast(sk, slow);
@@ -523,8 +521,8 @@ void __udp6_lib_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
        sk = __udp6_lib_lookup(net, daddr, uh->dest, saddr, uh->source,
                               inet6_iif(skb), udptable, skb);
        if (!sk) {
-               ICMP6_INC_STATS_BH(net, __in6_dev_get(skb->dev),
-                                  ICMP6_MIB_INERRORS);
+               __ICMP6_INC_STATS(net, __in6_dev_get(skb->dev),
+                                 ICMP6_MIB_INERRORS);
                return;
        }
 
@@ -572,9 +570,9 @@ static int __udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 
                /* Note that an ENOMEM error is charged twice */
                if (rc == -ENOMEM)
-                       UDP6_INC_STATS_BH(sock_net(sk),
-                                       UDP_MIB_RCVBUFERRORS, is_udplite);
-               UDP6_INC_STATS_BH(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
+                       UDP6_INC_STATS(sock_net(sk),
+                                        UDP_MIB_RCVBUFERRORS, is_udplite);
+               UDP6_INC_STATS(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
                kfree_skb(skb);
                return -1;
        }
@@ -630,9 +628,9 @@ int udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 
                        ret = encap_rcv(sk, skb);
                        if (ret <= 0) {
-                               UDP_INC_STATS_BH(sock_net(sk),
-                                                UDP_MIB_INDATAGRAMS,
-                                                is_udplite);
+                               __UDP_INC_STATS(sock_net(sk),
+                                               UDP_MIB_INDATAGRAMS,
+                                               is_udplite);
                                return -ret;
                        }
                }
@@ -666,8 +664,8 @@ int udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 
        udp_csum_pull_header(skb);
        if (sk_rcvqueues_full(sk, sk->sk_rcvbuf)) {
-               UDP6_INC_STATS_BH(sock_net(sk),
-                                 UDP_MIB_RCVBUFERRORS, is_udplite);
+               __UDP6_INC_STATS(sock_net(sk),
+                                UDP_MIB_RCVBUFERRORS, is_udplite);
                goto drop;
        }
 
@@ -686,9 +684,9 @@ int udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
        return rc;
 
 csum_error:
-       UDP6_INC_STATS_BH(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite);
+       __UDP6_INC_STATS(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite);
 drop:
-       UDP6_INC_STATS_BH(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
+       __UDP6_INC_STATS(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
        atomic_inc(&sk->sk_drops);
        kfree_skb(skb);
        return -1;
@@ -771,10 +769,10 @@ start_lookup:
                nskb = skb_clone(skb, GFP_ATOMIC);
                if (unlikely(!nskb)) {
                        atomic_inc(&sk->sk_drops);
-                       UDP6_INC_STATS_BH(net, UDP_MIB_RCVBUFERRORS,
-                                         IS_UDPLITE(sk));
-                       UDP6_INC_STATS_BH(net, UDP_MIB_INERRORS,
-                                         IS_UDPLITE(sk));
+                       __UDP6_INC_STATS(net, UDP_MIB_RCVBUFERRORS,
+                                        IS_UDPLITE(sk));
+                       __UDP6_INC_STATS(net, UDP_MIB_INERRORS,
+                                        IS_UDPLITE(sk));
                        continue;
                }
 
@@ -793,8 +791,8 @@ start_lookup:
                        consume_skb(skb);
        } else {
                kfree_skb(skb);
-               UDP6_INC_STATS_BH(net, UDP_MIB_IGNOREDMULTI,
-                                 proto == IPPROTO_UDPLITE);
+               __UDP6_INC_STATS(net, UDP_MIB_IGNOREDMULTI,
+                                proto == IPPROTO_UDPLITE);
        }
        return 0;
 }
@@ -887,7 +885,7 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
        if (udp_lib_checksum_complete(skb))
                goto csum_error;
 
-       UDP6_INC_STATS_BH(net, UDP_MIB_NOPORTS, proto == IPPROTO_UDPLITE);
+       __UDP6_INC_STATS(net, UDP_MIB_NOPORTS, proto == IPPROTO_UDPLITE);
        icmpv6_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_PORT_UNREACH, 0);
 
        kfree_skb(skb);
@@ -901,9 +899,9 @@ short_packet:
                            daddr, ntohs(uh->dest));
        goto discard;
 csum_error:
-       UDP6_INC_STATS_BH(net, UDP_MIB_CSUMERRORS, proto == IPPROTO_UDPLITE);
+       __UDP6_INC_STATS(net, UDP_MIB_CSUMERRORS, proto == IPPROTO_UDPLITE);
 discard:
-       UDP6_INC_STATS_BH(net, UDP_MIB_INERRORS, proto == IPPROTO_UDPLITE);
+       __UDP6_INC_STATS(net, UDP_MIB_INERRORS, proto == IPPROTO_UDPLITE);
        kfree_skb(skb);
        return 0;
 }
@@ -1015,13 +1013,14 @@ send:
        err = ip6_send_skb(skb);
        if (err) {
                if (err == -ENOBUFS && !inet6_sk(sk)->recverr) {
-                       UDP6_INC_STATS_USER(sock_net(sk),
-                                           UDP_MIB_SNDBUFERRORS, is_udplite);
+                       UDP6_INC_STATS(sock_net(sk),
+                                      UDP_MIB_SNDBUFERRORS, is_udplite);
                        err = 0;
                }
-       } else
-               UDP6_INC_STATS_USER(sock_net(sk),
-                                   UDP_MIB_OUTDATAGRAMS, is_udplite);
+       } else {
+               UDP6_INC_STATS(sock_net(sk),
+                              UDP_MIB_OUTDATAGRAMS, is_udplite);
+       }
        return err;
 }
 
@@ -1065,11 +1064,9 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
        struct ip6_flowlabel *flowlabel = NULL;
        struct flowi6 fl6;
        struct dst_entry *dst;
+       struct ipcm6_cookie ipc6;
        int addr_len = msg->msg_namelen;
        int ulen = len;
-       int hlimit = -1;
-       int tclass = -1;
-       int dontfrag = -1;
        int corkreq = up->corkflag || msg->msg_flags&MSG_MORE;
        int err;
        int connected = 0;
@@ -1077,6 +1074,10 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
        int (*getfrag)(void *, char *, int, int, int, struct sk_buff *);
        struct sockcm_cookie sockc;
 
+       ipc6.hlimit = -1;
+       ipc6.tclass = -1;
+       ipc6.dontfrag = -1;
+
        /* destination address check */
        if (sin6) {
                if (addr_len < offsetof(struct sockaddr, sa_data))
@@ -1201,10 +1202,9 @@ do_udp_sendmsg:
                opt = &opt_space;
                memset(opt, 0, sizeof(struct ipv6_txoptions));
                opt->tot_len = sizeof(*opt);
+               ipc6.opt = opt;
 
-               err = ip6_datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
-                                           &hlimit, &tclass, &dontfrag,
-                                           &sockc);
+               err = ip6_datagram_send_ctl(sock_net(sk), sk, msg, &fl6, &ipc6, &sockc);
                if (err < 0) {
                        fl6_sock_release(flowlabel);
                        return err;
@@ -1225,6 +1225,7 @@ do_udp_sendmsg:
        if (flowlabel)
                opt = fl6_merge_options(&opt_space, flowlabel, opt);
        opt = ipv6_fixup_options(&opt_space, opt);
+       ipc6.opt = opt;
 
        fl6.flowi6_proto = sk->sk_protocol;
        if (!ipv6_addr_any(daddr))
@@ -1254,11 +1255,11 @@ do_udp_sendmsg:
                goto out;
        }
 
-       if (hlimit < 0)
-               hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
+       if (ipc6.hlimit < 0)
+               ipc6.hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
 
-       if (tclass < 0)
-               tclass = np->tclass;
+       if (ipc6.tclass < 0)
+               ipc6.tclass = np->tclass;
 
        if (msg->msg_flags&MSG_CONFIRM)
                goto do_confirm;
@@ -1269,9 +1270,9 @@ back_from_confirm:
                struct sk_buff *skb;
 
                skb = ip6_make_skb(sk, getfrag, msg, ulen,
-                                  sizeof(struct udphdr), hlimit, tclass, opt,
+                                  sizeof(struct udphdr), &ipc6,
                                   &fl6, (struct rt6_info *)dst,
-                                  msg->msg_flags, dontfrag, &sockc);
+                                  msg->msg_flags, &sockc);
                err = PTR_ERR(skb);
                if (!IS_ERR_OR_NULL(skb))
                        err = udp_v6_send_skb(skb, &fl6);
@@ -1292,14 +1293,12 @@ back_from_confirm:
        up->pending = AF_INET6;
 
 do_append_data:
-       if (dontfrag < 0)
-               dontfrag = np->dontfrag;
+       if (ipc6.dontfrag < 0)
+               ipc6.dontfrag = np->dontfrag;
        up->len += ulen;
-       err = ip6_append_data(sk, getfrag, msg, ulen,
-               sizeof(struct udphdr), hlimit, tclass, opt, &fl6,
-               (struct rt6_info *)dst,
-               corkreq ? msg->msg_flags|MSG_MORE : msg->msg_flags, dontfrag,
-               &sockc);
+       err = ip6_append_data(sk, getfrag, msg, ulen, sizeof(struct udphdr),
+                             &ipc6, &fl6, (struct rt6_info *)dst,
+                             corkreq ? msg->msg_flags|MSG_MORE : msg->msg_flags, &sockc);
        if (err)
                udp_v6_flush_pending_frames(sk);
        else if (!corkreq)
@@ -1342,8 +1341,8 @@ out:
         * seems like overkill.
         */
        if (err == -ENOBUFS || test_bit(SOCK_NOSPACE, &sk->sk_socket->flags)) {
-               UDP6_INC_STATS_USER(sock_net(sk),
-                               UDP_MIB_SNDBUFERRORS, is_udplite);
+               UDP6_INC_STATS(sock_net(sk),
+                              UDP_MIB_SNDBUFERRORS, is_udplite);
        }
        return err;
 
index fcfbe57..d8b7267 100644
@@ -181,7 +181,7 @@ static netdev_tx_t irlan_eth_xmit(struct sk_buff *skb,
                skb = new_skb;
        }
 
-       dev->trans_start = jiffies;
+       netif_trans_update(dev);
 
        len = skb->len;
        /* Now queue the packet in the transport layer */
index afca2eb..6edfa99 100644
@@ -1376,9 +1376,9 @@ static int l2tp_tunnel_sock_create(struct net *net,
                        memcpy(&udp_conf.peer_ip6, cfg->peer_ip6,
                               sizeof(udp_conf.peer_ip6));
                        udp_conf.use_udp6_tx_checksums =
-                           cfg->udp6_zero_tx_checksums;
+                          !cfg->udp6_zero_tx_checksums;
                        udp_conf.use_udp6_rx_checksums =
-                           cfg->udp6_zero_rx_checksums;
+                          !cfg->udp6_zero_rx_checksums;
                } else
 #endif
                {
index 46e0726..c6f5df1 100644
@@ -495,10 +495,8 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
        struct dst_entry *dst = NULL;
        struct flowi6 fl6;
        struct sockcm_cookie sockc_unused = {0};
+       struct ipcm6_cookie ipc6;
        int addr_len = msg->msg_namelen;
-       int hlimit = -1;
-       int tclass = -1;
-       int dontfrag = -1;
        int transhdrlen = 4; /* zero session-id */
        int ulen = len + transhdrlen;
        int err;
@@ -520,6 +518,10 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 
        fl6.flowi6_mark = sk->sk_mark;
 
+       ipc6.hlimit = -1;
+       ipc6.tclass = -1;
+       ipc6.dontfrag = -1;
+
        if (lsa) {
                if (addr_len < SIN6_LEN_RFC2133)
                        return -EINVAL;
@@ -564,11 +566,11 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
                opt = &opt_space;
                memset(opt, 0, sizeof(struct ipv6_txoptions));
                opt->tot_len = sizeof(struct ipv6_txoptions);
+               ipc6.opt = opt;
 
-                err = ip6_datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
-                                            &hlimit, &tclass, &dontfrag,
-                                            &sockc_unused);
-                if (err < 0) {
+               err = ip6_datagram_send_ctl(sock_net(sk), sk, msg, &fl6, &ipc6,
+                                           &sockc_unused);
+               if (err < 0) {
                        fl6_sock_release(flowlabel);
                        return err;
                }
@@ -588,6 +590,7 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
        if (flowlabel)
                opt = fl6_merge_options(&opt_space, flowlabel, opt);
        opt = ipv6_fixup_options(&opt_space, opt);
+       ipc6.opt = opt;
 
        fl6.flowi6_proto = sk->sk_protocol;
        if (!ipv6_addr_any(daddr))
@@ -612,14 +615,14 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
                goto out;
        }
 
-       if (hlimit < 0)
-               hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
+       if (ipc6.hlimit < 0)
+               ipc6.hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
 
-       if (tclass < 0)
-               tclass = np->tclass;
+       if (ipc6.tclass < 0)
+               ipc6.tclass = np->tclass;
 
-       if (dontfrag < 0)
-               dontfrag = np->dontfrag;
+       if (ipc6.dontfrag < 0)
+               ipc6.dontfrag = np->dontfrag;
 
        if (msg->msg_flags & MSG_CONFIRM)
                goto do_confirm;
@@ -627,9 +630,9 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 back_from_confirm:
        lock_sock(sk);
        err = ip6_append_data(sk, ip_generic_getfrag, msg,
-                             ulen, transhdrlen, hlimit, tclass, opt,
+                             ulen, transhdrlen, &ipc6,
                              &fl6, (struct rt6_info *)dst,
-                             msg->msg_flags, dontfrag, &sockc_unused);
+                             msg->msg_flags, &sockc_unused);
        if (err)
                ip6_flush_pending_frames(sk);
        else if (!(msg->msg_flags & MSG_MORE))
index 24ed2e8..1d02e8d 100644
@@ -346,22 +346,30 @@ static int l2tp_nl_tunnel_send(struct sk_buff *skb, u32 portid, u32 seq, int fla
        if (nest == NULL)
                goto nla_put_failure;
 
-       if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS,
-                   atomic_long_read(&tunnel->stats.tx_packets)) ||
-           nla_put_u64(skb, L2TP_ATTR_TX_BYTES,
-                   atomic_long_read(&tunnel->stats.tx_bytes)) ||
-           nla_put_u64(skb, L2TP_ATTR_TX_ERRORS,
-                   atomic_long_read(&tunnel->stats.tx_errors)) ||
-           nla_put_u64(skb, L2TP_ATTR_RX_PACKETS,
-                   atomic_long_read(&tunnel->stats.rx_packets)) ||
-           nla_put_u64(skb, L2TP_ATTR_RX_BYTES,
-                   atomic_long_read(&tunnel->stats.rx_bytes)) ||
-           nla_put_u64(skb, L2TP_ATTR_RX_SEQ_DISCARDS,
-                   atomic_long_read(&tunnel->stats.rx_seq_discards)) ||
-           nla_put_u64(skb, L2TP_ATTR_RX_OOS_PACKETS,
-                   atomic_long_read(&tunnel->stats.rx_oos_packets)) ||
-           nla_put_u64(skb, L2TP_ATTR_RX_ERRORS,
-                   atomic_long_read(&tunnel->stats.rx_errors)))
+       if (nla_put_u64_64bit(skb, L2TP_ATTR_TX_PACKETS,
+                             atomic_long_read(&tunnel->stats.tx_packets),
+                             L2TP_ATTR_STATS_PAD) ||
+           nla_put_u64_64bit(skb, L2TP_ATTR_TX_BYTES,
+                             atomic_long_read(&tunnel->stats.tx_bytes),
+                             L2TP_ATTR_STATS_PAD) ||
+           nla_put_u64_64bit(skb, L2TP_ATTR_TX_ERRORS,
+                             atomic_long_read(&tunnel->stats.tx_errors),
+                             L2TP_ATTR_STATS_PAD) ||
+           nla_put_u64_64bit(skb, L2TP_ATTR_RX_PACKETS,
+                             atomic_long_read(&tunnel->stats.rx_packets),
+                             L2TP_ATTR_STATS_PAD) ||
+           nla_put_u64_64bit(skb, L2TP_ATTR_RX_BYTES,
+                             atomic_long_read(&tunnel->stats.rx_bytes),
+                             L2TP_ATTR_STATS_PAD) ||
+           nla_put_u64_64bit(skb, L2TP_ATTR_RX_SEQ_DISCARDS,
+                             atomic_long_read(&tunnel->stats.rx_seq_discards),
+                             L2TP_ATTR_STATS_PAD) ||
+           nla_put_u64_64bit(skb, L2TP_ATTR_RX_OOS_PACKETS,
+                             atomic_long_read(&tunnel->stats.rx_oos_packets),
+                             L2TP_ATTR_STATS_PAD) ||
+           nla_put_u64_64bit(skb, L2TP_ATTR_RX_ERRORS,
+                             atomic_long_read(&tunnel->stats.rx_errors),
+                             L2TP_ATTR_STATS_PAD))
                goto nla_put_failure;
        nla_nest_end(skb, nest);
 
@@ -754,22 +762,30 @@ static int l2tp_nl_session_send(struct sk_buff *skb, u32 portid, u32 seq, int fl
        if (nest == NULL)
                goto nla_put_failure;
 
-       if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS,
-               atomic_long_read(&session->stats.tx_packets)) ||
-           nla_put_u64(skb, L2TP_ATTR_TX_BYTES,
-               atomic_long_read(&session->stats.tx_bytes)) ||
-           nla_put_u64(skb, L2TP_ATTR_TX_ERRORS,
-               atomic_long_read(&session->stats.tx_errors)) ||
-           nla_put_u64(skb, L2TP_ATTR_RX_PACKETS,
-               atomic_long_read(&session->stats.rx_packets)) ||
-           nla_put_u64(skb, L2TP_ATTR_RX_BYTES,
-               atomic_long_read(&session->stats.rx_bytes)) ||
-           nla_put_u64(skb, L2TP_ATTR_RX_SEQ_DISCARDS,
-               atomic_long_read(&session->stats.rx_seq_discards)) ||
-           nla_put_u64(skb, L2TP_ATTR_RX_OOS_PACKETS,
-               atomic_long_read(&session->stats.rx_oos_packets)) ||
-           nla_put_u64(skb, L2TP_ATTR_RX_ERRORS,
-               atomic_long_read(&session->stats.rx_errors)))
+       if (nla_put_u64_64bit(skb, L2TP_ATTR_TX_PACKETS,
+                             atomic_long_read(&session->stats.tx_packets),
+                             L2TP_ATTR_STATS_PAD) ||
+           nla_put_u64_64bit(skb, L2TP_ATTR_TX_BYTES,
+                             atomic_long_read(&session->stats.tx_bytes),
+                             L2TP_ATTR_STATS_PAD) ||
+           nla_put_u64_64bit(skb, L2TP_ATTR_TX_ERRORS,
+                             atomic_long_read(&session->stats.tx_errors),
+                             L2TP_ATTR_STATS_PAD) ||
+           nla_put_u64_64bit(skb, L2TP_ATTR_RX_PACKETS,
+                             atomic_long_read(&session->stats.rx_packets),
+                             L2TP_ATTR_STATS_PAD) ||
+           nla_put_u64_64bit(skb, L2TP_ATTR_RX_BYTES,
+                             atomic_long_read(&session->stats.rx_bytes),
+                             L2TP_ATTR_STATS_PAD) ||
+           nla_put_u64_64bit(skb, L2TP_ATTR_RX_SEQ_DISCARDS,
+                             atomic_long_read(&session->stats.rx_seq_discards),
+                             L2TP_ATTR_STATS_PAD) ||
+           nla_put_u64_64bit(skb, L2TP_ATTR_RX_OOS_PACKETS,
+                             atomic_long_read(&session->stats.rx_oos_packets),
+                             L2TP_ATTR_STATS_PAD) ||
+           nla_put_u64_64bit(skb, L2TP_ATTR_RX_ERRORS,
+                             atomic_long_read(&session->stats.rx_errors),
+                             L2TP_ATTR_STATS_PAD))
                goto nla_put_failure;
        nla_nest_end(skb, nest);
 
index 6a33f0b..c59af3e 100644
@@ -1761,7 +1761,7 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
 
                ret = dev_alloc_name(ndev, ndev->name);
                if (ret < 0) {
-                       free_netdev(ndev);
+                       ieee80211_if_free(ndev);
                        return ret;
                }
 
@@ -1847,7 +1847,7 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
 
                ret = register_netdevice(ndev);
                if (ret) {
-                       free_netdev(ndev);
+                       ieee80211_if_free(ndev);
                        return ret;
                }
        }
index 6794391..c3c809b 100644
@@ -2918,8 +2918,10 @@ static int ip_vs_genl_fill_stats(struct sk_buff *skb, int container_type,
        if (nla_put_u32(skb, IPVS_STATS_ATTR_CONNS, (u32)kstats->conns) ||
            nla_put_u32(skb, IPVS_STATS_ATTR_INPKTS, (u32)kstats->inpkts) ||
            nla_put_u32(skb, IPVS_STATS_ATTR_OUTPKTS, (u32)kstats->outpkts) ||
-           nla_put_u64(skb, IPVS_STATS_ATTR_INBYTES, kstats->inbytes) ||
-           nla_put_u64(skb, IPVS_STATS_ATTR_OUTBYTES, kstats->outbytes) ||
+           nla_put_u64_64bit(skb, IPVS_STATS_ATTR_INBYTES, kstats->inbytes,
+                             IPVS_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, IPVS_STATS_ATTR_OUTBYTES, kstats->outbytes,
+                             IPVS_STATS_ATTR_PAD) ||
            nla_put_u32(skb, IPVS_STATS_ATTR_CPS, (u32)kstats->cps) ||
            nla_put_u32(skb, IPVS_STATS_ATTR_INPPS, (u32)kstats->inpps) ||
            nla_put_u32(skb, IPVS_STATS_ATTR_OUTPPS, (u32)kstats->outpps) ||
@@ -2943,16 +2945,26 @@ static int ip_vs_genl_fill_stats64(struct sk_buff *skb, int container_type,
        if (!nl_stats)
                return -EMSGSIZE;
 
-       if (nla_put_u64(skb, IPVS_STATS_ATTR_CONNS, kstats->conns) ||
-           nla_put_u64(skb, IPVS_STATS_ATTR_INPKTS, kstats->inpkts) ||
-           nla_put_u64(skb, IPVS_STATS_ATTR_OUTPKTS, kstats->outpkts) ||
-           nla_put_u64(skb, IPVS_STATS_ATTR_INBYTES, kstats->inbytes) ||
-           nla_put_u64(skb, IPVS_STATS_ATTR_OUTBYTES, kstats->outbytes) ||
-           nla_put_u64(skb, IPVS_STATS_ATTR_CPS, kstats->cps) ||
-           nla_put_u64(skb, IPVS_STATS_ATTR_INPPS, kstats->inpps) ||
-           nla_put_u64(skb, IPVS_STATS_ATTR_OUTPPS, kstats->outpps) ||
-           nla_put_u64(skb, IPVS_STATS_ATTR_INBPS, kstats->inbps) ||
-           nla_put_u64(skb, IPVS_STATS_ATTR_OUTBPS, kstats->outbps))
+       if (nla_put_u64_64bit(skb, IPVS_STATS_ATTR_CONNS, kstats->conns,
+                             IPVS_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, IPVS_STATS_ATTR_INPKTS, kstats->inpkts,
+                             IPVS_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, IPVS_STATS_ATTR_OUTPKTS, kstats->outpkts,
+                             IPVS_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, IPVS_STATS_ATTR_INBYTES, kstats->inbytes,
+                             IPVS_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, IPVS_STATS_ATTR_OUTBYTES, kstats->outbytes,
+                             IPVS_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, IPVS_STATS_ATTR_CPS, kstats->cps,
+                             IPVS_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, IPVS_STATS_ATTR_INPPS, kstats->inpps,
+                             IPVS_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, IPVS_STATS_ATTR_OUTPPS, kstats->outpps,
+                             IPVS_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, IPVS_STATS_ATTR_INBPS, kstats->inbps,
+                             IPVS_STATS_ATTR_PAD) ||
+           nla_put_u64_64bit(skb, IPVS_STATS_ATTR_OUTBPS, kstats->outbps,
+                             IPVS_STATS_ATTR_PAD))
                goto nla_put_failure;
        nla_nest_end(skb, nl_stats);
 
index 0cc66a4..856bd8d 100644
@@ -738,9 +738,9 @@ static size_t ovs_flow_cmd_msg_size(const struct sw_flow_actions *acts,
                len += nla_total_size(acts->orig_len);
 
        return len
-               + nla_total_size(sizeof(struct ovs_flow_stats)) /* OVS_FLOW_ATTR_STATS */
+               + nla_total_size_64bit(sizeof(struct ovs_flow_stats)) /* OVS_FLOW_ATTR_STATS */
                + nla_total_size(1) /* OVS_FLOW_ATTR_TCP_FLAGS */
-               + nla_total_size(8); /* OVS_FLOW_ATTR_USED */
+               + nla_total_size_64bit(8); /* OVS_FLOW_ATTR_USED */
 }
 
 /* Called with ovs_mutex or RCU read lock. */
@@ -754,11 +754,14 @@ static int ovs_flow_cmd_fill_stats(const struct sw_flow *flow,
        ovs_flow_stats_get(flow, &stats, &used, &tcp_flags);
 
        if (used &&
-           nla_put_u64(skb, OVS_FLOW_ATTR_USED, ovs_flow_used_time(used)))
+           nla_put_u64_64bit(skb, OVS_FLOW_ATTR_USED, ovs_flow_used_time(used),
+                             OVS_FLOW_ATTR_PAD))
                return -EMSGSIZE;
 
        if (stats.n_packets &&
-           nla_put(skb, OVS_FLOW_ATTR_STATS, sizeof(struct ovs_flow_stats), &stats))
+           nla_put_64bit(skb, OVS_FLOW_ATTR_STATS,
+                         sizeof(struct ovs_flow_stats), &stats,
+                         OVS_FLOW_ATTR_PAD))
                return -EMSGSIZE;
 
        if ((u8)ntohs(tcp_flags) &&
@@ -1434,8 +1437,8 @@ static size_t ovs_dp_cmd_msg_size(void)
        size_t msgsize = NLMSG_ALIGN(sizeof(struct ovs_header));
 
        msgsize += nla_total_size(IFNAMSIZ);
-       msgsize += nla_total_size(sizeof(struct ovs_dp_stats));
-       msgsize += nla_total_size(sizeof(struct ovs_dp_megaflow_stats));
+       msgsize += nla_total_size_64bit(sizeof(struct ovs_dp_stats));
+       msgsize += nla_total_size_64bit(sizeof(struct ovs_dp_megaflow_stats));
        msgsize += nla_total_size(sizeof(u32)); /* OVS_DP_ATTR_USER_FEATURES */
 
        return msgsize;
@@ -1462,13 +1465,13 @@ static int ovs_dp_cmd_fill_info(struct datapath *dp, struct sk_buff *skb,
                goto nla_put_failure;
 
        get_dp_stats(dp, &dp_stats, &dp_megaflow_stats);
-       if (nla_put(skb, OVS_DP_ATTR_STATS, sizeof(struct ovs_dp_stats),
-                       &dp_stats))
+       if (nla_put_64bit(skb, OVS_DP_ATTR_STATS, sizeof(struct ovs_dp_stats),
+                         &dp_stats, OVS_DP_ATTR_PAD))
                goto nla_put_failure;
 
-       if (nla_put(skb, OVS_DP_ATTR_MEGAFLOW_STATS,
-                       sizeof(struct ovs_dp_megaflow_stats),
-                       &dp_megaflow_stats))
+       if (nla_put_64bit(skb, OVS_DP_ATTR_MEGAFLOW_STATS,
+                         sizeof(struct ovs_dp_megaflow_stats),
+                         &dp_megaflow_stats, OVS_DP_ATTR_PAD))
                goto nla_put_failure;
 
        if (nla_put_u32(skb, OVS_DP_ATTR_USER_FEATURES, dp->user_features))
@@ -1837,8 +1840,9 @@ static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb,
                goto nla_put_failure;
 
        ovs_vport_get_stats(vport, &vport_stats);
-       if (nla_put(skb, OVS_VPORT_ATTR_STATS, sizeof(struct ovs_vport_stats),
-                   &vport_stats))
+       if (nla_put_64bit(skb, OVS_VPORT_ATTR_STATS,
+                         sizeof(struct ovs_vport_stats), &vport_stats,
+                         OVS_VPORT_ATTR_PAD))
                goto nla_put_failure;
 
        if (ovs_vport_get_upcall_portids(vport, skb))
diff --git a/net/qrtr/Kconfig b/net/qrtr/Kconfig
new file mode 100644 (file)
index 0000000..673fd1f
--- /dev/null
@@ -0,0 +1,24 @@
+# Qualcomm IPC Router configuration
+#
+
+config QRTR
+       tristate "Qualcomm IPC Router support"
+       depends on ARCH_QCOM || COMPILE_TEST
+       ---help---
+         Say Y if you intend to use the Qualcomm IPC Router protocol.  The
+         protocol is used to communicate with services provided by other
+         hardware blocks in the system.
+
+         In order to do service lookups, a userspace daemon is required to
+         maintain a service listing.
+
+if QRTR
+
+config QRTR_SMD
+       tristate "SMD IPC Router channels"
+       depends on QCOM_SMD || COMPILE_TEST
+       ---help---
+         Say Y here to support SMD-based IPC Router channels.  SMD is the
+         most common transport for IPC Router.
+
+endif # QRTR
diff --git a/net/qrtr/Makefile b/net/qrtr/Makefile
new file mode 100644 (file)
index 0000000..6c00dc6
--- /dev/null
@@ -0,0 +1,2 @@
+obj-$(CONFIG_QRTR) := qrtr.o
+obj-$(CONFIG_QRTR_SMD) += smd.o
diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
new file mode 100644 (file)
index 0000000..c985ecb
--- /dev/null
@@ -0,0 +1,1007 @@
+/*
+ * Copyright (c) 2015, Sony Mobile Communications Inc.
+ * Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#include <linux/module.h>
+#include <linux/netlink.h>
+#include <linux/qrtr.h>
+#include <linux/termios.h>     /* For TIOCINQ/OUTQ */
+
+#include <net/sock.h>
+
+#include "qrtr.h"
+
+#define QRTR_PROTO_VER 1
+
+/* auto-bind range */
+#define QRTR_MIN_EPH_SOCKET 0x4000
+#define QRTR_MAX_EPH_SOCKET 0x7fff
+
+enum qrtr_pkt_type {
+       QRTR_TYPE_DATA          = 1,
+       QRTR_TYPE_HELLO         = 2,
+       QRTR_TYPE_BYE           = 3,
+       QRTR_TYPE_NEW_SERVER    = 4,
+       QRTR_TYPE_DEL_SERVER    = 5,
+       QRTR_TYPE_DEL_CLIENT    = 6,
+       QRTR_TYPE_RESUME_TX     = 7,
+       QRTR_TYPE_EXIT          = 8,
+       QRTR_TYPE_PING          = 9,
+};
+
+/**
+ * struct qrtr_hdr - (I|R)PCrouter packet header
+ * @version: protocol version
+ * @type: packet type; one of QRTR_TYPE_*
+ * @src_node_id: source node
+ * @src_port_id: source port
+ * @confirm_rx: boolean; whether a resume-tx packet should be sent in reply
+ * @size: length of packet, excluding this header
+ * @dst_node_id: destination node
+ * @dst_port_id: destination port
+ */
+struct qrtr_hdr {
+       __le32 version;
+       __le32 type;
+       __le32 src_node_id;
+       __le32 src_port_id;
+       __le32 confirm_rx;
+       __le32 size;
+       __le32 dst_node_id;
+       __le32 dst_port_id;
+} __packed;
+
+#define QRTR_HDR_SIZE sizeof(struct qrtr_hdr)
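+/* well-known addresses: node -1 is broadcast, port -2 is the control port */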
+#define QRTR_NODE_BCAST ((unsigned int)-1)
+#define QRTR_PORT_CTRL ((unsigned int)-2)
+
+struct qrtr_sock {
+       /* WARNING: sk must be the first member */
+       struct sock sk;
+       struct sockaddr_qrtr us;
+       struct sockaddr_qrtr peer;
+};
+
+static inline struct qrtr_sock *qrtr_sk(struct sock *sk)
+{
+       BUILD_BUG_ON(offsetof(struct qrtr_sock, sk) != 0);
+       return container_of(sk, struct qrtr_sock, sk);
+}
+
+static unsigned int qrtr_local_nid = -1;
+
+/* for node ids */
+static RADIX_TREE(qrtr_nodes, GFP_KERNEL);
+/* broadcast list */
+static LIST_HEAD(qrtr_all_nodes);
+/* lock for qrtr_nodes, qrtr_all_nodes and node reference */
+static DEFINE_MUTEX(qrtr_node_lock);
+
+/* local port allocation management */
+static DEFINE_IDR(qrtr_ports);
+static DEFINE_MUTEX(qrtr_port_lock);
+
+/**
+ * struct qrtr_node - endpoint node
+ * @ep_lock: lock for endpoint management and callbacks
+ * @ep: endpoint
+ * @ref: reference count for node
+ * @nid: node id
+ * @rx_queue: receive queue
+ * @work: scheduled work struct for recv work
+ * @item: list item for broadcast list
+ */
+struct qrtr_node {
+       struct mutex ep_lock;
+       struct qrtr_endpoint *ep;
+       struct kref ref;
+       unsigned int nid;
+
+       struct sk_buff_head rx_queue;
+       struct work_struct work;
+       struct list_head item;
+};
+
+/* Release node resources and free the node.
+ *
+ * Do not call directly, use qrtr_node_release.  To be used with
+ * kref_put_mutex.  As such, the node mutex is expected to be locked on call.
+ */
+static void __qrtr_node_release(struct kref *kref)
+{
+       struct qrtr_node *node = container_of(kref, struct qrtr_node, ref);
+
+       if (node->nid != QRTR_EP_NID_AUTO)
+               radix_tree_delete(&qrtr_nodes, node->nid);
+
+       list_del(&node->item);
+       mutex_unlock(&qrtr_node_lock);
+
+       skb_queue_purge(&node->rx_queue);
+       kfree(node);
+}
+
+/* Increment reference to node. */
+static struct qrtr_node *qrtr_node_acquire(struct qrtr_node *node)
+{
+       if (node)
+               kref_get(&node->ref);
+       return node;
+}
+
+/* Decrement reference to node and release as necessary. */
+static void qrtr_node_release(struct qrtr_node *node)
+{
+       if (!node)
+               return;
+       kref_put_mutex(&node->ref, __qrtr_node_release, &qrtr_node_lock);
+}
+
+/* Pass an outgoing packet socket buffer to the endpoint driver. */
+static int qrtr_node_enqueue(struct qrtr_node *node, struct sk_buff *skb)
+{
+       int rc = -ENODEV;
+
+       mutex_lock(&node->ep_lock);
+       if (node->ep)
+               rc = node->ep->xmit(node->ep, skb);
+       else
+               kfree_skb(skb);
+       mutex_unlock(&node->ep_lock);
+
+       return rc;
+}
+
+/* Lookup node by id.
+ *
+ * Callers must release with qrtr_node_release()
+ */
+static struct qrtr_node *qrtr_node_lookup(unsigned int nid)
+{
+       struct qrtr_node *node;
+
+       mutex_lock(&qrtr_node_lock);
+       node = radix_tree_lookup(&qrtr_nodes, nid);
+       node = qrtr_node_acquire(node);
+       mutex_unlock(&qrtr_node_lock);
+
+       return node;
+}
+
+/* Assign node id to node.
+ *
+ * This is mostly useful for automatic node id assignment, based on
+ * the source id in the incoming packet.
+ */
+static void qrtr_node_assign(struct qrtr_node *node, unsigned int nid)
+{
+       if (node->nid != QRTR_EP_NID_AUTO || nid == QRTR_EP_NID_AUTO)
+               return;
+
+       mutex_lock(&qrtr_node_lock);
+       radix_tree_insert(&qrtr_nodes, nid, node);
+       node->nid = nid;
+       mutex_unlock(&qrtr_node_lock);
+}
+
+/**
+ * qrtr_endpoint_post() - post incoming data
+ * @ep: endpoint handle
+ * @data: data pointer
+ * @len: size of data in bytes
+ *
+ * Return: 0 on success; negative error code on failure
+ */
+int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+{
+       struct qrtr_node *node = ep->node;
+       const struct qrtr_hdr *phdr = data;
+       struct sk_buff *skb;
+       unsigned int psize;
+       unsigned int size;
+       unsigned int type;
+       unsigned int ver;
+       unsigned int dst;
+
+       if (len < QRTR_HDR_SIZE || len & 3)
+               return -EINVAL;
+
+       ver = le32_to_cpu(phdr->version);
+       size = le32_to_cpu(phdr->size);
+       type = le32_to_cpu(phdr->type);
+       dst = le32_to_cpu(phdr->dst_port_id);
+
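+       /* payloads are padded out to the next 32-bit boundary on the wire */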
+       psize = (size + 3) & ~3;
+
+       if (ver != QRTR_PROTO_VER)
+               return -EINVAL;
+
+       if (len != psize + QRTR_HDR_SIZE)
+               return -EINVAL;
+
+       if (dst != QRTR_PORT_CTRL && type != QRTR_TYPE_DATA)
+               return -EINVAL;
+
+       skb = netdev_alloc_skb(NULL, len);
+       if (!skb)
+               return -ENOMEM;
+
+       skb_reset_transport_header(skb);
+       memcpy(skb_put(skb, len), data, len);
+
+       skb_queue_tail(&node->rx_queue, skb);
+       schedule_work(&node->work);
+
+       return 0;
+}
+EXPORT_SYMBOL_GPL(qrtr_endpoint_post);
+
+/* Allocate and construct a resume-tx packet. */
+static struct sk_buff *qrtr_alloc_resume_tx(u32 src_node,
+                                           u32 dst_node, u32 port)
+{
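+       /* the payload is five 32-bit words: type, node and port; rest zeroed */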
+       const int pkt_len = 20;
+       struct qrtr_hdr *hdr;
+       struct sk_buff *skb;
+       __le32 *buf;
+
+       skb = alloc_skb(QRTR_HDR_SIZE + pkt_len, GFP_KERNEL);
+       if (!skb)
+               return NULL;
+       skb_reset_transport_header(skb);
+
+       hdr = (struct qrtr_hdr *)skb_put(skb, QRTR_HDR_SIZE);
+       hdr->version = cpu_to_le32(QRTR_PROTO_VER);
+       hdr->type = cpu_to_le32(QRTR_TYPE_RESUME_TX);
+       hdr->src_node_id = cpu_to_le32(src_node);
+       hdr->src_port_id = cpu_to_le32(QRTR_PORT_CTRL);
+       hdr->confirm_rx = cpu_to_le32(0);
+       hdr->size = cpu_to_le32(pkt_len);
+       hdr->dst_node_id = cpu_to_le32(dst_node);
+       hdr->dst_port_id = cpu_to_le32(QRTR_PORT_CTRL);
+
+       buf = (__le32 *)skb_put(skb, pkt_len);
+       memset(buf, 0, pkt_len);
+       buf[0] = cpu_to_le32(QRTR_TYPE_RESUME_TX);
+       buf[1] = cpu_to_le32(src_node);
+       buf[2] = cpu_to_le32(port);
+
+       return skb;
+}
+
+static struct qrtr_sock *qrtr_port_lookup(int port);
+static void qrtr_port_put(struct qrtr_sock *ipc);
+
+/* Handle and route a received packet.
+ *
+ * This will auto-reply with resume-tx packet as necessary.
+ */
+static void qrtr_node_rx_work(struct work_struct *work)
+{
+       struct qrtr_node *node = container_of(work, struct qrtr_node, work);
+       struct sk_buff *skb;
+
+       while ((skb = skb_dequeue(&node->rx_queue)) != NULL) {
+               const struct qrtr_hdr *phdr;
+               u32 dst_node, dst_port;
+               struct qrtr_sock *ipc;
+               u32 src_node;
+               int confirm;
+
+               phdr = (const struct qrtr_hdr *)skb_transport_header(skb);
+               src_node = le32_to_cpu(phdr->src_node_id);
+               dst_node = le32_to_cpu(phdr->dst_node_id);
+               dst_port = le32_to_cpu(phdr->dst_port_id);
+               confirm = !!phdr->confirm_rx;
+
+               qrtr_node_assign(node, src_node);
+
+               ipc = qrtr_port_lookup(dst_port);
+               if (!ipc) {
+                       kfree_skb(skb);
+               } else {
+                       if (sock_queue_rcv_skb(&ipc->sk, skb))
+                               kfree_skb(skb);
+
+                       qrtr_port_put(ipc);
+               }
+
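+               /* sender asked for flow confirmation; send a resume-tx reply */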
+               if (confirm) {
+                       skb = qrtr_alloc_resume_tx(dst_node, node->nid, dst_port);
+                       if (!skb)
+                               break;
+                       if (qrtr_node_enqueue(node, skb))
+                               break;
+               }
+       }
+}
+
+/**
+ * qrtr_endpoint_register() - register a new endpoint
+ * @ep: endpoint to register
+ * @nid: desired node id; may be QRTR_EP_NID_AUTO for auto-assignment
+ * Return: 0 on success; negative error code on failure
+ *
+ * The specified endpoint must have the xmit function pointer set on call.
+ */
+int qrtr_endpoint_register(struct qrtr_endpoint *ep, unsigned int nid)
+{
+       struct qrtr_node *node;
+
+       if (!ep || !ep->xmit)
+               return -EINVAL;
+
+       node = kzalloc(sizeof(*node), GFP_KERNEL);
+       if (!node)
+               return -ENOMEM;
+
+       INIT_WORK(&node->work, qrtr_node_rx_work);
+       kref_init(&node->ref);
+       mutex_init(&node->ep_lock);
+       skb_queue_head_init(&node->rx_queue);
+       node->nid = QRTR_EP_NID_AUTO;
+       node->ep = ep;
+
+       qrtr_node_assign(node, nid);
+
+       mutex_lock(&qrtr_node_lock);
+       list_add(&node->item, &qrtr_all_nodes);
+       mutex_unlock(&qrtr_node_lock);
+       ep->node = node;
+
+       return 0;
+}
+EXPORT_SYMBOL_GPL(qrtr_endpoint_register);
+
+/**
+ * qrtr_endpoint_unregister - unregister endpoint
+ * @ep: endpoint to unregister
+ */
+void qrtr_endpoint_unregister(struct qrtr_endpoint *ep)
+{
+       struct qrtr_node *node = ep->node;
+
+       mutex_lock(&node->ep_lock);
+       node->ep = NULL;
+       mutex_unlock(&node->ep_lock);
+
+       qrtr_node_release(node);
+       ep->node = NULL;
+}
+EXPORT_SYMBOL_GPL(qrtr_endpoint_unregister);
+
+/* Lookup socket by port.
+ *
+ * Callers must release with qrtr_port_put()
+ */
+static struct qrtr_sock *qrtr_port_lookup(int port)
+{
+       struct qrtr_sock *ipc;
+
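+       /* the control port is stored at index 0 of the port idr */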
+       if (port == QRTR_PORT_CTRL)
+               port = 0;
+
+       mutex_lock(&qrtr_port_lock);
+       ipc = idr_find(&qrtr_ports, port);
+       if (ipc)
+               sock_hold(&ipc->sk);
+       mutex_unlock(&qrtr_port_lock);
+
+       return ipc;
+}
+
+/* Release acquired socket. */
+static void qrtr_port_put(struct qrtr_sock *ipc)
+{
+       sock_put(&ipc->sk);
+}
+
+/* Remove port assignment. */
+static void qrtr_port_remove(struct qrtr_sock *ipc)
+{
+       int port = ipc->us.sq_port;
+
+       if (port == QRTR_PORT_CTRL)
+               port = 0;
+
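+       /* drop the reference taken in qrtr_port_assign() */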
+       __sock_put(&ipc->sk);
+
+       mutex_lock(&qrtr_port_lock);
+       idr_remove(&qrtr_ports, port);
+       mutex_unlock(&qrtr_port_lock);
+}
+
+/* Assign port number to socket.
+ *
+ * Specify port in the integer pointed to by port, and it will be adjusted
+ * on return as necessary.
+ *
+ * Port may be:
+ *   0: Assign ephemeral port in [QRTR_MIN_EPH_SOCKET, QRTR_MAX_EPH_SOCKET]
+ *   <QRTR_MIN_EPH_SOCKET: Specified; requires CAP_NET_ADMIN
+ *   >=QRTR_MIN_EPH_SOCKET: Specified; available to all
+ */
+static int qrtr_port_assign(struct qrtr_sock *ipc, int *port)
+{
+       int rc;
+
+       mutex_lock(&qrtr_port_lock);
+       if (!*port) {
+               rc = idr_alloc(&qrtr_ports, ipc,
+                              QRTR_MIN_EPH_SOCKET, QRTR_MAX_EPH_SOCKET + 1,
+                              GFP_ATOMIC);
+               if (rc >= 0)
+                       *port = rc;
+       } else if (*port < QRTR_MIN_EPH_SOCKET && !capable(CAP_NET_ADMIN)) {
+               rc = -EACCES;
+       } else if (*port == QRTR_PORT_CTRL) {
+               rc = idr_alloc(&qrtr_ports, ipc, 0, 1, GFP_ATOMIC);
+       } else {
+               rc = idr_alloc(&qrtr_ports, ipc, *port, *port + 1, GFP_ATOMIC);
+               if (rc >= 0)
+                       *port = rc;
+       }
+       mutex_unlock(&qrtr_port_lock);
+
+       if (rc == -ENOSPC)
+               return -EADDRINUSE;
+       else if (rc < 0)
+               return rc;
+
+       sock_hold(&ipc->sk);
+
+       return 0;
+}
+
+/* Bind socket to address.
+ *
+ * Socket should be locked upon call.
+ */
+static int __qrtr_bind(struct socket *sock,
+                      const struct sockaddr_qrtr *addr, int zapped)
+{
+       struct qrtr_sock *ipc = qrtr_sk(sock->sk);
+       struct sock *sk = sock->sk;
+       int port;
+       int rc;
+
+       /* rebinding ok */
+       if (!zapped && addr->sq_port == ipc->us.sq_port)
+               return 0;
+
+       port = addr->sq_port;
+       rc = qrtr_port_assign(ipc, &port);
+       if (rc)
+               return rc;
+
+       /* unbind previous, if any */
+       if (!zapped)
+               qrtr_port_remove(ipc);
+       ipc->us.sq_port = port;
+
+       sock_reset_flag(sk, SOCK_ZAPPED);
+
+       return 0;
+}
+
+/* Auto bind to an ephemeral port. */
+static int qrtr_autobind(struct socket *sock)
+{
+       struct sock *sk = sock->sk;
+       struct sockaddr_qrtr addr;
+
+       if (!sock_flag(sk, SOCK_ZAPPED))
+               return 0;
+
+       addr.sq_family = AF_QIPCRTR;
+       addr.sq_node = qrtr_local_nid;
+       addr.sq_port = 0;
+
+       return __qrtr_bind(sock, &addr, 1);
+}
+
+/* Bind socket to specified sockaddr. */
+static int qrtr_bind(struct socket *sock, struct sockaddr *saddr, int len)
+{
+       DECLARE_SOCKADDR(struct sockaddr_qrtr *, addr, saddr);
+       struct qrtr_sock *ipc = qrtr_sk(sock->sk);
+       struct sock *sk = sock->sk;
+       int rc;
+
+       if (len < sizeof(*addr) || addr->sq_family != AF_QIPCRTR)
+               return -EINVAL;
+
+       if (addr->sq_node != ipc->us.sq_node)
+               return -EINVAL;
+
+       lock_sock(sk);
+       rc = __qrtr_bind(sock, addr, sock_flag(sk, SOCK_ZAPPED));
+       release_sock(sk);
+
+       return rc;
+}
+
+/* Queue packet to local peer socket. */
+static int qrtr_local_enqueue(struct qrtr_node *node, struct sk_buff *skb)
+{
+       const struct qrtr_hdr *phdr;
+       struct qrtr_sock *ipc;
+
+       phdr = (const struct qrtr_hdr *)skb_transport_header(skb);
+
+       ipc = qrtr_port_lookup(le32_to_cpu(phdr->dst_port_id));
+       if (!ipc || &ipc->sk == skb->sk) { /* do not send to self */
+               kfree_skb(skb);
+               return -ENODEV;
+       }
+
+       if (sock_queue_rcv_skb(&ipc->sk, skb)) {
+               qrtr_port_put(ipc);
+               kfree_skb(skb);
+               return -ENOSPC;
+       }
+
+       qrtr_port_put(ipc);
+
+       return 0;
+}
+
+/* Queue packet for broadcast. */
+static int qrtr_bcast_enqueue(struct qrtr_node *node, struct sk_buff *skb)
+{
+       struct sk_buff *skbn;
+
+       mutex_lock(&qrtr_node_lock);
+       list_for_each_entry(node, &qrtr_all_nodes, item) {
+               skbn = skb_clone(skb, GFP_KERNEL);
+               if (!skbn)
+                       break;
+               skb_set_owner_w(skbn, skb->sk);
+               qrtr_node_enqueue(node, skbn);
+       }
+       mutex_unlock(&qrtr_node_lock);
+
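+       /* broadcasts are delivered to local sockets as well */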
+       qrtr_local_enqueue(node, skb);
+
+       return 0;
+}
+
+static int qrtr_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+{
+       DECLARE_SOCKADDR(struct sockaddr_qrtr *, addr, msg->msg_name);
+       int (*enqueue_fn)(struct qrtr_node *, struct sk_buff *);
+       struct qrtr_sock *ipc = qrtr_sk(sock->sk);
+       struct sock *sk = sock->sk;
+       struct qrtr_node *node;
+       struct qrtr_hdr *hdr;
+       struct sk_buff *skb;
+       size_t plen;
+       int rc;
+
+       if (msg->msg_flags & ~(MSG_DONTWAIT))
+               return -EINVAL;
+
+       if (len > 65535)
+               return -EMSGSIZE;
+
+       lock_sock(sk);
+
+       if (addr) {
+               if (msg->msg_namelen < sizeof(*addr)) {
+                       release_sock(sk);
+                       return -EINVAL;
+               }
+
+               if (addr->sq_family != AF_QIPCRTR) {
+                       release_sock(sk);
+                       return -EINVAL;
+               }
+
+               rc = qrtr_autobind(sock);
+               if (rc) {
+                       release_sock(sk);
+                       return rc;
+               }
+       } else if (sk->sk_state == TCP_ESTABLISHED) {
+               addr = &ipc->peer;
+       } else {
+               release_sock(sk);
+               return -ENOTCONN;
+       }
+
+       node = NULL;
+       if (addr->sq_node == QRTR_NODE_BCAST) {
+               enqueue_fn = qrtr_bcast_enqueue;
+       } else if (addr->sq_node == ipc->us.sq_node) {
+               enqueue_fn = qrtr_local_enqueue;
+       } else {
+               enqueue_fn = qrtr_node_enqueue;
+               node = qrtr_node_lookup(addr->sq_node);
+               if (!node) {
+                       release_sock(sk);
+                       return -ECONNRESET;
+               }
+       }
+
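+       /* pad the payload length out to a 32-bit boundary */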
+       plen = (len + 3) & ~3;
+       skb = sock_alloc_send_skb(sk, plen + QRTR_HDR_SIZE,
+                                 msg->msg_flags & MSG_DONTWAIT, &rc);
+       if (!skb)
+               goto out_node;
+
+       skb_reset_transport_header(skb);
+       skb_put(skb, len + QRTR_HDR_SIZE);
+
+       hdr = (struct qrtr_hdr *)skb_transport_header(skb);
+       hdr->version = cpu_to_le32(QRTR_PROTO_VER);
+       hdr->src_node_id = cpu_to_le32(ipc->us.sq_node);
+       hdr->src_port_id = cpu_to_le32(ipc->us.sq_port);
+       hdr->confirm_rx = cpu_to_le32(0);
+       hdr->size = cpu_to_le32(len);
+       hdr->dst_node_id = cpu_to_le32(addr->sq_node);
+       hdr->dst_port_id = cpu_to_le32(addr->sq_port);
+
+       rc = skb_copy_datagram_from_iter(skb, QRTR_HDR_SIZE,
+                                        &msg->msg_iter, len);
+       if (rc) {
+               kfree_skb(skb);
+               goto out_node;
+       }
+
+       if (plen != len) {
+               skb_pad(skb, plen - len);
+               skb_put(skb, plen - len);
+       }
+
+       if (ipc->us.sq_port == QRTR_PORT_CTRL) {
+               if (len < 4) {
+                       rc = -EINVAL;
+                       kfree_skb(skb);
+                       goto out_node;
+               }
+
+               /* control messages already carry their type as the leading 'command' word */
+               skb_copy_bits(skb, QRTR_HDR_SIZE, &hdr->type, 4);
+       } else {
+               hdr->type = cpu_to_le32(QRTR_TYPE_DATA);
+       }
+
+       rc = enqueue_fn(node, skb);
+       if (rc >= 0)
+               rc = len;
+
+out_node:
+       qrtr_node_release(node);
+       release_sock(sk);
+
+       return rc;
+}
+
+static int qrtr_recvmsg(struct socket *sock, struct msghdr *msg,
+                       size_t size, int flags)
+{
+       DECLARE_SOCKADDR(struct sockaddr_qrtr *, addr, msg->msg_name);
+       const struct qrtr_hdr *phdr;
+       struct sock *sk = sock->sk;
+       struct sk_buff *skb;
+       int copied, rc;
+
+       lock_sock(sk);
+
+       if (sock_flag(sk, SOCK_ZAPPED)) {
+               release_sock(sk);
+               return -EADDRNOTAVAIL;
+       }
+
+       skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT,
+                               flags & MSG_DONTWAIT, &rc);
+       if (!skb) {
+               release_sock(sk);
+               return rc;
+       }
+
+       phdr = (const struct qrtr_hdr *)skb_transport_header(skb);
+       copied = le32_to_cpu(phdr->size);
+       if (copied > size) {
+               copied = size;
+               msg->msg_flags |= MSG_TRUNC;
+       }
+
+       rc = skb_copy_datagram_msg(skb, QRTR_HDR_SIZE, msg, copied);
+       if (rc < 0)
+               goto out;
+       rc = copied;
+
+       if (addr) {
+               addr->sq_family = AF_QIPCRTR;
+               addr->sq_node = le32_to_cpu(phdr->src_node_id);
+               addr->sq_port = le32_to_cpu(phdr->src_port_id);
+               msg->msg_namelen = sizeof(*addr);
+       }
+
+out:
+       skb_free_datagram(sk, skb);
+       release_sock(sk);
+
+       return rc;
+}
+
+static int qrtr_connect(struct socket *sock, struct sockaddr *saddr,
+                       int len, int flags)
+{
+       DECLARE_SOCKADDR(struct sockaddr_qrtr *, addr, saddr);
+       struct qrtr_sock *ipc = qrtr_sk(sock->sk);
+       struct sock *sk = sock->sk;
+       int rc;
+
+       if (len < sizeof(*addr) || addr->sq_family != AF_QIPCRTR)
+               return -EINVAL;
+
+       lock_sock(sk);
+
+       sk->sk_state = TCP_CLOSE;
+       sock->state = SS_UNCONNECTED;
+
+       rc = qrtr_autobind(sock);
+       if (rc) {
+               release_sock(sk);
+               return rc;
+       }
+
+       ipc->peer = *addr;
+       sock->state = SS_CONNECTED;
+       sk->sk_state = TCP_ESTABLISHED;
+
+       release_sock(sk);
+
+       return 0;
+}
+
+static int qrtr_getname(struct socket *sock, struct sockaddr *saddr,
+                       int *len, int peer)
+{
+       struct qrtr_sock *ipc = qrtr_sk(sock->sk);
+       struct sockaddr_qrtr qaddr;
+       struct sock *sk = sock->sk;
+
+       lock_sock(sk);
+       if (peer) {
+               if (sk->sk_state != TCP_ESTABLISHED) {
+                       release_sock(sk);
+                       return -ENOTCONN;
+               }
+
+               qaddr = ipc->peer;
+       } else {
+               qaddr = ipc->us;
+       }
+       release_sock(sk);
+
+       *len = sizeof(qaddr);
+       qaddr.sq_family = AF_QIPCRTR;
+
+       memcpy(saddr, &qaddr, sizeof(qaddr));
+
+       return 0;
+}
+
+static int qrtr_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+{
+       void __user *argp = (void __user *)arg;
+       struct qrtr_sock *ipc = qrtr_sk(sock->sk);
+       struct sock *sk = sock->sk;
+       struct sockaddr_qrtr *sq;
+       struct sk_buff *skb;
+       struct ifreq ifr;
+       long len = 0;
+       int rc = 0;
+
+       lock_sock(sk);
+
+       switch (cmd) {
+       case TIOCOUTQ:
+               len = sk->sk_sndbuf - sk_wmem_alloc_get(sk);
+               if (len < 0)
+                       len = 0;
+               rc = put_user(len, (int __user *)argp);
+               break;
+       case TIOCINQ:
+               skb = skb_peek(&sk->sk_receive_queue);
+               if (skb)
+                       len = skb->len - QRTR_HDR_SIZE;
+               rc = put_user(len, (int __user *)argp);
+               break;
+       case SIOCGIFADDR:
+               if (copy_from_user(&ifr, argp, sizeof(ifr))) {
+                       rc = -EFAULT;
+                       break;
+               }
+
+               sq = (struct sockaddr_qrtr *)&ifr.ifr_addr;
+               *sq = ipc->us;
+               if (copy_to_user(argp, &ifr, sizeof(ifr))) {
+                       rc = -EFAULT;
+                       break;
+               }
+               break;
+       case SIOCGSTAMP:
+               rc = sock_get_timestamp(sk, argp);
+               break;
+       case SIOCADDRT:
+       case SIOCDELRT:
+       case SIOCSIFADDR:
+       case SIOCGIFDSTADDR:
+       case SIOCSIFDSTADDR:
+       case SIOCGIFBRDADDR:
+       case SIOCSIFBRDADDR:
+       case SIOCGIFNETMASK:
+       case SIOCSIFNETMASK:
+               rc = -EINVAL;
+               break;
+       default:
+               rc = -ENOIOCTLCMD;
+               break;
+       }
+
+       release_sock(sk);
+
+       return rc;
+}
+
+static int qrtr_release(struct socket *sock)
+{
+       struct sock *sk = sock->sk;
+       struct qrtr_sock *ipc;
+
+       if (!sk)
+               return 0;
+
+       lock_sock(sk);
+
+       ipc = qrtr_sk(sk);
+       sk->sk_shutdown = SHUTDOWN_MASK;
+       if (!sock_flag(sk, SOCK_DEAD))
+               sk->sk_state_change(sk);
+
+       sock_set_flag(sk, SOCK_DEAD);
+       sock->sk = NULL;
+
+       if (!sock_flag(sk, SOCK_ZAPPED))
+               qrtr_port_remove(ipc);
+
+       skb_queue_purge(&sk->sk_receive_queue);
+
+       release_sock(sk);
+       sock_put(sk);
+
+       return 0;
+}
+
+static const struct proto_ops qrtr_proto_ops = {
+       .owner          = THIS_MODULE,
+       .family         = AF_QIPCRTR,
+       .bind           = qrtr_bind,
+       .connect        = qrtr_connect,
+       .socketpair     = sock_no_socketpair,
+       .accept         = sock_no_accept,
+       .listen         = sock_no_listen,
+       .sendmsg        = qrtr_sendmsg,
+       .recvmsg        = qrtr_recvmsg,
+       .getname        = qrtr_getname,
+       .ioctl          = qrtr_ioctl,
+       .poll           = datagram_poll,
+       .shutdown       = sock_no_shutdown,
+       .setsockopt     = sock_no_setsockopt,
+       .getsockopt     = sock_no_getsockopt,
+       .release        = qrtr_release,
+       .mmap           = sock_no_mmap,
+       .sendpage       = sock_no_sendpage,
+};
+
+static struct proto qrtr_proto = {
+       .name           = "QIPCRTR",
+       .owner          = THIS_MODULE,
+       .obj_size       = sizeof(struct qrtr_sock),
+};
+
+static int qrtr_create(struct net *net, struct socket *sock,
+                      int protocol, int kern)
+{
+       struct qrtr_sock *ipc;
+       struct sock *sk;
+
+       if (sock->type != SOCK_DGRAM)
+               return -EPROTOTYPE;
+
+       sk = sk_alloc(net, AF_QIPCRTR, GFP_KERNEL, &qrtr_proto, kern);
+       if (!sk)
+               return -ENOMEM;
+
+       sock_set_flag(sk, SOCK_ZAPPED);
+
+       sock_init_data(sock, sk);
+       sock->ops = &qrtr_proto_ops;
+
+       ipc = qrtr_sk(sk);
+       ipc->us.sq_family = AF_QIPCRTR;
+       ipc->us.sq_node = qrtr_local_nid;
+       ipc->us.sq_port = 0;
+
+       return 0;
+}
+
+static const struct nla_policy qrtr_policy[IFA_MAX + 1] = {
+       [IFA_LOCAL] = { .type = NLA_U32 },
+};
+
+static int qrtr_addr_doit(struct sk_buff *skb, struct nlmsghdr *nlh)
+{
+       struct nlattr *tb[IFA_MAX + 1];
+       struct ifaddrmsg *ifm;
+       int rc;
+
+       if (!netlink_capable(skb, CAP_NET_ADMIN))
+               return -EPERM;
+
+       if (!netlink_capable(skb, CAP_SYS_ADMIN))
+               return -EPERM;
+
+       ASSERT_RTNL();
+
+       rc = nlmsg_parse(nlh, sizeof(*ifm), tb, IFA_MAX, qrtr_policy);
+       if (rc < 0)
+               return rc;
+
+       ifm = nlmsg_data(nlh);
+       if (!tb[IFA_LOCAL])
+               return -EINVAL;
+
+       qrtr_local_nid = nla_get_u32(tb[IFA_LOCAL]);
+       return 0;
+}
+
+static const struct net_proto_family qrtr_family = {
+       .owner  = THIS_MODULE,
+       .family = AF_QIPCRTR,
+       .create = qrtr_create,
+};
+
+static int __init qrtr_proto_init(void)
+{
+       int rc;
+
+       rc = proto_register(&qrtr_proto, 1);
+       if (rc)
+               return rc;
+
+       rc = sock_register(&qrtr_family);
+       if (rc) {
+               proto_unregister(&qrtr_proto);
+               return rc;
+       }
+
+       rtnl_register(PF_QIPCRTR, RTM_NEWADDR, qrtr_addr_doit, NULL, NULL);
+
+       return 0;
+}
+module_init(qrtr_proto_init);
+
+static void __exit qrtr_proto_fini(void)
+{
+       rtnl_unregister(PF_QIPCRTR, RTM_NEWADDR);
+       sock_unregister(qrtr_family.family);
+       proto_unregister(&qrtr_proto);
+}
+module_exit(qrtr_proto_fini);
+
+MODULE_DESCRIPTION("Qualcomm IPC-router driver");
+MODULE_LICENSE("GPL v2");
diff --git a/net/qrtr/qrtr.h b/net/qrtr/qrtr.h
new file mode 100644
index 0000000..2b84871
--- /dev/null
@@ -0,0 +1,31 @@
+#ifndef __QRTR_H_
+#define __QRTR_H_
+
+#include <linux/types.h>
+
+struct sk_buff;
+
+/* endpoint node id auto assignment */
+#define QRTR_EP_NID_AUTO (-1)
+
+/**
+ * struct qrtr_endpoint - endpoint handle
+ * @xmit: Callback for outgoing packets
+ *
+ * The socket buffer passed to the xmit function becomes owned by the endpoint
+ * driver.  As such, when the driver is done with the buffer, it should
+ * call kfree_skb() on failure, or consume_skb() on success.
+ */
+struct qrtr_endpoint {
+       int (*xmit)(struct qrtr_endpoint *ep, struct sk_buff *skb);
+       /* private: not for endpoint use */
+       struct qrtr_node *node;
+};
+
+int qrtr_endpoint_register(struct qrtr_endpoint *ep, unsigned int nid);
+
+void qrtr_endpoint_unregister(struct qrtr_endpoint *ep);
+
+int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len);
+
+#endif
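
For context, a minimal userspace exchange over the new address family might
look like the sketch below. This is a hypothetical example, not part of the
patch: struct sockaddr_qrtr comes from the uapi <linux/qrtr.h> added by this
series, AF_QIPCRTR from the kernel's socket.h, and qrtr_ping() plus its
node/port arguments are invented for illustration.

    #include <linux/qrtr.h>         /* struct sockaddr_qrtr */
    #include <sys/socket.h>
    #include <unistd.h>

    #ifndef AF_QIPCRTR
    #define AF_QIPCRTR 42           /* value from the kernel's socket.h */
    #endif

    /* hypothetical helper: send one datagram to a remote qrtr service */
    static int qrtr_ping(unsigned int node, unsigned int port)
    {
            struct sockaddr_qrtr to = {
                    .sq_family = AF_QIPCRTR,
                    .sq_node = node,        /* remote node id */
                    .sq_port = port,        /* service port on that node */
            };
            int fd = socket(AF_QIPCRTR, SOCK_DGRAM, 0);

            if (fd < 0)
                    return -1;
            /* an unbound socket is auto-bound on first send (qrtr_autobind) */
            if (sendto(fd, "ping", 4, 0, (struct sockaddr *)&to, sizeof(to)) < 0) {
                    close(fd);
                    return -1;
            }
            close(fd);
            return 0;
    }
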
diff --git a/net/qrtr/smd.c b/net/qrtr/smd.c
new file mode 100644
index 0000000..84ebce7
--- /dev/null
@@ -0,0 +1,117 @@
+/*
+ * Copyright (c) 2015, Sony Mobile Communications Inc.
+ * Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/skbuff.h>
+#include <linux/soc/qcom/smd.h>
+
+#include "qrtr.h"
+
+struct qrtr_smd_dev {
+       struct qrtr_endpoint ep;
+       struct qcom_smd_channel *channel;
+};
+
+/* from smd to qrtr */
+static int qcom_smd_qrtr_callback(struct qcom_smd_device *sdev,
+                                 const void *data, size_t len)
+{
+       struct qrtr_smd_dev *qdev = dev_get_drvdata(&sdev->dev);
+       int rc;
+
+       if (!qdev)
+               return -EAGAIN;
+
+       rc = qrtr_endpoint_post(&qdev->ep, data, len);
+       if (rc == -EINVAL) {
+               dev_err(&sdev->dev, "invalid ipcrouter packet\n");
+               /* return 0 to let smd drop the packet */
+               rc = 0;
+       }
+
+       return rc;
+}
+
+/* from qrtr to smd */
+static int qcom_smd_qrtr_send(struct qrtr_endpoint *ep, struct sk_buff *skb)
+{
+       struct qrtr_smd_dev *qdev = container_of(ep, struct qrtr_smd_dev, ep);
+       int rc;
+
+       rc = skb_linearize(skb);
+       if (rc)
+               goto out;
+
+       rc = qcom_smd_send(qdev->channel, skb->data, skb->len);
+
+out:
+       if (rc)
+               kfree_skb(skb);
+       else
+               consume_skb(skb);
+       return rc;
+}
+
+static int qcom_smd_qrtr_probe(struct qcom_smd_device *sdev)
+{
+       struct qrtr_smd_dev *qdev;
+       int rc;
+
+       qdev = devm_kzalloc(&sdev->dev, sizeof(*qdev), GFP_KERNEL);
+       if (!qdev)
+               return -ENOMEM;
+
+       qdev->channel = sdev->channel;
+       qdev->ep.xmit = qcom_smd_qrtr_send;
+
+       rc = qrtr_endpoint_register(&qdev->ep, QRTR_EP_NID_AUTO);
+       if (rc)
+               return rc;
+
+       dev_set_drvdata(&sdev->dev, qdev);
+
+       dev_dbg(&sdev->dev, "Qualcomm SMD QRTR driver probed\n");
+
+       return 0;
+}
+
+static void qcom_smd_qrtr_remove(struct qcom_smd_device *sdev)
+{
+       struct qrtr_smd_dev *qdev = dev_get_drvdata(&sdev->dev);
+
+       qrtr_endpoint_unregister(&qdev->ep);
+
+       dev_set_drvdata(&sdev->dev, NULL);
+}
+
+static const struct qcom_smd_id qcom_smd_qrtr_smd_match[] = {
+       { "IPCRTR" },
+       {}
+};
+
+static struct qcom_smd_driver qcom_smd_qrtr_driver = {
+       .probe = qcom_smd_qrtr_probe,
+       .remove = qcom_smd_qrtr_remove,
+       .callback = qcom_smd_qrtr_callback,
+       .smd_match_table = qcom_smd_qrtr_smd_match,
+       .driver = {
+               .name = "qcom_smd_qrtr",
+               .owner = THIS_MODULE,
+       },
+};
+
+module_qcom_smd_driver(qcom_smd_qrtr_driver);
+
+MODULE_DESCRIPTION("Qualcomm IPC-Router SMD interface driver");
+MODULE_LICENSE("GPL v2");
index 61ed2a8..86187da 100644
@@ -127,7 +127,7 @@ void rds_tcp_restore_callbacks(struct socket *sock,
 
 /*
  * This is the only path that sets tc->t_sock.  Send and receive trust that
- * it is set.  The RDS_CONN_CONNECTED bit protects those paths from being
+ * it is set.  The RDS_CONN_UP bit protects those paths from being
  * called while it isn't set.
  */
 void rds_tcp_set_callbacks(struct socket *sock, struct rds_connection *conn)
@@ -216,6 +216,7 @@ static int rds_tcp_conn_alloc(struct rds_connection *conn, gfp_t gfp)
        if (!tc)
                return -ENOMEM;
 
+       mutex_init(&tc->t_conn_lock);
        tc->t_sock = NULL;
        tc->t_tinc = NULL;
        tc->t_tinc_hdr_rem = sizeof(struct rds_header);
index 64f873c..41c2283 100644
@@ -12,6 +12,10 @@ struct rds_tcp_connection {
 
        struct list_head        t_tcp_node;
        struct rds_connection   *conn;
+       /* t_conn_lock synchronizes the connection establishment between
+        * rds_tcp_accept_one and rds_tcp_conn_connect
+        */
+       struct mutex            t_conn_lock;
        struct socket           *t_sock;
        void                    *t_orig_write_space;
        void                    *t_orig_data_ready;
index 5cb1687..49a3fcf 100644
@@ -78,7 +78,14 @@ int rds_tcp_conn_connect(struct rds_connection *conn)
        struct socket *sock = NULL;
        struct sockaddr_in src, dest;
        int ret;
+       struct rds_tcp_connection *tc = conn->c_transport_data;
+
+       mutex_lock(&tc->t_conn_lock);
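+       /* the accept() path may already have brought this connection up */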
 
+       if (rds_conn_up(conn)) {
+               mutex_unlock(&tc->t_conn_lock);
+               return 0;
+       }
        ret = sock_create_kern(rds_conn_net(conn), PF_INET,
                               SOCK_STREAM, IPPROTO_TCP, &sock);
        if (ret < 0)
@@ -120,6 +127,7 @@ int rds_tcp_conn_connect(struct rds_connection *conn)
        }
 
 out:
+       mutex_unlock(&tc->t_conn_lock);
        if (sock)
                sock_release(sock);
        return ret;
index 0936a4a..be263cd 100644
@@ -76,7 +76,9 @@ int rds_tcp_accept_one(struct socket *sock)
        struct rds_connection *conn;
        int ret;
        struct inet_sock *inet;
-       struct rds_tcp_connection *rs_tcp;
+       struct rds_tcp_connection *rs_tcp = NULL;
+       int conn_state;
+       struct sock *nsk;
 
        ret = sock_create_kern(sock_net(sock->sk), sock->sk->sk_family,
                               sock->sk->sk_type, sock->sk->sk_protocol,
@@ -115,28 +117,44 @@ int rds_tcp_accept_one(struct socket *sock)
         * rds_tcp_state_change() will do that cleanup
         */
        rs_tcp = (struct rds_tcp_connection *)conn->c_transport_data;
-       if (rs_tcp->t_sock &&
-           ntohl(inet->inet_saddr) < ntohl(inet->inet_daddr)) {
-               struct sock *nsk = new_sock->sk;
-
-               nsk->sk_user_data = NULL;
-               nsk->sk_prot->disconnect(nsk, 0);
-               tcp_done(nsk);
-               new_sock = NULL;
-               ret = 0;
-               goto out;
-       } else if (rs_tcp->t_sock) {
-               rds_tcp_restore_callbacks(rs_tcp->t_sock, rs_tcp);
-               conn->c_outgoing = 0;
-       }
-
        rds_conn_transition(conn, RDS_CONN_DOWN, RDS_CONN_CONNECTING);
+       mutex_lock(&rs_tcp->t_conn_lock);
+       conn_state = rds_conn_state(conn);
+       if (conn_state != RDS_CONN_CONNECTING && conn_state != RDS_CONN_UP)
+               goto rst_nsk;
+       if (rs_tcp->t_sock) {
+               /* Need to resolve a duelling SYN between peers.
+                * We have an outstanding SYN to this peer, which may
+                * potentially have transitioned to the RDS_CONN_UP state,
+                * so we must quiesce any send threads before resetting
+                * c_transport_data.
+                */
+               wait_event(conn->c_waitq,
+                          !test_bit(RDS_IN_XMIT, &conn->c_flags));
+               if (ntohl(inet->inet_saddr) < ntohl(inet->inet_daddr)) {
+                       goto rst_nsk;
+               } else if (rs_tcp->t_sock) {
+                       rds_tcp_restore_callbacks(rs_tcp->t_sock, rs_tcp);
+                       conn->c_outgoing = 0;
+               }
+       }
        rds_tcp_set_callbacks(new_sock, conn);
-       rds_connect_complete(conn);
+       rds_connect_complete(conn); /* marks RDS_CONN_UP */
+       new_sock = NULL;
+       ret = 0;
+       goto out;
+rst_nsk:
+       /* reset the newly returned accept sock and bail */
+       nsk = new_sock->sk;
+       rds_tcp_stats_inc(s_tcp_listen_closed_stale);
+       nsk->sk_user_data = NULL;
+       nsk->sk_prot->disconnect(nsk, 0);
+       tcp_done(nsk);
        new_sock = NULL;
        ret = 0;
-
 out:
+       if (rs_tcp)
+               mutex_unlock(&rs_tcp->t_conn_lock);
        if (new_sock)
                sock_release(new_sock);
        return ret;
index 27a9921..d75d8b5 100644
@@ -207,22 +207,14 @@ static int rds_tcp_data_recv(read_descriptor_t *desc, struct sk_buff *skb,
                }
 
                if (left && tc->t_tinc_data_rem) {
-                       clone = skb_clone(skb, arg->gfp);
+                       to_copy = min(tc->t_tinc_data_rem, left);
+
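+                       /* pskb_extract() clones just the wanted span in one
+                        * call, replacing the old clone + pull + trim dance
+                        */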
+                       clone = pskb_extract(skb, offset, to_copy, arg->gfp);
                        if (!clone) {
                                desc->error = -ENOMEM;
                                goto out;
                        }
 
-                       to_copy = min(tc->t_tinc_data_rem, left);
-                       if (!pskb_pull(clone, offset) ||
-                           pskb_trim(clone, to_copy)) {
-                               pr_warn("rds_tcp_data_recv: pull/trim failed "
-                                       "left %zu data_rem %zu skb_len %d\n",
-                                       left, tc->t_tinc_data_rem, skb->len);
-                               kfree_skb(clone);
-                               desc->error = -ENOMEM;
-                               goto out;
-                       }
                        skb_queue_tail(&tinc->ti_skb_list, clone);
 
                        rdsdebug("skb %p data %p len %d off %u to_copy %zu -> "
index 01e0381..6ff9741 100644
@@ -698,12 +698,12 @@ void rxrpc_data_ready(struct sock *sk)
        if (skb_checksum_complete(skb)) {
                rxrpc_free_skb(skb);
                rxrpc_put_local(local);
-               UDP_INC_STATS_BH(&init_net, UDP_MIB_INERRORS, 0);
+               __UDP_INC_STATS(&init_net, UDP_MIB_INERRORS, 0);
                _leave(" [CSUM failed]");
                return;
        }
 
-       UDP_INC_STATS_BH(&init_net, UDP_MIB_INDATAGRAMS, 0);
+       __UDP_INC_STATS(&init_net, UDP_MIB_INDATAGRAMS, 0);
 
        /* The socket buffer we have is owned by UDP, with UDP's data all over
         * it, but we really want our own data there.
index 9606666..336774a 100644
@@ -657,12 +657,15 @@ int tcf_action_copy_stats(struct sk_buff *skb, struct tc_action *a,
        if (compat_mode) {
                if (a->type == TCA_OLD_COMPAT)
                        err = gnet_stats_start_copy_compat(skb, 0,
-                               TCA_STATS, TCA_XSTATS, &p->tcfc_lock, &d);
+                                                          TCA_STATS,
+                                                          TCA_XSTATS,
+                                                          &p->tcfc_lock, &d,
+                                                          TCA_PAD);
                else
                        return 0;
        } else
                err = gnet_stats_start_copy(skb, TCA_ACT_STATS,
-                                           &p->tcfc_lock, &d);
+                                           &p->tcfc_lock, &d, TCA_ACT_PAD);
 
        if (err < 0)
                goto errout;
index 8c9f1f0..c7123e0 100644
@@ -53,9 +53,11 @@ static int tcf_bpf(struct sk_buff *skb, const struct tc_action *act,
        filter = rcu_dereference(prog->filter);
        if (at_ingress) {
                __skb_push(skb, skb->mac_len);
+               bpf_compute_data_end(skb);
                filter_res = BPF_PROG_RUN(filter, skb);
                __skb_pull(skb, skb->mac_len);
        } else {
+               bpf_compute_data_end(skb);
                filter_res = BPF_PROG_RUN(filter, skb);
        }
        rcu_read_unlock();
@@ -156,7 +158,8 @@ static int tcf_bpf_dump(struct sk_buff *skb, struct tc_action *act,
        tm.lastuse = jiffies_to_clock_t(jiffies - prog->tcf_tm.lastuse);
        tm.expires = jiffies_to_clock_t(prog->tcf_tm.expires);
 
-       if (nla_put(skb, TCA_ACT_BPF_TM, sizeof(tm), &tm))
+       if (nla_put_64bit(skb, TCA_ACT_BPF_TM, sizeof(tm), &tm,
+                         TCA_ACT_BPF_PAD))
                goto nla_put_failure;
 
        return skb->len;
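
The TCA_*_PAD conversions in this and the following action dumps all follow
one pattern: a structure containing u64s is now emitted with nla_put_64bit(),
whose extra attribute-type argument lets the netlink core insert a pad
attribute first, so the payload lands 8-byte aligned on architectures that
need it. Schematically, with the names from the hunk above:

    /* before: the struct could land on a 4-byte boundary */
    if (nla_put(skb, TCA_ACT_BPF_TM, sizeof(tm), &tm))
            goto nla_put_failure;

    /* after: TCA_ACT_BPF_PAD lets the core emit padding first */
    if (nla_put_64bit(skb, TCA_ACT_BPF_TM, sizeof(tm), &tm,
                      TCA_ACT_BPF_PAD))
            goto nla_put_failure;
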
index c0ed93c..2ba700c 100644
@@ -163,7 +163,8 @@ static inline int tcf_connmark_dump(struct sk_buff *skb, struct tc_action *a,
        t.install = jiffies_to_clock_t(jiffies - ci->tcf_tm.install);
        t.lastuse = jiffies_to_clock_t(jiffies - ci->tcf_tm.lastuse);
        t.expires = jiffies_to_clock_t(ci->tcf_tm.expires);
-       if (nla_put(skb, TCA_CONNMARK_TM, sizeof(t), &t))
+       if (nla_put_64bit(skb, TCA_CONNMARK_TM, sizeof(t), &t,
+                         TCA_CONNMARK_PAD))
                goto nla_put_failure;
 
        return skb->len;
index d22426c..28e934e 100644
@@ -549,7 +549,7 @@ static int tcf_csum_dump(struct sk_buff *skb,
        t.install = jiffies_to_clock_t(jiffies - p->tcf_tm.install);
        t.lastuse = jiffies_to_clock_t(jiffies - p->tcf_tm.lastuse);
        t.expires = jiffies_to_clock_t(p->tcf_tm.expires);
-       if (nla_put(skb, TCA_CSUM_TM, sizeof(t), &t))
+       if (nla_put_64bit(skb, TCA_CSUM_TM, sizeof(t), &t, TCA_CSUM_PAD))
                goto nla_put_failure;
 
        return skb->len;
index 887fc1f..1a6e09f 100644
@@ -177,7 +177,7 @@ static int tcf_gact_dump(struct sk_buff *skb, struct tc_action *a, int bind, int
        t.install = jiffies_to_clock_t(jiffies - gact->tcf_tm.install);
        t.lastuse = jiffies_to_clock_t(jiffies - gact->tcf_tm.lastuse);
        t.expires = jiffies_to_clock_t(gact->tcf_tm.expires);
-       if (nla_put(skb, TCA_GACT_TM, sizeof(t), &t))
+       if (nla_put_64bit(skb, TCA_GACT_TM, sizeof(t), &t, TCA_GACT_PAD))
                goto nla_put_failure;
        return skb->len;
 
index c589a9b..556f44c 100644
@@ -550,7 +550,7 @@ static int tcf_ife_dump(struct sk_buff *skb, struct tc_action *a, int bind,
        t.install = jiffies_to_clock_t(jiffies - ife->tcf_tm.install);
        t.lastuse = jiffies_to_clock_t(jiffies - ife->tcf_tm.lastuse);
        t.expires = jiffies_to_clock_t(ife->tcf_tm.expires);
-       if (nla_put(skb, TCA_IFE_TM, sizeof(t), &t))
+       if (nla_put_64bit(skb, TCA_IFE_TM, sizeof(t), &t, TCA_IFE_PAD))
                goto nla_put_failure;
 
        if (!is_zero_ether_addr(ife->eth_dst)) {
index 350e134..1464f6a 100644
@@ -275,7 +275,7 @@ static int tcf_ipt_dump(struct sk_buff *skb, struct tc_action *a, int bind, int
        tm.install = jiffies_to_clock_t(jiffies - ipt->tcf_tm.install);
        tm.lastuse = jiffies_to_clock_t(jiffies - ipt->tcf_tm.lastuse);
        tm.expires = jiffies_to_clock_t(ipt->tcf_tm.expires);
-       if (nla_put(skb, TCA_IPT_TM, sizeof (tm), &tm))
+       if (nla_put_64bit(skb, TCA_IPT_TM, sizeof(tm), &tm, TCA_IPT_PAD))
                goto nla_put_failure;
        kfree(t);
        return skb->len;
index e8a760c..dea57c1 100644
@@ -214,7 +214,7 @@ static int tcf_mirred_dump(struct sk_buff *skb, struct tc_action *a, int bind, i
        t.install = jiffies_to_clock_t(jiffies - m->tcf_tm.install);
        t.lastuse = jiffies_to_clock_t(jiffies - m->tcf_tm.lastuse);
        t.expires = jiffies_to_clock_t(m->tcf_tm.expires);
-       if (nla_put(skb, TCA_MIRRED_TM, sizeof(t), &t))
+       if (nla_put_64bit(skb, TCA_MIRRED_TM, sizeof(t), &t, TCA_MIRRED_PAD))
                goto nla_put_failure;
        return skb->len;
 
index 0f65cdf..c0a879f 100644
@@ -267,7 +267,7 @@ static int tcf_nat_dump(struct sk_buff *skb, struct tc_action *a,
        t.install = jiffies_to_clock_t(jiffies - p->tcf_tm.install);
        t.lastuse = jiffies_to_clock_t(jiffies - p->tcf_tm.lastuse);
        t.expires = jiffies_to_clock_t(p->tcf_tm.expires);
-       if (nla_put(skb, TCA_NAT_TM, sizeof(t), &t))
+       if (nla_put_64bit(skb, TCA_NAT_TM, sizeof(t), &t, TCA_NAT_PAD))
                goto nla_put_failure;
 
        return skb->len;
index 429c3ab..c6e18f2 100644
@@ -203,7 +203,7 @@ static int tcf_pedit_dump(struct sk_buff *skb, struct tc_action *a,
        t.install = jiffies_to_clock_t(jiffies - p->tcf_tm.install);
        t.lastuse = jiffies_to_clock_t(jiffies - p->tcf_tm.lastuse);
        t.expires = jiffies_to_clock_t(p->tcf_tm.expires);
-       if (nla_put(skb, TCA_PEDIT_TM, sizeof(t), &t))
+       if (nla_put_64bit(skb, TCA_PEDIT_TM, sizeof(t), &t, TCA_PEDIT_PAD))
                goto nla_put_failure;
        kfree(opt);
        return skb->len;
index 75b2be1..2057fd5 100644
@@ -155,7 +155,7 @@ static int tcf_simp_dump(struct sk_buff *skb, struct tc_action *a,
        t.install = jiffies_to_clock_t(jiffies - d->tcf_tm.install);
        t.lastuse = jiffies_to_clock_t(jiffies - d->tcf_tm.lastuse);
        t.expires = jiffies_to_clock_t(d->tcf_tm.expires);
-       if (nla_put(skb, TCA_DEF_TM, sizeof(t), &t))
+       if (nla_put_64bit(skb, TCA_DEF_TM, sizeof(t), &t, TCA_DEF_PAD))
                goto nla_put_failure;
        return skb->len;
 
index cfcdbdc..51b2499 100644
@@ -167,7 +167,7 @@ static int tcf_skbedit_dump(struct sk_buff *skb, struct tc_action *a,
        t.install = jiffies_to_clock_t(jiffies - d->tcf_tm.install);
        t.lastuse = jiffies_to_clock_t(jiffies - d->tcf_tm.lastuse);
        t.expires = jiffies_to_clock_t(d->tcf_tm.expires);
-       if (nla_put(skb, TCA_SKBEDIT_TM, sizeof(t), &t))
+       if (nla_put_64bit(skb, TCA_SKBEDIT_TM, sizeof(t), &t, TCA_SKBEDIT_PAD))
                goto nla_put_failure;
        return skb->len;
 
index bab8ae0..c1682ab 100644
@@ -175,7 +175,7 @@ static int tcf_vlan_dump(struct sk_buff *skb, struct tc_action *a,
        t.install = jiffies_to_clock_t(jiffies - v->tcf_tm.install);
        t.lastuse = jiffies_to_clock_t(jiffies - v->tcf_tm.lastuse);
        t.expires = jiffies_to_clock_t(v->tcf_tm.expires);
-       if (nla_put(skb, TCA_VLAN_TM, sizeof(t), &t))
+       if (nla_put_64bit(skb, TCA_VLAN_TM, sizeof(t), &t, TCA_VLAN_PAD))
                goto nla_put_failure;
        return skb->len;
 
index 425fe6a..7b342c7 100644
@@ -96,9 +96,11 @@ static int cls_bpf_classify(struct sk_buff *skb, const struct tcf_proto *tp,
                if (at_ingress) {
                        /* It is safe to push/pull even if skb_shared() */
                        __skb_push(skb, skb->mac_len);
+                       bpf_compute_data_end(skb);
                        filter_res = BPF_PROG_RUN(prog->filter, skb);
                        __skb_pull(skb, skb->mac_len);
                } else {
+                       bpf_compute_data_end(skb);
                        filter_res = BPF_PROG_RUN(prog->filter, skb);
                }
 
index 563cdad..e64877a 100644
@@ -1140,9 +1140,10 @@ static int u32_dump(struct net *net, struct tcf_proto *tp, unsigned long fh,
                                gpf->kcnts[i] += pf->kcnts[i];
                }
 
-               if (nla_put(skb, TCA_U32_PCNT,
-                           sizeof(struct tc_u32_pcnt) + n->sel.nkeys*sizeof(u64),
-                           gpf)) {
+               if (nla_put_64bit(skb, TCA_U32_PCNT,
+                                 sizeof(struct tc_u32_pcnt) +
+                                 n->sel.nkeys * sizeof(u64),
+                                 gpf, TCA_U32_PAD)) {
                        kfree(gpf);
                        goto nla_put_failure;
                }
index 3b180ff..64f71a2 100644
@@ -1365,7 +1365,8 @@ static int tc_fill_qdisc(struct sk_buff *skb, struct Qdisc *q, u32 clid,
                goto nla_put_failure;
 
        if (gnet_stats_start_copy_compat(skb, TCA_STATS2, TCA_STATS, TCA_XSTATS,
-                                        qdisc_root_sleeping_lock(q), &d) < 0)
+                                        qdisc_root_sleeping_lock(q), &d,
+                                        TCA_PAD) < 0)
                goto nla_put_failure;
 
        if (q->ops->dump_stats && q->ops->dump_stats(q, &d) < 0)
@@ -1679,7 +1680,8 @@ static int tc_fill_tclass(struct sk_buff *skb, struct Qdisc *q,
                goto nla_put_failure;
 
        if (gnet_stats_start_copy_compat(skb, TCA_STATS2, TCA_STATS, TCA_XSTATS,
-                                        qdisc_root_sleeping_lock(q), &d) < 0)
+                                        qdisc_root_sleeping_lock(q), &d,
+                                        TCA_PAD) < 0)
                goto nla_put_failure;
 
        if (cl_ops->dump_stats && cl_ops->dump_stats(q, cl, &d) < 0)
index 9b7e298..dddf3bb 100644
@@ -49,6 +49,8 @@
 #include <linux/prefetch.h>
 #include <net/pkt_sched.h>
 #include <net/codel.h>
+#include <net/codel_impl.h>
+#include <net/codel_qdisc.h>
 
 
 #define DEFAULT_CODEL_LIMIT 1000
@@ -64,20 +66,33 @@ struct codel_sched_data {
  * to dequeue a packet from queue. Note: backlog is handled in
  * codel, we dont need to reduce it here.
  */
-static struct sk_buff *dequeue(struct codel_vars *vars, struct Qdisc *sch)
+static struct sk_buff *dequeue_func(struct codel_vars *vars, void *ctx)
 {
+       struct Qdisc *sch = ctx;
        struct sk_buff *skb = __skb_dequeue(&sch->q);
 
+       if (skb)
+               sch->qstats.backlog -= qdisc_pkt_len(skb);
+
        prefetch(&skb->end); /* we'll need skb_shinfo() */
        return skb;
 }
 
+static void drop_func(struct sk_buff *skb, void *ctx)
+{
+       struct Qdisc *sch = ctx;
+
+       qdisc_drop(skb, sch);
+}
+
 static struct sk_buff *codel_qdisc_dequeue(struct Qdisc *sch)
 {
        struct codel_sched_data *q = qdisc_priv(sch);
        struct sk_buff *skb;
 
-       skb = codel_dequeue(sch, &q->params, &q->vars, &q->stats, dequeue);
+       skb = codel_dequeue(sch, &sch->qstats.backlog, &q->params, &q->vars,
+                           &q->stats, qdisc_pkt_len, codel_get_enqueue_time,
+                           drop_func, dequeue_func);
 
        /* We cant call qdisc_tree_reduce_backlog() if our qlen is 0,
         * or HTB crashes. Defer it for next round.
@@ -173,9 +188,10 @@ static int codel_init(struct Qdisc *sch, struct nlattr *opt)
 
        sch->limit = DEFAULT_CODEL_LIMIT;
 
-       codel_params_init(&q->params, sch);
+       codel_params_init(&q->params);
        codel_vars_init(&q->vars);
        codel_stats_init(&q->stats);
+       q->params.mtu = psched_mtu(qdisc_dev(sch));
 
        if (opt) {
                int err = codel_change(sch, opt);
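
Note the shape of the new codel_dequeue() contract, used identically by
fq_codel below: rather than hard-coding struct Qdisc, the engine now takes an
opaque context plus the backlog counter and per-packet callbacks, which (per
the new <net/codel_impl.h>/<net/codel_qdisc.h> includes above) suggests the
engine is being made reusable outside the qdisc layer:

    skb = codel_dequeue(sch,                  /* ctx handed back to callbacks */
                        &sch->qstats.backlog, /* backlog codel maintains */
                        &q->params, &q->vars, &q->stats,
                        qdisc_pkt_len,          /* skb -> length */
                        codel_get_enqueue_time, /* skb -> enqueue timestamp */
                        drop_func, dequeue_func);
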
index d3fc8f9..bb8bd93 100644
@@ -24,6 +24,8 @@
 #include <net/netlink.h>
 #include <net/pkt_sched.h>
 #include <net/codel.h>
+#include <net/codel_impl.h>
+#include <net/codel_qdisc.h>
 
 /*     Fair Queue CoDel.
  *
@@ -57,8 +59,12 @@ struct fq_codel_sched_data {
        u32             flows_cnt;      /* number of flows */
        u32             perturbation;   /* hash perturbation */
        u32             quantum;        /* psched_mtu(qdisc_dev(sch)); */
+       u32             drop_batch_size;
+       u32             memory_limit;
        struct codel_params cparams;
        struct codel_stats cstats;
+       u32             memory_usage;
+       u32             drop_overmemory;
        u32             drop_overlimit;
        u32             new_flow_count;
 
@@ -133,17 +139,21 @@ static inline void flow_queue_add(struct fq_codel_flow *flow,
        skb->next = NULL;
 }
 
-static unsigned int fq_codel_drop(struct Qdisc *sch)
+static unsigned int fq_codel_drop(struct Qdisc *sch, unsigned int max_packets)
 {
        struct fq_codel_sched_data *q = qdisc_priv(sch);
        struct sk_buff *skb;
        unsigned int maxbacklog = 0, idx = 0, i, len;
        struct fq_codel_flow *flow;
+       unsigned int threshold;
+       unsigned int mem = 0;
 
-       /* Queue is full! Find the fat flow and drop packet from it.
+       /* Queue is full! Find the fat flow and drop packet(s) from it.
         * This might sound expensive, but with 1024 flows, we scan
         * 4KB of memory, and we dont need to handle a complex tree
         * in fast path (packet queue/enqueue) with many cache misses.
+        * In stress mode, we'll try to drop 64 packets from the flow,
+        * amortizing this linear lookup to one cache line per drop.
         */
        for (i = 0; i < q->flows_cnt; i++) {
                if (q->backlogs[i] > maxbacklog) {
@@ -151,15 +161,26 @@ static unsigned int fq_codel_drop(struct Qdisc *sch)
                        idx = i;
                }
        }
+
+       /* Our goal is to drop half of this fat flow backlog */
+       threshold = maxbacklog >> 1;
+
        flow = &q->flows[idx];
-       skb = dequeue_head(flow);
-       len = qdisc_pkt_len(skb);
+       len = 0;
+       i = 0;
+       do {
+               skb = dequeue_head(flow);
+               len += qdisc_pkt_len(skb);
+               mem += skb->truesize;
+               kfree_skb(skb);
+       } while (++i < max_packets && len < threshold);
+
+       flow->dropped += i;
        q->backlogs[idx] -= len;
-       sch->q.qlen--;
-       qdisc_qstats_drop(sch);
-       qdisc_qstats_backlog_dec(sch, skb);
-       kfree_skb(skb);
-       flow->dropped++;
+       q->memory_usage -= mem;
+       sch->qstats.drops += i;
+       sch->qstats.backlog -= len;
+       sch->q.qlen -= i;
        return idx;
 }
 
@@ -168,16 +189,17 @@ static unsigned int fq_codel_qdisc_drop(struct Qdisc *sch)
        unsigned int prev_backlog;
 
        prev_backlog = sch->qstats.backlog;
-       fq_codel_drop(sch);
+       fq_codel_drop(sch, 1U);
        return prev_backlog - sch->qstats.backlog;
 }
 
 static int fq_codel_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 {
        struct fq_codel_sched_data *q = qdisc_priv(sch);
-       unsigned int idx, prev_backlog;
+       unsigned int idx, prev_backlog, prev_qlen;
        struct fq_codel_flow *flow;
        int uninitialized_var(ret);
+       bool memory_limited;
 
        idx = fq_codel_classify(skb, sch, &ret);
        if (idx == 0) {
@@ -200,28 +222,38 @@ static int fq_codel_enqueue(struct sk_buff *skb, struct Qdisc *sch)
                flow->deficit = q->quantum;
                flow->dropped = 0;
        }
-       if (++sch->q.qlen <= sch->limit)
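+       /* charge the skb's true memory footprint against the new memory limit */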
+       q->memory_usage += skb->truesize;
+       memory_limited = q->memory_usage > q->memory_limit;
+       if (++sch->q.qlen <= sch->limit && !memory_limited)
                return NET_XMIT_SUCCESS;
 
        prev_backlog = sch->qstats.backlog;
-       q->drop_overlimit++;
-       /* Return Congestion Notification only if we dropped a packet
-        * from this flow.
+       prev_qlen = sch->q.qlen;
+
+       /* fq_codel_drop() is quite expensive, as it performs a linear search
+        * in q->backlogs[] to find a fat flow.
+        * So instead of dropping a single packet, drop half of its backlog
+        * with a limit of 64 packets, so as not to add too big a cpu spike here.
         */
-       if (fq_codel_drop(sch) == idx)
-               return NET_XMIT_CN;
+       ret = fq_codel_drop(sch, q->drop_batch_size);
+
+       q->drop_overlimit += prev_qlen - sch->q.qlen;
+       if (memory_limited)
+               q->drop_overmemory += prev_qlen - sch->q.qlen;
+       /* As we dropped packet(s), let the upper stack know */
+       qdisc_tree_reduce_backlog(sch, prev_qlen - sch->q.qlen,
+                                 prev_backlog - sch->qstats.backlog);
 
-       /* As we dropped a packet, better let upper stack know this */
-       qdisc_tree_reduce_backlog(sch, 1, prev_backlog - sch->qstats.backlog);
-       return NET_XMIT_SUCCESS;
+       return ret == idx ? NET_XMIT_CN : NET_XMIT_SUCCESS;
 }
 
 /* This is the specific function called from codel_dequeue()
  * to dequeue a packet from queue. Note: backlog is handled in
  * codel, we dont need to reduce it here.
  */
-static struct sk_buff *dequeue(struct codel_vars *vars, struct Qdisc *sch)
+static struct sk_buff *dequeue_func(struct codel_vars *vars, void *ctx)
 {
+       struct Qdisc *sch = ctx;
        struct fq_codel_sched_data *q = qdisc_priv(sch);
        struct fq_codel_flow *flow;
        struct sk_buff *skb = NULL;
@@ -231,10 +263,18 @@ static struct sk_buff *dequeue(struct codel_vars *vars, struct Qdisc *sch)
                skb = dequeue_head(flow);
                q->backlogs[flow - q->flows] -= qdisc_pkt_len(skb);
                sch->q.qlen--;
+               sch->qstats.backlog -= qdisc_pkt_len(skb);
        }
        return skb;
 }
 
+static void drop_func(struct sk_buff *skb, void *ctx)
+{
+       struct Qdisc *sch = ctx;
+
+       qdisc_drop(skb, sch);
+}
+
 static struct sk_buff *fq_codel_dequeue(struct Qdisc *sch)
 {
        struct fq_codel_sched_data *q = qdisc_priv(sch);
@@ -263,8 +303,9 @@ begin:
        prev_ecn_mark = q->cstats.ecn_mark;
        prev_backlog = sch->qstats.backlog;
 
-       skb = codel_dequeue(sch, &q->cparams, &flow->cvars, &q->cstats,
-                           dequeue);
+       skb = codel_dequeue(sch, &sch->qstats.backlog, &q->cparams,
+                           &flow->cvars, &q->cstats, qdisc_pkt_len,
+                           codel_get_enqueue_time, drop_func, dequeue_func);
 
        flow->dropped += q->cstats.drop_count - prev_drop_count;
        flow->dropped += q->cstats.ecn_mark - prev_ecn_mark;
@@ -277,6 +318,7 @@ begin:
                        list_del_init(&flow->flowchain);
                goto begin;
        }
+       q->memory_usage -= skb->truesize;
        qdisc_bstats_update(sch, skb);
        flow->deficit -= qdisc_pkt_len(skb);
        /* We cant call qdisc_tree_reduce_backlog() if our qlen is 0,
@@ -323,6 +365,8 @@ static const struct nla_policy fq_codel_policy[TCA_FQ_CODEL_MAX + 1] = {
        [TCA_FQ_CODEL_FLOWS]    = { .type = NLA_U32 },
        [TCA_FQ_CODEL_QUANTUM]  = { .type = NLA_U32 },
        [TCA_FQ_CODEL_CE_THRESHOLD] = { .type = NLA_U32 },
+       [TCA_FQ_CODEL_DROP_BATCH_SIZE] = { .type = NLA_U32 },
+       [TCA_FQ_CODEL_MEMORY_LIMIT] = { .type = NLA_U32 },
 };
 
 static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt)
@@ -374,7 +418,14 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt)
        if (tb[TCA_FQ_CODEL_QUANTUM])
                q->quantum = max(256U, nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM]));
 
-       while (sch->q.qlen > sch->limit) {
+       if (tb[TCA_FQ_CODEL_DROP_BATCH_SIZE])
+               q->drop_batch_size = max(1U, nla_get_u32(tb[TCA_FQ_CODEL_DROP_BATCH_SIZE]));
+
+       if (tb[TCA_FQ_CODEL_MEMORY_LIMIT])
+               q->memory_limit = min(1U << 31, nla_get_u32(tb[TCA_FQ_CODEL_MEMORY_LIMIT]));
+
+       while (sch->q.qlen > sch->limit ||
+              q->memory_usage > q->memory_limit) {
                struct sk_buff *skb = fq_codel_dequeue(sch);
 
                q->cstats.drop_len += qdisc_pkt_len(skb);
@@ -419,13 +470,16 @@ static int fq_codel_init(struct Qdisc *sch, struct nlattr *opt)
 
        sch->limit = 10*1024;
        q->flows_cnt = 1024;
+       q->memory_limit = 32 << 20; /* 32 MBytes */
+       q->drop_batch_size = 64;
        q->quantum = psched_mtu(qdisc_dev(sch));
        q->perturbation = prandom_u32();
        INIT_LIST_HEAD(&q->new_flows);
        INIT_LIST_HEAD(&q->old_flows);
-       codel_params_init(&q->cparams, sch);
+       codel_params_init(&q->cparams);
        codel_stats_init(&q->cstats);
        q->cparams.ecn = true;
+       q->cparams.mtu = psched_mtu(qdisc_dev(sch));
 
        if (opt) {
                int err = fq_codel_change(sch, opt);
@@ -476,6 +530,10 @@ static int fq_codel_dump(struct Qdisc *sch, struct sk_buff *skb)
                        q->cparams.ecn) ||
            nla_put_u32(skb, TCA_FQ_CODEL_QUANTUM,
                        q->quantum) ||
+           nla_put_u32(skb, TCA_FQ_CODEL_DROP_BATCH_SIZE,
+                       q->drop_batch_size) ||
+           nla_put_u32(skb, TCA_FQ_CODEL_MEMORY_LIMIT,
+                       q->memory_limit) ||
            nla_put_u32(skb, TCA_FQ_CODEL_FLOWS,
                        q->flows_cnt))
                goto nla_put_failure;
@@ -504,6 +562,8 @@ static int fq_codel_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
        st.qdisc_stats.ecn_mark = q->cstats.ecn_mark;
        st.qdisc_stats.new_flow_count = q->new_flow_count;
        st.qdisc_stats.ce_mark = q->cstats.ce_mark;
+       st.qdisc_stats.memory_usage  = q->memory_usage;
+       st.qdisc_stats.drop_overmemory = q->drop_overmemory;
 
        list_for_each(pos, &q->new_flows)
                st.qdisc_stats.new_flows_len++;
index 80742ed..269dd71 100644
@@ -108,35 +108,6 @@ static struct sk_buff *dequeue_skb(struct Qdisc *q, bool *validate,
        return skb;
 }
 
-static inline int handle_dev_cpu_collision(struct sk_buff *skb,
-                                          struct netdev_queue *dev_queue,
-                                          struct Qdisc *q)
-{
-       int ret;
-
-       if (unlikely(dev_queue->xmit_lock_owner == smp_processor_id())) {
-               /*
-                * Same CPU holding the lock. It may be a transient
-                * configuration error, when hard_start_xmit() recurses. We
-                * detect it by checking xmit owner and drop the packet when
-                * deadloop is detected. Return OK to try the next skb.
-                */
-               kfree_skb_list(skb);
-               net_warn_ratelimited("Dead loop on netdevice %s, fix it urgently!\n",
-                                    dev_queue->dev->name);
-               ret = qdisc_qlen(q);
-       } else {
-               /*
-                * Another cpu is holding lock, requeue & delay xmits for
-                * some time.
-                */
-               __this_cpu_inc(softnet_data.cpu_collision);
-               ret = dev_requeue_skb(skb, q);
-       }
-
-       return ret;
-}
-
 /*
  * Transmit possibly several skbs, and handle the return status as
  * required. Holding the __QDISC___STATE_RUNNING bit guarantees that
@@ -174,9 +145,6 @@ int sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q,
        if (dev_xmit_complete(ret)) {
                /* Driver sent out skb successfully or skb was consumed */
                ret = qdisc_qlen(q);
-       } else if (ret == NETDEV_TX_LOCKED) {
-               /* Driver try lock failed */
-               ret = handle_dev_cpu_collision(skb, txq, q);
        } else {
                /* Driver returned NETDEV_TX_BUSY - requeue skb */
                if (unlikely(ret != NETDEV_TX_BUSY))
@@ -259,13 +227,12 @@ unsigned long dev_trans_start(struct net_device *dev)
 
        if (is_vlan_dev(dev))
                dev = vlan_dev_real_dev(dev);
-       res = dev->trans_start;
-       for (i = 0; i < dev->num_tx_queues; i++) {
+       res = netdev_get_tx_queue(dev, 0)->trans_start;
+       for (i = 1; i < dev->num_tx_queues; i++) {
                val = netdev_get_tx_queue(dev, i)->trans_start;
                if (val && time_after(val, res))
                        res = val;
        }
-       dev->trans_start = res;
 
        return res;
 }
@@ -288,10 +255,7 @@ static void dev_watchdog(unsigned long arg)
                                struct netdev_queue *txq;
 
                                txq = netdev_get_tx_queue(dev, i);
-                               /*
-                                * old device drivers set dev->trans_start
-                                */
-                               trans_start = txq->trans_start ? : dev->trans_start;
+                               trans_start = txq->trans_start;
                                if (netif_xmit_stopped(txq) &&
                                    time_after(jiffies, (trans_start +
                                                         dev->watchdog_timeo))) {
@@ -807,7 +771,7 @@ void dev_activate(struct net_device *dev)
                transition_one_qdisc(dev, dev_ingress_queue(dev), NULL);
 
        if (need_watchdog) {
-               dev->trans_start = jiffies;
+               netif_trans_update(dev);
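+               /* netif_trans_update() stamps txq 0, replacing the dev->trans_start write */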
                dev_watchdog_up(dev);
        }
 }
index 87b02ed..f6bf581 100644
@@ -1122,10 +1122,12 @@ static int htb_dump_class(struct Qdisc *sch, unsigned long arg,
        if (nla_put(skb, TCA_HTB_PARMS, sizeof(opt), &opt))
                goto nla_put_failure;
        if ((cl->rate.rate_bytes_ps >= (1ULL << 32)) &&
-           nla_put_u64(skb, TCA_HTB_RATE64, cl->rate.rate_bytes_ps))
+           nla_put_u64_64bit(skb, TCA_HTB_RATE64, cl->rate.rate_bytes_ps,
+                             TCA_HTB_PAD))
                goto nla_put_failure;
        if ((cl->ceil.rate_bytes_ps >= (1ULL << 32)) &&
-           nla_put_u64(skb, TCA_HTB_CEIL64, cl->ceil.rate_bytes_ps))
+           nla_put_u64_64bit(skb, TCA_HTB_CEIL64, cl->ceil.rate_bytes_ps,
+                             TCA_HTB_PAD))
                goto nla_put_failure;
 
        return nla_nest_end(skb, nest);
index 9640bb3..205bed0 100644
@@ -395,6 +395,25 @@ static void tfifo_enqueue(struct sk_buff *nskb, struct Qdisc *sch)
        sch->q.qlen++;
 }
 
+/* netem can't properly corrupt a megapacket (like we get from GSO), so when
+ * we statistically choose to corrupt one, we instead segment it, returning
+ * the first packet to be corrupted and re-enqueuing the remaining frames
+ */
+static struct sk_buff *netem_segment(struct sk_buff *skb, struct Qdisc *sch)
+{
+       struct sk_buff *segs;
+       netdev_features_t features = netif_skb_features(skb);
+
+       segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
+
+       if (IS_ERR_OR_NULL(segs)) {
+               qdisc_reshape_fail(skb, sch);
+               return NULL;
+       }
+       consume_skb(skb);
+       return segs;
+}
+
 /*
  * Insert one skb into qdisc.
  * Note: parent depends on return value to account for queue length.
@@ -407,7 +426,11 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch)
        /* We don't fill cb now as skb_unshare() may invalidate it */
        struct netem_skb_cb *cb;
        struct sk_buff *skb2;
+       struct sk_buff *segs = NULL;
+       unsigned int len = 0, last_len, prev_len = qdisc_pkt_len(skb);
+       int nb = 0;
        int count = 1;
+       int rc = NET_XMIT_SUCCESS;
 
        /* Random duplication */
        if (q->duplicate && q->duplicate >= get_crandom(&q->dup_cor))
@@ -453,10 +476,23 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch)
         * do it now in software before we mangle it.
         */
        if (q->corrupt && q->corrupt >= get_crandom(&q->corrupt_cor)) {
+               if (skb_is_gso(skb)) {
+                       segs = netem_segment(skb, sch);
+                       if (!segs)
+                               return NET_XMIT_DROP;
+               } else {
+                       segs = skb;
+               }
+
+               skb = segs;
+               segs = segs->next;
+
                if (!(skb = skb_unshare(skb, GFP_ATOMIC)) ||
                    (skb->ip_summed == CHECKSUM_PARTIAL &&
-                    skb_checksum_help(skb)))
-                       return qdisc_drop(skb, sch);
+                    skb_checksum_help(skb))) {
+                       rc = qdisc_drop(skb, sch);
+                       goto finish_segs;
+               }
 
                skb->data[prandom_u32() % skb_headlen(skb)] ^=
                        1<<(prandom_u32() % 8);
@@ -516,6 +552,27 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch)
                sch->qstats.requeues++;
        }
 
+finish_segs:
+       if (segs) {
+               while (segs) {
+                       skb2 = segs->next;
+                       segs->next = NULL;
+                       qdisc_skb_cb(segs)->pkt_len = segs->len;
+                       last_len = segs->len;
+                       rc = qdisc_enqueue(segs, sch);
+                       if (rc != NET_XMIT_SUCCESS) {
+                               if (net_xmit_drop_count(rc))
+                                       qdisc_qstats_drop(sch);
+                       } else {
+                               nb++;
+                               len += last_len;
+                       }
+                       segs = skb2;
+               }
+               sch->q.qlen += nb;
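+               /* one skb became nb segments: report a negative "reduction"
+                * (1 - nb) so ancestors grow their qlen accordingly
+                */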
+               if (nb > 1)
+                       qdisc_tree_reduce_backlog(sch, 1 - nb, prev_len - len);
+       }
        return NET_XMIT_SUCCESS;
 }
 
@@ -994,7 +1051,8 @@ static int netem_dump(struct Qdisc *sch, struct sk_buff *skb)
                goto nla_put_failure;
 
        if (q->rate >= (1ULL << 32)) {
-               if (nla_put_u64(skb, TCA_NETEM_RATE64, q->rate))
+               if (nla_put_u64_64bit(skb, TCA_NETEM_RATE64, q->rate,
+                                     TCA_NETEM_PAD))
                        goto nla_put_failure;
                rate.rate = ~0U;
        } else {
index c2fbde7..83b90b5 100644
@@ -472,11 +472,13 @@ static int tbf_dump(struct Qdisc *sch, struct sk_buff *skb)
        if (nla_put(skb, TCA_TBF_PARMS, sizeof(opt), &opt))
                goto nla_put_failure;
        if (q->rate.rate_bytes_ps >= (1ULL << 32) &&
-           nla_put_u64(skb, TCA_TBF_RATE64, q->rate.rate_bytes_ps))
+           nla_put_u64_64bit(skb, TCA_TBF_RATE64, q->rate.rate_bytes_ps,
+                             TCA_TBF_PAD))
                goto nla_put_failure;
        if (tbf_peak_present(q) &&
            q->peak.rate_bytes_ps >= (1ULL << 32) &&
-           nla_put_u64(skb, TCA_TBF_PRATE64, q->peak.rate_bytes_ps))
+           nla_put_u64_64bit(skb, TCA_TBF_PRATE64, q->peak.rate_bytes_ps,
+                             TCA_TBF_PAD))
                goto nla_put_failure;
 
        return nla_nest_end(skb, nest);
index 958ef5f..1eb94bf 100644
@@ -239,7 +239,7 @@ struct sctp_datamsg *sctp_datamsg_from_user(struct sctp_association *asoc,
        offset = 0;
 
        if ((whole > 1) || (whole && over))
-               SCTP_INC_STATS_USER(sock_net(asoc->base.sk), SCTP_MIB_FRAGUSRMSGS);
+               SCTP_INC_STATS(sock_net(asoc->base.sk), SCTP_MIB_FRAGUSRMSGS);
 
        /* Create chunks for all the full sized DATA chunks. */
        for (i = 0, len = first_len; i < whole; i++) {
index 00b8445..a701527 100644
@@ -84,7 +84,7 @@ static inline int sctp_rcv_checksum(struct net *net, struct sk_buff *skb)
 
        if (val != cmp) {
                /* CRC failure, dump it. */
-               SCTP_INC_STATS_BH(net, SCTP_MIB_CHECKSUMERRORS);
+               __SCTP_INC_STATS(net, SCTP_MIB_CHECKSUMERRORS);
                return -1;
        }
        return 0;
@@ -122,7 +122,7 @@ int sctp_rcv(struct sk_buff *skb)
        if (skb->pkt_type != PACKET_HOST)
                goto discard_it;
 
-       SCTP_INC_STATS_BH(net, SCTP_MIB_INSCTPPACKS);
+       __SCTP_INC_STATS(net, SCTP_MIB_INSCTPPACKS);
 
        if (skb_linearize(skb))
                goto discard_it;
@@ -208,7 +208,7 @@ int sctp_rcv(struct sk_buff *skb)
         */
        if (!asoc) {
                if (sctp_rcv_ootb(skb)) {
-                       SCTP_INC_STATS_BH(net, SCTP_MIB_OUTOFBLUES);
+                       __SCTP_INC_STATS(net, SCTP_MIB_OUTOFBLUES);
                        goto discard_release;
                }
        }
@@ -264,9 +264,9 @@ int sctp_rcv(struct sk_buff *skb)
                        skb = NULL; /* sctp_chunk_free already freed the skb */
                        goto discard_release;
                }
-               SCTP_INC_STATS_BH(net, SCTP_MIB_IN_PKT_BACKLOG);
+               __SCTP_INC_STATS(net, SCTP_MIB_IN_PKT_BACKLOG);
        } else {
-               SCTP_INC_STATS_BH(net, SCTP_MIB_IN_PKT_SOFTIRQ);
+               __SCTP_INC_STATS(net, SCTP_MIB_IN_PKT_SOFTIRQ);
                sctp_inq_push(&chunk->rcvr->inqueue, chunk);
        }
 
@@ -281,7 +281,7 @@ int sctp_rcv(struct sk_buff *skb)
        return 0;
 
 discard_it:
-       SCTP_INC_STATS_BH(net, SCTP_MIB_IN_PKT_DISCARDS);
+       __SCTP_INC_STATS(net, SCTP_MIB_IN_PKT_DISCARDS);
        kfree_skb(skb);
        return 0;
 
@@ -532,7 +532,7 @@ struct sock *sctp_err_lookup(struct net *net, int family, struct sk_buff *skb,
         * servers this needs to be solved differently.
         */
        if (sock_owned_by_user(sk))
-               NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS);
+               __NET_INC_STATS(net, LINUX_MIB_LOCKDROPPEDICMPS);
 
        *app = asoc;
        *tpp = transport;
@@ -589,7 +589,7 @@ void sctp_v4_err(struct sk_buff *skb, __u32 info)
        skb->network_header = saveip;
        skb->transport_header = savesctp;
        if (!sk) {
-               ICMP_INC_STATS_BH(net, ICMP_MIB_INERRORS);
+               __ICMP_INC_STATS(net, ICMP_MIB_INERRORS);
                return;
        }
        /* Warning:  The sock lock is held.  Remember to call
index b335ffc..9d87bba 100644
@@ -89,10 +89,12 @@ void sctp_inq_push(struct sctp_inq *q, struct sctp_chunk *chunk)
         * Eventually, we should clean up inqueue to not rely
         * on the BH related data structures.
         */
+       local_bh_disable();
        list_add_tail(&chunk->list, &q->in_chunk_list);
        if (chunk->asoc)
                chunk->asoc->stats.ipackets++;
        q->immediate.func(&q->immediate);
+       local_bh_enable();
 }
 
 /* Peek at the next chunk on the inqueue. */
index ce46f1c..0657d18 100644
@@ -162,7 +162,7 @@ static void sctp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
        skb->network_header   = saveip;
        skb->transport_header = savesctp;
        if (!sk) {
-               ICMP6_INC_STATS_BH(net, idev, ICMP6_MIB_INERRORS);
+               __ICMP6_INC_STATS(net, idev, ICMP6_MIB_INERRORS);
                goto out;
        }
 
index bb2d8d9..8e3e769 100644
@@ -145,7 +145,11 @@ static int inet_sctp_diag_fill(struct sock *sk, struct sctp_association *asoc,
                else
                        amt = sk_wmem_alloc_get(sk);
                mem[SK_MEMINFO_WMEM_ALLOC] = amt;
-               mem[SK_MEMINFO_RMEM_ALLOC] = sk_rmem_alloc_get(sk);
+               if (asoc && asoc->ep->rcvbuf_policy)
+                       amt = atomic_read(&asoc->rmem_alloc);
+               else
+                       amt = sk_rmem_alloc_get(sk);
+               mem[SK_MEMINFO_RMEM_ALLOC] = amt;
                mem[SK_MEMINFO_RCVBUF] = sk->sk_rcvbuf;
                mem[SK_MEMINFO_SNDBUF] = sk->sk_sndbuf;
                mem[SK_MEMINFO_FWD_ALLOC] = sk->sk_forward_alloc;
@@ -161,8 +165,9 @@ static int inet_sctp_diag_fill(struct sock *sk, struct sctp_association *asoc,
        if (ext & (1 << (INET_DIAG_INFO - 1))) {
                struct nlattr *attr;
 
-               attr = nla_reserve(skb, INET_DIAG_INFO,
-                                  sizeof(struct sctp_info));
+               attr = nla_reserve_64bit(skb, INET_DIAG_INFO,
+                                        sizeof(struct sctp_info),
+                                        INET_DIAG_PAD);
                if (!attr)
                        goto errout;
 
index e8f0112..aa37122 100644
@@ -1741,10 +1741,9 @@ out:
        } else if (local_cork)
                error = sctp_outq_uncork(&asoc->outqueue, gfp);
 
-       if (sp->pending_data_ready) {
-               sk->sk_data_ready(sk);
-               sp->pending_data_ready = 0;
-       }
+       if (sp->data_ready_signalled)
+               sp->data_ready_signalled = 0;
+
        return error;
 nomem:
        error = -ENOMEM;
index ec12a89..ec166d2 100644 (file)
@@ -194,6 +194,7 @@ static int sctp_ulpq_clear_pd(struct sctp_ulpq *ulpq)
 int sctp_ulpq_tail_event(struct sctp_ulpq *ulpq, struct sctp_ulpevent *event)
 {
        struct sock *sk = ulpq->asoc->base.sk;
+       struct sctp_sock *sp = sctp_sk(sk);
        struct sk_buff_head *queue, *skb_list;
        struct sk_buff *skb = sctp_event2skb(event);
        int clear_pd = 0;
@@ -211,7 +212,7 @@ int sctp_ulpq_tail_event(struct sctp_ulpq *ulpq, struct sctp_ulpevent *event)
                sk_incoming_cpu_update(sk);
        }
        /* Check if the user wishes to receive this event.  */
-       if (!sctp_ulpevent_is_enabled(event, &sctp_sk(sk)->subscribe))
+       if (!sctp_ulpevent_is_enabled(event, &sp->subscribe))
                goto out_free;
 
        /* If we are in partial delivery mode, post to the lobby until
@@ -219,7 +220,7 @@ int sctp_ulpq_tail_event(struct sctp_ulpq *ulpq, struct sctp_ulpevent *event)
         * the association the cause of the partial delivery.
         */
 
-       if (atomic_read(&sctp_sk(sk)->pd_mode) == 0) {
+       if (atomic_read(&sp->pd_mode) == 0) {
                queue = &sk->sk_receive_queue;
        } else {
                if (ulpq->pd_mode) {
@@ -231,7 +232,7 @@ int sctp_ulpq_tail_event(struct sctp_ulpq *ulpq, struct sctp_ulpevent *event)
                        if ((event->msg_flags & MSG_NOTIFICATION) ||
                            (SCTP_DATA_NOT_FRAG ==
                                    (event->msg_flags & SCTP_DATA_FRAG_MASK)))
-                               queue = &sctp_sk(sk)->pd_lobby;
+                               queue = &sp->pd_lobby;
                        else {
                                clear_pd = event->msg_flags & MSG_EOR;
                                queue = &sk->sk_receive_queue;
@@ -242,10 +243,10 @@ int sctp_ulpq_tail_event(struct sctp_ulpq *ulpq, struct sctp_ulpevent *event)
                         * can queue this to the receive queue instead
                         * of the lobby.
                         */
-                       if (sctp_sk(sk)->frag_interleave)
+                       if (sp->frag_interleave)
                                queue = &sk->sk_receive_queue;
                        else
-                               queue = &sctp_sk(sk)->pd_lobby;
+                               queue = &sp->pd_lobby;
                }
        }
 
@@ -264,8 +265,10 @@ int sctp_ulpq_tail_event(struct sctp_ulpq *ulpq, struct sctp_ulpevent *event)
        if (clear_pd)
                sctp_ulpq_clear_pd(ulpq);
 
-       if (queue == &sk->sk_receive_queue)
-               sctp_sk(sk)->pending_data_ready = 1;
+       if (queue == &sk->sk_receive_queue && !sp->data_ready_signalled) {
+               sp->data_ready_signalled = 1;
+               sk->sk_data_ready(sk);
+       }
        return 1;
 
 out_free:
@@ -1126,11 +1129,13 @@ void sctp_ulpq_abort_pd(struct sctp_ulpq *ulpq, gfp_t gfp)
 {
        struct sctp_ulpevent *ev = NULL;
        struct sock *sk;
+       struct sctp_sock *sp;
 
        if (!ulpq->pd_mode)
                return;
 
        sk = ulpq->asoc->base.sk;
+       sp = sctp_sk(sk);
        if (sctp_ulpevent_type_enabled(SCTP_PARTIAL_DELIVERY_EVENT,
                                       &sctp_sk(sk)->subscribe))
                ev = sctp_ulpevent_make_pdapi(ulpq->asoc,
@@ -1140,6 +1145,8 @@ void sctp_ulpq_abort_pd(struct sctp_ulpq *ulpq, gfp_t gfp)
                __skb_queue_tail(&sk->sk_receive_queue, sctp_event2skb(ev));
 
        /* If there is data waiting, send it up the socket now. */
-       if (sctp_ulpq_clear_pd(ulpq) || ev)
-               sctp_sk(sk)->pending_data_ready = 1;
+       if ((sctp_ulpq_clear_pd(ulpq) || ev) && !sp->data_ready_signalled) {
+               sp->data_ready_signalled = 1;
+               sk->sk_data_ready(sk);
+       }
 }
index 5dbb0bb..7789d79 100644 (file)
@@ -600,9 +600,6 @@ void __sock_tx_timestamp(__u16 tsflags, __u8 *tx_flags)
        if (tsflags & SOF_TIMESTAMPING_TX_SCHED)
                flags |= SKBTX_SCHED_TSTAMP;
 
-       if (tsflags & SOF_TIMESTAMPING_TX_ACK)
-               flags |= SKBTX_ACK_TSTAMP;
-
        *tx_flags = flags;
 }
 EXPORT_SYMBOL(__sock_tx_timestamp);
index d0756ac..a6c68dc 100644 (file)
@@ -1018,11 +1018,11 @@ static void xs_udp_data_read_skb(struct rpc_xprt *xprt,
 
        /* Suck it into the iovec, verify checksum if not done by hw. */
        if (csum_partial_copy_to_xdr(&rovr->rq_private_buf, skb)) {
-               UDPX_INC_STATS_BH(sk, UDP_MIB_INERRORS);
+               __UDPX_INC_STATS(sk, UDP_MIB_INERRORS);
                goto out_unlock;
        }
 
-       UDPX_INC_STATS_BH(sk, UDP_MIB_INDATAGRAMS);
+       __UDPX_INC_STATS(sk, UDP_MIB_INDATAGRAMS);
 
        xprt_adjust_cwnd(xprt, task, copied);
        xprt_complete_rqst(task, copied);
index 2b9b98f..b7e01d8 100644 (file)
@@ -305,6 +305,8 @@ static void switchdev_port_attr_set_deferred(struct net_device *dev,
        if (err && err != -EOPNOTSUPP)
                netdev_err(dev, "failed (err=%d) to set attribute (id=%d)\n",
                           err, attr->id);
+       if (attr->complete)
+               attr->complete(dev, err, attr->complete_priv);
 }
 
 static int switchdev_port_attr_set_defer(struct net_device *dev,
@@ -434,6 +436,8 @@ static void switchdev_port_obj_add_deferred(struct net_device *dev,
        if (err && err != -EOPNOTSUPP)
                netdev_err(dev, "failed (err=%d) to add object (id=%d)\n",
                           err, obj->id);
+       if (obj->complete)
+               obj->complete(dev, err, obj->complete_priv);
 }
 
 static int switchdev_port_obj_add_defer(struct net_device *dev,
@@ -502,6 +506,8 @@ static void switchdev_port_obj_del_deferred(struct net_device *dev,
        if (err && err != -EOPNOTSUPP)
                netdev_err(dev, "failed (err=%d) to del object (id=%d)\n",
                           err, obj->id);
+       if (obj->complete)
+               obj->complete(dev, err, obj->complete_priv);
 }
 
 static int switchdev_port_obj_del_defer(struct net_device *dev,
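The added lines give deferred switchdev operations an optional completion callback. A minimal sketch of a caller using it; the (dev, err, priv) signature comes from this series, while the waiter type and helper are hypothetical:

/* Sketch, assuming the complete/complete_priv fields added by this
 * series; deferred_waiter and attr_set_complete are hypothetical.
 */
#include <linux/completion.h>
#include <net/switchdev.h>

struct deferred_waiter {
	struct completion done;
	int err;
};

static void attr_set_complete(struct net_device *dev, int err, void *priv)
{
	struct deferred_waiter *w = priv;

	w->err = err;		/* result of the deferred attr set */
	complete(&w->done);	/* wake up the waiting caller */
}

/* Caller side, before a deferred switchdev_port_attr_set():
 *
 *	init_completion(&w.done);
 *	attr.complete_priv = &w;
 *	attr.complete = attr_set_complete;
 */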
index e2bdb07..fe1b062 100644 (file)
@@ -112,11 +112,9 @@ static int __init tipc_init(void)
 
        pr_info("Activated (version " TIPC_MOD_VER ")\n");
 
-       sysctl_tipc_rmem[0] = TIPC_CONN_OVERLOAD_LIMIT >> 4 <<
-                             TIPC_LOW_IMPORTANCE;
-       sysctl_tipc_rmem[1] = TIPC_CONN_OVERLOAD_LIMIT >> 4 <<
-                             TIPC_CRITICAL_IMPORTANCE;
-       sysctl_tipc_rmem[2] = TIPC_CONN_OVERLOAD_LIMIT;
+       sysctl_tipc_rmem[0] = RCVBUF_MIN;
+       sysctl_tipc_rmem[1] = RCVBUF_DEF;
+       sysctl_tipc_rmem[2] = RCVBUF_MAX;
 
        err = tipc_netlink_start();
        if (err)
index 58bf515..024da8a 100644 (file)
@@ -743,16 +743,26 @@ static inline void msg_set_msgcnt(struct tipc_msg *m, u16 n)
        msg_set_bits(m, 9, 16, 0xffff, n);
 }
 
-static inline u32 msg_bcast_tag(struct tipc_msg *m)
+static inline u32 msg_conn_ack(struct tipc_msg *m)
 {
        return msg_bits(m, 9, 16, 0xffff);
 }
 
-static inline void msg_set_bcast_tag(struct tipc_msg *m, u32 n)
+static inline void msg_set_conn_ack(struct tipc_msg *m, u32 n)
 {
        msg_set_bits(m, 9, 16, 0xffff, n);
 }
 
+static inline u32 msg_adv_win(struct tipc_msg *m)
+{
+       return msg_bits(m, 9, 0, 0xffff);
+}
+
+static inline void msg_set_adv_win(struct tipc_msg *m, u32 n)
+{
+       msg_set_bits(m, 9, 0, 0xffff, n);
+}
+
 static inline u32 msg_max_pkt(struct tipc_msg *m)
 {
        return msg_bits(m, 9, 16, 0xffff) * 4;
index 68d9f7b..d903f56 100644 (file)
@@ -1,7 +1,7 @@
 /*
  * net/tipc/node.c: TIPC node management routines
  *
- * Copyright (c) 2000-2006, 2012-2015, Ericsson AB
+ * Copyright (c) 2000-2006, 2012-2016, Ericsson AB
  * Copyright (c) 2005-2006, 2010-2014, Wind River Systems
  * All rights reserved.
  *
@@ -191,6 +191,20 @@ int tipc_node_get_mtu(struct net *net, u32 addr, u32 sel)
        tipc_node_put(n);
        return mtu;
 }
+
+u16 tipc_node_get_capabilities(struct net *net, u32 addr)
+{
+       struct tipc_node *n;
+       u16 caps;
+
+       n = tipc_node_find(net, addr);
+       if (unlikely(!n))
+               return TIPC_NODE_CAPABILITIES;
+       caps = n->capabilities;
+       tipc_node_put(n);
+       return caps;
+}
+
 /*
  * A trivial power-of-two bitmask technique is used for speed, since this
  * operation is done for every incoming TIPC packet. The number of hash table
@@ -304,8 +318,11 @@ struct tipc_node *tipc_node_create(struct net *net, u32 addr, u16 capabilities)
 
        spin_lock_bh(&tn->node_list_lock);
        n = tipc_node_find(net, addr);
-       if (n)
+       if (n) {
+               /* Same node may come back with new capabilities */
+               n->capabilities = capabilities;
                goto exit;
+       }
        n = kzalloc(sizeof(*n), GFP_ATOMIC);
        if (!n) {
                pr_warn("Node creation failed, no memory\n");
@@ -554,6 +571,7 @@ static void __tipc_node_link_up(struct tipc_node *n, int bearer_id,
                *slot1 = bearer_id;
                tipc_node_fsm_evt(n, SELF_ESTABL_CONTACT_EVT);
                n->action_flags |= TIPC_NOTIFY_NODE_UP;
+               tipc_link_set_active(nl, true);
                tipc_bcast_add_peer(n->net, nl, xmitq);
                return;
        }
@@ -1451,6 +1469,7 @@ void tipc_rcv(struct net *net, struct sk_buff *skb, struct tipc_bearer *b)
        int bearer_id = b->identity;
        struct tipc_link_entry *le;
        u16 bc_ack = msg_bcast_ack(hdr);
+       u32 self = tipc_own_addr(net);
        int rc = 0;
 
        __skb_queue_head_init(&xmitq);
@@ -1467,6 +1486,10 @@ void tipc_rcv(struct net *net, struct sk_buff *skb, struct tipc_bearer *b)
                        return tipc_node_bc_rcv(net, skb, bearer_id);
        }
 
+       /* Discard unicast link messages destined for another node */
+       if (unlikely(!msg_short(hdr) && (msg_destnode(hdr) != self)))
+               goto discard;
+
        /* Locate neighboring node that sent packet */
        n = tipc_node_find(net, msg_prevnode(hdr));
        if (unlikely(!n))
index f39d9d0..8264b3d 100644 (file)
 /* Optional capabilities supported by this code version
  */
 enum {
-       TIPC_BCAST_SYNCH = (1 << 1)
+       TIPC_BCAST_SYNCH   = (1 << 1),
+       TIPC_BLOCK_FLOWCTL = (1 << 2)
 };
 
-#define TIPC_NODE_CAPABILITIES TIPC_BCAST_SYNCH
+#define TIPC_NODE_CAPABILITIES (TIPC_BCAST_SYNCH | TIPC_BLOCK_FLOWCTL)
 #define INVALID_BEARER_ID -1
 
 void tipc_node_stop(struct net *net);
@@ -70,6 +71,7 @@ void tipc_node_broadcast(struct net *net, struct sk_buff *skb);
 int tipc_node_add_conn(struct net *net, u32 dnode, u32 port, u32 peer_port);
 void tipc_node_remove_conn(struct net *net, u32 dnode, u32 port);
 int tipc_node_get_mtu(struct net *net, u32 addr, u32 sel);
+u16 tipc_node_get_capabilities(struct net *net, u32 addr);
 int tipc_nl_node_dump(struct sk_buff *skb, struct netlink_callback *cb);
 int tipc_nl_node_dump_link(struct sk_buff *skb, struct netlink_callback *cb);
 int tipc_nl_node_reset_link_stats(struct sk_buff *skb, struct genl_info *info);
index 3eeb50a..1262889 100644 (file)
@@ -96,8 +96,11 @@ struct tipc_sock {
        uint conn_timeout;
        atomic_t dupl_rcvcnt;
        bool link_cong;
-       uint sent_unacked;
-       uint rcv_unacked;
+       u16 snt_unacked;
+       u16 snd_win;
+       u16 peer_caps;
+       u16 rcv_unacked;
+       u16 rcv_win;
        struct sockaddr_tipc remote;
        struct rhash_head node;
        struct rcu_head rcu;
@@ -227,9 +230,29 @@ static struct tipc_sock *tipc_sk(const struct sock *sk)
        return container_of(sk, struct tipc_sock, sk);
 }
 
-static int tsk_conn_cong(struct tipc_sock *tsk)
+static bool tsk_conn_cong(struct tipc_sock *tsk)
 {
-       return tsk->sent_unacked >= TIPC_FLOWCTRL_WIN;
+       return tsk->snt_unacked >= tsk->snd_win;
+}
+
+/* tsk_adv_blocks(): translate a buffer size in bytes to a number of
+ * advertisable blocks, taking into account the ratio truesize(len)/len.
+ * We can trust that this ratio is always < 4 for len >= FLOWCTL_BLK_SZ.
+ */
+static u16 tsk_adv_blocks(int len)
+{
+       return len / FLOWCTL_BLK_SZ / 4;
+}
+
+/* tsk_inc(): increment the counter for sent or received data
+ * - If block based flow control is not supported by the peer we
+ *   fall back to message based flow control, incrementing the counter
+ */
+static u16 tsk_inc(struct tipc_sock *tsk, int msglen)
+{
+       if (likely(tsk->peer_caps & TIPC_BLOCK_FLOWCTL))
+               return ((msglen / FLOWCTL_BLK_SZ) + 1);
+       return 1;
 }
 
 /**
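For reference, the block accounting above can be exercised standalone (a sketch, assuming the FLOWCTL_BLK_SZ = 1024 and RCVBUF_MIN = 512 KiB constants from the socket.h hunk further down):

#include <stdio.h>

#define FLOWCTL_BLK_SZ	1024
#define RCVBUF_MIN	(FLOWCTL_BLK_SZ * 512)	/* 512 KiB */

/* Mirrors tsk_adv_blocks(): bytes -> advertisable blocks, with the
 * 4x truesize(len)/len safety margin folded into the divisor.
 */
static unsigned short adv_blocks(int len)
{
	return len / FLOWCTL_BLK_SZ / 4;
}

/* Mirrors tsk_inc() when the peer supports TIPC_BLOCK_FLOWCTL. */
static unsigned short blk_inc(int msglen)
{
	return msglen / FLOWCTL_BLK_SZ + 1;
}

int main(void)
{
	/* Initial window: 524288 / 1024 / 4 = 128 blocks. */
	printf("snd_win: %u blocks\n", adv_blocks(RCVBUF_MIN));
	/* A 1500-byte message costs 1500 / 1024 + 1 = 2 blocks. */
	printf("1500-byte msg: %u blocks\n", blk_inc(1500));
	return 0;
}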
@@ -377,9 +400,12 @@ static int tipc_sk_create(struct net *net, struct socket *sock,
        sk->sk_write_space = tipc_write_space;
        sk->sk_destruct = tipc_sock_destruct;
        tsk->conn_timeout = CONN_TIMEOUT_DEFAULT;
-       tsk->sent_unacked = 0;
        atomic_set(&tsk->dupl_rcvcnt, 0);
 
+       /* Start out with safe limits until we receive an advertised window */
+       tsk->snd_win = tsk_adv_blocks(RCVBUF_MIN);
+       tsk->rcv_win = tsk->snd_win;
+
        if (sock->state == SS_READY) {
                tsk_set_unreturnable(tsk, true);
                if (sock->type == SOCK_DGRAM)
@@ -775,7 +801,7 @@ static void tipc_sk_proto_rcv(struct tipc_sock *tsk, struct sk_buff *skb)
        struct sock *sk = &tsk->sk;
        struct tipc_msg *hdr = buf_msg(skb);
        int mtyp = msg_type(hdr);
-       int conn_cong;
+       bool conn_cong;
 
        /* Ignore if connection cannot be validated: */
        if (!tsk_peer_msg(tsk, hdr))
@@ -789,7 +815,9 @@ static void tipc_sk_proto_rcv(struct tipc_sock *tsk, struct sk_buff *skb)
                return;
        } else if (mtyp == CONN_ACK) {
                conn_cong = tsk_conn_cong(tsk);
-               tsk->sent_unacked -= msg_msgcnt(hdr);
+               tsk->snt_unacked -= msg_conn_ack(hdr);
+               if (tsk->peer_caps & TIPC_BLOCK_FLOWCTL)
+                       tsk->snd_win = msg_adv_win(hdr);
                if (conn_cong)
                        sk->sk_write_space(sk);
        } else if (mtyp != CONN_PROBE_REPLY) {
@@ -1020,12 +1048,14 @@ static int __tipc_send_stream(struct socket *sock, struct msghdr *m, size_t dsz)
        u32 dnode;
        uint mtu, send, sent = 0;
        struct iov_iter save;
+       int hlen = MIN_H_SIZE;
 
        /* Handle implied connection establishment */
        if (unlikely(dest)) {
                rc = __tipc_sendmsg(sock, m, dsz);
+               hlen = msg_hdr_sz(mhdr);
                if (dsz && (dsz == rc))
-                       tsk->sent_unacked = 1;
+                       tsk->snt_unacked = tsk_inc(tsk, dsz + hlen);
                return rc;
        }
        if (dsz > (uint)INT_MAX)
@@ -1054,7 +1084,7 @@ next:
                if (likely(!tsk_conn_cong(tsk))) {
                        rc = tipc_node_xmit(net, &pktchain, dnode, portid);
                        if (likely(!rc)) {
-                               tsk->sent_unacked++;
+                               tsk->snt_unacked += tsk_inc(tsk, send + hlen);
                                sent += send;
                                if (sent == dsz)
                                        return dsz;
@@ -1118,6 +1148,13 @@ static void tipc_sk_finish_conn(struct tipc_sock *tsk, u32 peer_port,
        sk_reset_timer(sk, &sk->sk_timer, jiffies + tsk->probing_intv);
        tipc_node_add_conn(net, peer_node, tsk->portid, peer_port);
        tsk->max_pkt = tipc_node_get_mtu(net, peer_node, tsk->portid);
+       tsk->peer_caps = tipc_node_get_capabilities(net, peer_node);
+       if (tsk->peer_caps & TIPC_BLOCK_FLOWCTL)
+               return;
+
+       /* Fall back to message based flow control */
+       tsk->rcv_win = FLOWCTL_MSG_WIN;
+       tsk->snd_win = FLOWCTL_MSG_WIN;
 }
 
 /**
@@ -1214,7 +1251,7 @@ static int tipc_sk_anc_data_recv(struct msghdr *m, struct tipc_msg *msg,
        return 0;
 }
 
-static void tipc_sk_send_ack(struct tipc_sock *tsk, uint ack)
+static void tipc_sk_send_ack(struct tipc_sock *tsk)
 {
        struct net *net = sock_net(&tsk->sk);
        struct sk_buff *skb = NULL;
@@ -1230,7 +1267,14 @@ static void tipc_sk_send_ack(struct tipc_sock *tsk, uint ack)
        if (!skb)
                return;
        msg = buf_msg(skb);
-       msg_set_msgcnt(msg, ack);
+       msg_set_conn_ack(msg, tsk->rcv_unacked);
+       tsk->rcv_unacked = 0;
+
+       /* Adjust to and advertise the correct window limit */
+       if (tsk->peer_caps & TIPC_BLOCK_FLOWCTL) {
+               tsk->rcv_win = tsk_adv_blocks(tsk->sk.sk_rcvbuf);
+               msg_set_adv_win(msg, tsk->rcv_win);
+       }
        tipc_node_xmit_skb(net, skb, dnode, msg_link_selector(msg));
 }
 
@@ -1288,7 +1332,7 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m, size_t buf_len,
        long timeo;
        unsigned int sz;
        u32 err;
-       int res;
+       int res, hlen;
 
        /* Catch invalid receive requests */
        if (unlikely(!buf_len))
@@ -1313,6 +1357,7 @@ restart:
        buf = skb_peek(&sk->sk_receive_queue);
        msg = buf_msg(buf);
        sz = msg_data_sz(msg);
+       hlen = msg_hdr_sz(msg);
        err = msg_errcode(msg);
 
        /* Discard an empty non-errored message & try again */
@@ -1335,7 +1380,7 @@ restart:
                        sz = buf_len;
                        m->msg_flags |= MSG_TRUNC;
                }
-               res = skb_copy_datagram_msg(buf, msg_hdr_sz(msg), m, sz);
+               res = skb_copy_datagram_msg(buf, hlen, m, sz);
                if (res)
                        goto exit;
                res = sz;
@@ -1347,15 +1392,15 @@ restart:
                        res = -ECONNRESET;
        }
 
-       /* Consume received message (optional) */
-       if (likely(!(flags & MSG_PEEK))) {
-               if ((sock->state != SS_READY) &&
-                   (++tsk->rcv_unacked >= TIPC_CONNACK_INTV)) {
-                       tipc_sk_send_ack(tsk, tsk->rcv_unacked);
-                       tsk->rcv_unacked = 0;
-               }
-               tsk_advance_rx_queue(sk);
+       if (unlikely(flags & MSG_PEEK))
+               goto exit;
+
+       if (likely(sock->state != SS_READY)) {
+               tsk->rcv_unacked += tsk_inc(tsk, hlen + sz);
+               if (unlikely(tsk->rcv_unacked >= (tsk->rcv_win / 4)))
+                       tipc_sk_send_ack(tsk);
        }
+       tsk_advance_rx_queue(sk);
 exit:
        release_sock(sk);
        return res;
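Spelling out the new ack cadence with the defaults (a sketch, not part of the patch):

/* With the RCVBUF_MIN default the initial window is 128 blocks, so
 *
 *	rcv_win / 4 = 128 / 4 = 32 blocks
 *
 * i.e. one CONN_ACK per roughly 32 KiB of received data, instead of
 * the previous fixed one-ack-per-256-messages (TIPC_CONNACK_INTV)
 * interval.
 */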
@@ -1384,7 +1429,7 @@ static int tipc_recv_stream(struct socket *sock, struct msghdr *m,
        int sz_to_copy, target, needed;
        int sz_copied = 0;
        u32 err;
-       int res = 0;
+       int res = 0, hlen;
 
        /* Catch invalid receive attempts */
        if (unlikely(!buf_len))
@@ -1410,6 +1455,7 @@ restart:
        buf = skb_peek(&sk->sk_receive_queue);
        msg = buf_msg(buf);
        sz = msg_data_sz(msg);
+       hlen = msg_hdr_sz(msg);
        err = msg_errcode(msg);
 
        /* Discard an empty non-errored message & try again */
@@ -1434,8 +1480,7 @@ restart:
                needed = (buf_len - sz_copied);
                sz_to_copy = (sz <= needed) ? sz : needed;
 
-               res = skb_copy_datagram_msg(buf, msg_hdr_sz(msg) + offset,
-                                           m, sz_to_copy);
+               res = skb_copy_datagram_msg(buf, hlen + offset, m, sz_to_copy);
                if (res)
                        goto exit;
 
@@ -1457,20 +1502,18 @@ restart:
                        res = -ECONNRESET;
        }
 
-       /* Consume received message (optional) */
-       if (likely(!(flags & MSG_PEEK))) {
-               if (unlikely(++tsk->rcv_unacked >= TIPC_CONNACK_INTV)) {
-                       tipc_sk_send_ack(tsk, tsk->rcv_unacked);
-                       tsk->rcv_unacked = 0;
-               }
-               tsk_advance_rx_queue(sk);
-       }
+       if (unlikely(flags & MSG_PEEK))
+               goto exit;
+
+       tsk->rcv_unacked += tsk_inc(tsk, hlen + sz);
+       if (unlikely(tsk->rcv_unacked >= (tsk->rcv_win / 4)))
+               tipc_sk_send_ack(tsk);
+       tsk_advance_rx_queue(sk);
 
        /* Loop around if more data is required */
        if ((sz_copied < buf_len) &&    /* didn't get all requested data */
            (!skb_queue_empty(&sk->sk_receive_queue) ||
            (sz_copied < target)) &&    /* and more is ready or required */
-           (!(flags & MSG_PEEK)) &&    /* and aren't just peeking at data */
            (!err))                     /* and haven't reached a FIN */
                goto restart;
 
@@ -1602,30 +1645,33 @@ static bool filter_connect(struct tipc_sock *tsk, struct sk_buff *skb)
 /**
  * rcvbuf_limit - get proper overload limit of socket receive queue
  * @sk: socket
- * @buf: message
+ * @skb: message
  *
- * For all connection oriented messages, irrespective of importance,
- * the default overload value (i.e. 67MB) is set as limit.
+ * For connection oriented messages, irrespective of importance,
+ * default queue limit is 2 MB.
  *
- * For all connectionless messages, by default new queue limits are
- * as belows:
+ * For connectionless messages, queue limits are based on message
+ * importance as follows:
  *
- * TIPC_LOW_IMPORTANCE       (4 MB)
- * TIPC_MEDIUM_IMPORTANCE    (8 MB)
- * TIPC_HIGH_IMPORTANCE      (16 MB)
- * TIPC_CRITICAL_IMPORTANCE  (32 MB)
+ * TIPC_LOW_IMPORTANCE       (2 MB)
+ * TIPC_MEDIUM_IMPORTANCE    (4 MB)
+ * TIPC_HIGH_IMPORTANCE      (8 MB)
+ * TIPC_CRITICAL_IMPORTANCE  (16 MB)
  *
  * Returns overload limit according to corresponding message importance
  */
-static unsigned int rcvbuf_limit(struct sock *sk, struct sk_buff *buf)
+static unsigned int rcvbuf_limit(struct sock *sk, struct sk_buff *skb)
 {
-       struct tipc_msg *msg = buf_msg(buf);
+       struct tipc_sock *tsk = tipc_sk(sk);
+       struct tipc_msg *hdr = buf_msg(skb);
+
+       if (unlikely(!msg_connected(hdr)))
+               return sk->sk_rcvbuf << msg_importance(hdr);
 
-       if (msg_connected(msg))
-               return sysctl_tipc_rmem[2];
+       if (likely(tsk->peer_caps & TIPC_BLOCK_FLOWCTL))
+               return sk->sk_rcvbuf;
 
-       return sk->sk_rcvbuf >> TIPC_CRITICAL_IMPORTANCE <<
-               msg_importance(msg);
+       return FLOWCTL_MSG_LIM;
 }
 
 /**
@@ -1748,7 +1794,7 @@ static void tipc_sk_enqueue(struct sk_buff_head *inputq, struct sock *sk,
 
                /* Try backlog, compensating for double-counted bytes */
                dcnt = &tipc_sk(sk)->dupl_rcvcnt;
-               if (sk->sk_backlog.len)
+               if (!sk->sk_backlog.len)
                        atomic_set(dcnt, 0);
                lim = rcvbuf_limit(sk, skb) + atomic_read(dcnt);
                if (likely(!sk_add_backlog(sk, skb, lim)))
index 4241f22..06fb594 100644 (file)
@@ -1,6 +1,6 @@
 /* net/tipc/socket.h: Include file for TIPC socket code
  *
- * Copyright (c) 2014-2015, Ericsson AB
+ * Copyright (c) 2014-2016, Ericsson AB
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
 #include <net/sock.h>
 #include <net/genetlink.h>
 
-#define TIPC_CONNACK_INTV         256
-#define TIPC_FLOWCTRL_WIN        (TIPC_CONNACK_INTV * 2)
-#define TIPC_CONN_OVERLOAD_LIMIT ((TIPC_FLOWCTRL_WIN * 2 + 1) * \
-                                 SKB_TRUESIZE(TIPC_MAX_USER_MSG_SIZE))
+/* Compatibility values for deprecated message based flow control */
+#define FLOWCTL_MSG_WIN 512
+#define FLOWCTL_MSG_LIM ((FLOWCTL_MSG_WIN * 2 + 1) * SKB_TRUESIZE(MAX_MSG_SIZE))
+
+#define FLOWCTL_BLK_SZ 1024
+
+/* Socket receive buffer sizes */
+#define RCVBUF_MIN  (FLOWCTL_BLK_SZ * 512)
+#define RCVBUF_DEF  (FLOWCTL_BLK_SZ * 1024 * 2)
+#define RCVBUF_MAX  (FLOWCTL_BLK_SZ * 1024 * 16)
+
 int tipc_socket_init(void);
 void tipc_socket_stop(void);
 void tipc_sk_rcv(struct net *net, struct sk_buff_head *inputq);
index 79de588..0dd0224 100644 (file)
@@ -326,8 +326,7 @@ static void tipc_subscrb_rcv_cb(struct net *net, int conid,
                return tipc_subscrp_cancel(s, subscriber);
        }
 
-       if (s)
-               tipc_subscrp_subscribe(net, s, subscriber, swap);
+       tipc_subscrp_subscribe(net, s, subscriber, swap);
 }
 
 /* Handle one request to establish a new subscriber */
index 5621473..4120b7a 100644 (file)
@@ -2051,7 +2051,7 @@ static u32 vmci_transport_get_local_cid(void)
        return vmci_get_context_id();
 }
 
-static struct vsock_transport vmci_transport = {
+static const struct vsock_transport vmci_transport = {
        .init = vmci_transport_socket_init,
        .destruct = vmci_transport_destruct,
        .release = vmci_transport_release,
index fd7f34a..afeb1ef 100644 (file)
@@ -2429,7 +2429,8 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flag
 
        if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) ||
            nla_put_u32(msg, NL80211_ATTR_IFTYPE, wdev->iftype) ||
-           nla_put_u64(msg, NL80211_ATTR_WDEV, wdev_id(wdev)) ||
+           nla_put_u64_64bit(msg, NL80211_ATTR_WDEV, wdev_id(wdev),
+                             NL80211_ATTR_PAD) ||
            nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, wdev_address(wdev)) ||
            nla_put_u32(msg, NL80211_ATTR_GENERATION,
                        rdev->devlist_generation ^
@@ -6874,7 +6875,8 @@ static int nl80211_send_bss(struct sk_buff *msg, struct netlink_callback *cb,
        if (wdev->netdev &&
            nla_put_u32(msg, NL80211_ATTR_IFINDEX, wdev->netdev->ifindex))
                goto nla_put_failure;
-       if (nla_put_u64(msg, NL80211_ATTR_WDEV, wdev_id(wdev)))
+       if (nla_put_u64_64bit(msg, NL80211_ATTR_WDEV, wdev_id(wdev),
+                             NL80211_ATTR_PAD))
                goto nla_put_failure;
 
        bss = nla_nest_start(msg, NL80211_ATTR_BSS);
@@ -6895,7 +6897,8 @@ static int nl80211_send_bss(struct sk_buff *msg, struct netlink_callback *cb,
         */
        ies = rcu_dereference(res->ies);
        if (ies) {
-               if (nla_put_u64(msg, NL80211_BSS_TSF, ies->tsf))
+               if (nla_put_u64_64bit(msg, NL80211_BSS_TSF, ies->tsf,
+                                     NL80211_BSS_PAD))
                        goto fail_unlock_rcu;
                if (ies->len && nla_put(msg, NL80211_BSS_INFORMATION_ELEMENTS,
                                        ies->len, ies->data))
@@ -6905,7 +6908,8 @@ static int nl80211_send_bss(struct sk_buff *msg, struct netlink_callback *cb,
        /* and this pointer is always (unless driver didn't know) beacon data */
        ies = rcu_dereference(res->beacon_ies);
        if (ies && ies->from_beacon) {
-               if (nla_put_u64(msg, NL80211_BSS_BEACON_TSF, ies->tsf))
+               if (nla_put_u64_64bit(msg, NL80211_BSS_BEACON_TSF, ies->tsf,
+                                     NL80211_BSS_PAD))
                        goto fail_unlock_rcu;
                if (ies->len && nla_put(msg, NL80211_BSS_BEACON_IES,
                                        ies->len, ies->data))
@@ -6924,8 +6928,8 @@ static int nl80211_send_bss(struct sk_buff *msg, struct netlink_callback *cb,
                goto nla_put_failure;
 
        if (intbss->ts_boottime &&
-           nla_put_u64(msg, NL80211_BSS_LAST_SEEN_BOOTTIME,
-                       intbss->ts_boottime))
+           nla_put_u64_64bit(msg, NL80211_BSS_LAST_SEEN_BOOTTIME,
+                             intbss->ts_boottime, NL80211_BSS_PAD))
                goto nla_put_failure;
 
        switch (rdev->wiphy.signal_type) {
@@ -7045,28 +7049,28 @@ static int nl80211_send_survey(struct sk_buff *msg, u32 portid, u32 seq,
            nla_put_flag(msg, NL80211_SURVEY_INFO_IN_USE))
                goto nla_put_failure;
        if ((survey->filled & SURVEY_INFO_TIME) &&
-           nla_put_u64(msg, NL80211_SURVEY_INFO_TIME,
-                       survey->time))
+           nla_put_u64_64bit(msg, NL80211_SURVEY_INFO_TIME,
+                             survey->time, NL80211_SURVEY_INFO_PAD))
                goto nla_put_failure;
        if ((survey->filled & SURVEY_INFO_TIME_BUSY) &&
-           nla_put_u64(msg, NL80211_SURVEY_INFO_TIME_BUSY,
-                       survey->time_busy))
+           nla_put_u64_64bit(msg, NL80211_SURVEY_INFO_TIME_BUSY,
+                             survey->time_busy, NL80211_SURVEY_INFO_PAD))
                goto nla_put_failure;
        if ((survey->filled & SURVEY_INFO_TIME_EXT_BUSY) &&
-           nla_put_u64(msg, NL80211_SURVEY_INFO_TIME_EXT_BUSY,
-                       survey->time_ext_busy))
+           nla_put_u64_64bit(msg, NL80211_SURVEY_INFO_TIME_EXT_BUSY,
+                             survey->time_ext_busy, NL80211_SURVEY_INFO_PAD))
                goto nla_put_failure;
        if ((survey->filled & SURVEY_INFO_TIME_RX) &&
-           nla_put_u64(msg, NL80211_SURVEY_INFO_TIME_RX,
-                       survey->time_rx))
+           nla_put_u64_64bit(msg, NL80211_SURVEY_INFO_TIME_RX,
+                             survey->time_rx, NL80211_SURVEY_INFO_PAD))
                goto nla_put_failure;
        if ((survey->filled & SURVEY_INFO_TIME_TX) &&
-           nla_put_u64(msg, NL80211_SURVEY_INFO_TIME_TX,
-                       survey->time_tx))
+           nla_put_u64_64bit(msg, NL80211_SURVEY_INFO_TIME_TX,
+                             survey->time_tx, NL80211_SURVEY_INFO_PAD))
                goto nla_put_failure;
        if ((survey->filled & SURVEY_INFO_TIME_SCAN) &&
-           nla_put_u64(msg, NL80211_SURVEY_INFO_TIME_SCAN,
-                       survey->time_scan))
+           nla_put_u64_64bit(msg, NL80211_SURVEY_INFO_TIME_SCAN,
+                             survey->time_scan, NL80211_SURVEY_INFO_PAD))
                goto nla_put_failure;
 
        nla_nest_end(msg, infoattr);
@@ -7786,8 +7790,8 @@ __cfg80211_alloc_vendor_skb(struct cfg80211_registered_device *rdev,
        }
 
        if (wdev) {
-               if (nla_put_u64(skb, NL80211_ATTR_WDEV,
-                               wdev_id(wdev)))
+               if (nla_put_u64_64bit(skb, NL80211_ATTR_WDEV,
+                                     wdev_id(wdev), NL80211_ATTR_PAD))
                        goto nla_put_failure;
                if (wdev->netdev &&
                    nla_put_u32(skb, NL80211_ATTR_IFINDEX,
@@ -8380,7 +8384,8 @@ static int nl80211_remain_on_channel(struct sk_buff *skb,
        if (err)
                goto free_msg;
 
-       if (nla_put_u64(msg, NL80211_ATTR_COOKIE, cookie))
+       if (nla_put_u64_64bit(msg, NL80211_ATTR_COOKIE, cookie,
+                             NL80211_ATTR_PAD))
                goto nla_put_failure;
 
        genlmsg_end(msg, hdr);
@@ -8792,7 +8797,8 @@ static int nl80211_tx_mgmt(struct sk_buff *skb, struct genl_info *info)
                goto free_msg;
 
        if (msg) {
-               if (nla_put_u64(msg, NL80211_ATTR_COOKIE, cookie))
+               if (nla_put_u64_64bit(msg, NL80211_ATTR_COOKIE, cookie,
+                                     NL80211_ATTR_PAD))
                        goto nla_put_failure;
 
                genlmsg_end(msg, hdr);
@@ -10078,7 +10084,8 @@ static int nl80211_probe_client(struct sk_buff *skb,
        if (err)
                goto free_msg;
 
-       if (nla_put_u64(msg, NL80211_ATTR_COOKIE, cookie))
+       if (nla_put_u64_64bit(msg, NL80211_ATTR_COOKIE, cookie,
+                             NL80211_ATTR_PAD))
                goto nla_put_failure;
 
        genlmsg_end(msg, hdr);
@@ -10503,8 +10510,9 @@ static int nl80211_vendor_cmd_dump(struct sk_buff *skb,
                        break;
 
                if (nla_put_u32(skb, NL80211_ATTR_WIPHY, rdev->wiphy_idx) ||
-                   (wdev && nla_put_u64(skb, NL80211_ATTR_WDEV,
-                                        wdev_id(wdev)))) {
+                   (wdev && nla_put_u64_64bit(skb, NL80211_ATTR_WDEV,
+                                              wdev_id(wdev),
+                                              NL80211_ATTR_PAD))) {
                        genlmsg_cancel(skb, hdr);
                        break;
                }
@@ -11711,7 +11719,8 @@ static int nl80211_send_scan_msg(struct sk_buff *msg,
        if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) ||
            (wdev->netdev && nla_put_u32(msg, NL80211_ATTR_IFINDEX,
                                         wdev->netdev->ifindex)) ||
-           nla_put_u64(msg, NL80211_ATTR_WDEV, wdev_id(wdev)))
+           nla_put_u64_64bit(msg, NL80211_ATTR_WDEV, wdev_id(wdev),
+                             NL80211_ATTR_PAD))
                goto nla_put_failure;
 
        /* ignore errors and send incomplete event anyway */
@@ -12378,11 +12387,13 @@ static void nl80211_send_remain_on_chan_event(
        if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) ||
            (wdev->netdev && nla_put_u32(msg, NL80211_ATTR_IFINDEX,
                                         wdev->netdev->ifindex)) ||
-           nla_put_u64(msg, NL80211_ATTR_WDEV, wdev_id(wdev)) ||
+           nla_put_u64_64bit(msg, NL80211_ATTR_WDEV, wdev_id(wdev),
+                             NL80211_ATTR_PAD) ||
            nla_put_u32(msg, NL80211_ATTR_WIPHY_FREQ, chan->center_freq) ||
            nla_put_u32(msg, NL80211_ATTR_WIPHY_CHANNEL_TYPE,
                        NL80211_CHAN_NO_HT) ||
-           nla_put_u64(msg, NL80211_ATTR_COOKIE, cookie))
+           nla_put_u64_64bit(msg, NL80211_ATTR_COOKIE, cookie,
+                             NL80211_ATTR_PAD))
                goto nla_put_failure;
 
        if (cmd == NL80211_CMD_REMAIN_ON_CHANNEL &&
@@ -12616,7 +12627,8 @@ int nl80211_send_mgmt(struct cfg80211_registered_device *rdev,
        if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) ||
            (netdev && nla_put_u32(msg, NL80211_ATTR_IFINDEX,
                                        netdev->ifindex)) ||
-           nla_put_u64(msg, NL80211_ATTR_WDEV, wdev_id(wdev)) ||
+           nla_put_u64_64bit(msg, NL80211_ATTR_WDEV, wdev_id(wdev),
+                             NL80211_ATTR_PAD) ||
            nla_put_u32(msg, NL80211_ATTR_WIPHY_FREQ, freq) ||
            (sig_dbm &&
             nla_put_u32(msg, NL80211_ATTR_RX_SIGNAL_DBM, sig_dbm)) ||
@@ -12659,9 +12671,11 @@ void cfg80211_mgmt_tx_status(struct wireless_dev *wdev, u64 cookie,
        if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) ||
            (netdev && nla_put_u32(msg, NL80211_ATTR_IFINDEX,
                                   netdev->ifindex)) ||
-           nla_put_u64(msg, NL80211_ATTR_WDEV, wdev_id(wdev)) ||
+           nla_put_u64_64bit(msg, NL80211_ATTR_WDEV, wdev_id(wdev),
+                             NL80211_ATTR_PAD) ||
            nla_put(msg, NL80211_ATTR_FRAME, len, buf) ||
-           nla_put_u64(msg, NL80211_ATTR_COOKIE, cookie) ||
+           nla_put_u64_64bit(msg, NL80211_ATTR_COOKIE, cookie,
+                             NL80211_ATTR_PAD) ||
            (ack && nla_put_flag(msg, NL80211_ATTR_ACK)))
                goto nla_put_failure;
 
@@ -13041,7 +13055,8 @@ nl80211_radar_notify(struct cfg80211_registered_device *rdev,
                struct wireless_dev *wdev = netdev->ieee80211_ptr;
 
                if (nla_put_u32(msg, NL80211_ATTR_IFINDEX, netdev->ifindex) ||
-                   nla_put_u64(msg, NL80211_ATTR_WDEV, wdev_id(wdev)))
+                   nla_put_u64_64bit(msg, NL80211_ATTR_WDEV, wdev_id(wdev),
+                                     NL80211_ATTR_PAD))
                        goto nla_put_failure;
        }
 
@@ -13086,7 +13101,8 @@ void cfg80211_probe_status(struct net_device *dev, const u8 *addr,
        if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) ||
            nla_put_u32(msg, NL80211_ATTR_IFINDEX, dev->ifindex) ||
            nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, addr) ||
-           nla_put_u64(msg, NL80211_ATTR_COOKIE, cookie) ||
+           nla_put_u64_64bit(msg, NL80211_ATTR_COOKIE, cookie,
+                             NL80211_ATTR_PAD) ||
            (acked && nla_put_flag(msg, NL80211_ATTR_ACK)))
                goto nla_put_failure;
 
@@ -13231,7 +13247,8 @@ void cfg80211_report_wowlan_wakeup(struct wireless_dev *wdev,
                goto free_msg;
 
        if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) ||
-           nla_put_u64(msg, NL80211_ATTR_WDEV, wdev_id(wdev)))
+           nla_put_u64_64bit(msg, NL80211_ATTR_WDEV, wdev_id(wdev),
+                             NL80211_ATTR_PAD))
                goto free_msg;
 
        if (wdev->netdev && nla_put_u32(msg, NL80211_ATTR_IFINDEX,
@@ -13506,7 +13523,8 @@ void cfg80211_crit_proto_stopped(struct wireless_dev *wdev, gfp_t gfp)
                goto nla_put_failure;
 
        if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) ||
-           nla_put_u64(msg, NL80211_ATTR_WDEV, wdev_id(wdev)))
+           nla_put_u64_64bit(msg, NL80211_ATTR_WDEV, wdev_id(wdev),
+                             NL80211_ATTR_PAD))
                goto nla_put_failure;
 
        genlmsg_end(msg, hdr);
@@ -13539,7 +13557,8 @@ void nl80211_send_ap_stopped(struct wireless_dev *wdev)
 
        if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) ||
            nla_put_u32(msg, NL80211_ATTR_IFINDEX, wdev->netdev->ifindex) ||
-           nla_put_u64(msg, NL80211_ATTR_WDEV, wdev_id(wdev)))
+           nla_put_u64_64bit(msg, NL80211_ATTR_WDEV, wdev_id(wdev),
+                             NL80211_ATTR_PAD))
                goto out;
 
        genlmsg_end(msg, hdr);
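All of the conversions above follow one pattern: nla_put_u64() gives no alignment guarantee for the 8-byte payload, while nla_put_u64_64bit() inserts a zero-length pad attribute (here NL80211_ATTR_PAD / NL80211_BSS_PAD / NL80211_SURVEY_INFO_PAD) when needed so the u64 lands on an 8-byte boundary. A minimal sketch of the call shape, with hypothetical attribute names:

/* Sketch: emitting a 64-bit counter into a netlink message.
 * MYDRV_ATTR_RX_BYTES and MYDRV_ATTR_PAD are hypothetical; the pad
 * attribute must be one the parser knows to ignore.
 */
static int put_rx_bytes(struct sk_buff *msg, u64 rx_bytes)
{
	return nla_put_u64_64bit(msg, MYDRV_ATTR_RX_BYTES, rx_bytes,
				 MYDRV_ATTR_PAD);
}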
index 744dd7a..0bf2478 100644 (file)
@@ -60,6 +60,7 @@ always += spintest_kern.o
 always += map_perf_test_kern.o
 always += test_overhead_tp_kern.o
 always += test_overhead_kprobe_kern.o
+always += parse_varlen.o parse_simple.o parse_ldabs.o
 
 HOSTCFLAGS += -I$(objtree)/usr/include
 
@@ -81,10 +82,44 @@ HOSTLOADLIBES_spintest += -lelf
 HOSTLOADLIBES_map_perf_test += -lelf -lrt
 HOSTLOADLIBES_test_overhead += -lelf -lrt
 
+# Allows pointing LLC/CLANG to an LLVM backend with bpf support; redefine on cmdline:
+#  make samples/bpf/ LLC=~/git/llvm/build/bin/llc CLANG=~/git/llvm/build/bin/clang
+LLC ?= llc
+CLANG ?= clang
+
+# Trick to allow make to be run from this directory
+all:
+       $(MAKE) -C ../../ $$PWD/
+
+clean:
+       $(MAKE) -C ../../ M=$$PWD clean
+       @rm -f *~
+
+# Verify LLVM compiler tools are available and bpf target is supported by llc
+.PHONY: verify_cmds verify_target_bpf $(CLANG) $(LLC)
+
+verify_cmds: $(CLANG) $(LLC)
+       @for TOOL in $^ ; do \
+               if ! (which -- "$${TOOL}" > /dev/null 2>&1); then \
+                       echo "*** ERROR: Cannot find LLVM tool $${TOOL}" ;\
+                       exit 1; \
+               else true; fi; \
+       done
+
+verify_target_bpf: verify_cmds
+       @if ! (${LLC} -march=bpf -mattr=help > /dev/null 2>&1); then \
+               echo "*** ERROR: LLVM (${LLC}) does not support 'bpf' target" ;\
+               echo "   NOTICE: LLVM version >= 3.7.1 required" ;\
+               exit 2; \
+       else true; fi
+
+$(src)/*.c: verify_target_bpf
+
 # asm/sysreg.h - inline assembly used by it is incompatible with llvm.
 # But, there is no easy way to fix it, so just exclude it since it is
 # useless for BPF samples.
 $(obj)/%.o: $(src)/%.c
-       clang $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) \
+       $(CLANG) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) \
                -D__KERNEL__ -D__ASM_SYSREG_H -Wno-unused-value -Wno-pointer-sign \
-               -O2 -emit-llvm -c $< -o -| llc -march=bpf -filetype=obj -o $@
+               -Wno-compare-distinct-pointer-types \
+               -O2 -emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=obj -o $@
diff --git a/samples/bpf/README.rst b/samples/bpf/README.rst
new file mode 100644 (file)
index 0000000..a43eae3
--- /dev/null
@@ -0,0 +1,66 @@
+eBPF sample programs
+====================
+
+This directory contains a mini eBPF library, test stubs, verifier
+test-suite and examples for using eBPF.
+
+Build dependencies
+==================
+
+Compiling requires having installed:
+ * clang >= version 3.4.0
+ * llvm >= version 3.7.1
+
+Note that LLVM's tool 'llc' must support target 'bpf'; list the
+version and supported targets with: ``llc --version``
+
+Kernel headers
+--------------
+
+There are usually dependencies on header files of the current kernel.
+To avoid installing kernel devel headers system wide, simply call,
+as a normal user::
+
+ make headers_install
+
+This creates a local "usr/include" directory in the git/build top
+level directory, which the make system automatically picks up first.
+
+Compiling
+=========
+
+For building the BPF samples, issue the following command from the
+kernel top level directory::
+
+ make samples/bpf/
+
+Note the trailing "/" after the directory name.
+
+It is also possible to call make from this directory.  This simply
+wraps the invocation of make above, with the "/" appended.
+
+Manually compiling LLVM with 'bpf' support
+------------------------------------------
+
+Since version 3.7.0, LLVM has included a proper backend target for
+the BPF bytecode architecture.
+
+By default LLVM builds all non-experimental backends, including bpf.
+To generate a smaller llc binary, one can use::
+
+ -DLLVM_TARGETS_TO_BUILD="BPF"
+
+Quick snippet for manually compiling LLVM and clang
+(build dependencies are cmake and gcc-c++)::
+
+ $ git clone http://llvm.org/git/llvm.git
+ $ cd llvm/tools
+ $ git clone --depth 1 http://llvm.org/git/clang.git
+ $ cd ..; mkdir build; cd build
+ $ cmake .. -DLLVM_TARGETS_TO_BUILD="BPF;X86"
+ $ make -j $(getconf _NPROCESSORS_ONLN)
+
+It is also possible to point make to the newly compiled 'llc' or
+'clang' command by redefining LLC or CLANG on the make command line::
+
+ make samples/bpf/ LLC=~/git/llvm/build/bin/llc CLANG=~/git/llvm/build/bin/clang
diff --git a/samples/bpf/parse_ldabs.c b/samples/bpf/parse_ldabs.c
new file mode 100644 (file)
index 0000000..d175501
--- /dev/null
@@ -0,0 +1,41 @@
+/* Copyright (c) 2016 Facebook
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/in.h>
+#include <linux/tcp.h>
+#include <linux/udp.h>
+#include <uapi/linux/bpf.h>
+#include "bpf_helpers.h"
+
+#define DEFAULT_PKTGEN_UDP_PORT        9
+#define IP_MF                  0x2000
+#define IP_OFFSET              0x1FFF
+
+static inline int ip_is_fragment(struct __sk_buff *ctx, __u64 nhoff)
+{
+       return load_half(ctx, nhoff + offsetof(struct iphdr, frag_off))
+               & (IP_MF | IP_OFFSET);
+}
+
+SEC("ldabs")
+int handle_ingress(struct __sk_buff *skb)
+{
+       __u64 troff = ETH_HLEN + sizeof(struct iphdr);
+
+       if (load_half(skb, offsetof(struct ethhdr, h_proto)) != ETH_P_IP)
+               return 0;
+       if (load_byte(skb, ETH_HLEN + offsetof(struct iphdr, protocol)) != IPPROTO_UDP ||
+           load_byte(skb, ETH_HLEN) != 0x45)
+               return 0;
+       if (ip_is_fragment(skb, ETH_HLEN))
+               return 0;
+       if (load_half(skb, troff + offsetof(struct udphdr, dest)) == DEFAULT_PKTGEN_UDP_PORT)
+               return TC_ACT_SHOT;
+       return 0;
+}
+char _license[] SEC("license") = "GPL";
diff --git a/samples/bpf/parse_simple.c b/samples/bpf/parse_simple.c
new file mode 100644 (file)
index 0000000..cf2511c
--- /dev/null
@@ -0,0 +1,48 @@
+/* Copyright (c) 2016 Facebook
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/in.h>
+#include <linux/tcp.h>
+#include <linux/udp.h>
+#include <uapi/linux/bpf.h>
+#include <net/ip.h>
+#include "bpf_helpers.h"
+
+#define DEFAULT_PKTGEN_UDP_PORT 9
+
+/* copy of 'struct ethhdr' without __packed */
+struct eth_hdr {
+       unsigned char   h_dest[ETH_ALEN];
+       unsigned char   h_source[ETH_ALEN];
+       unsigned short  h_proto;
+};
+
+SEC("simple")
+int handle_ingress(struct __sk_buff *skb)
+{
+       void *data = (void *)(long)skb->data;
+       struct eth_hdr *eth = data;
+       struct iphdr *iph = data + sizeof(*eth);
+       struct udphdr *udp = data + sizeof(*eth) + sizeof(*iph);
+       void *data_end = (void *)(long)skb->data_end;
+
+       /* single length check */
+       if (data + sizeof(*eth) + sizeof(*iph) + sizeof(*udp) > data_end)
+               return 0;
+
+       if (eth->h_proto != htons(ETH_P_IP))
+               return 0;
+       if (iph->protocol != IPPROTO_UDP || iph->ihl != 5)
+               return 0;
+       if (ip_is_fragment(iph))
+               return 0;
+       if (udp->dest == htons(DEFAULT_PKTGEN_UDP_PORT))
+               return TC_ACT_SHOT;
+       return 0;
+}
+char _license[] SEC("license") = "GPL";
diff --git a/samples/bpf/parse_varlen.c b/samples/bpf/parse_varlen.c
new file mode 100644 (file)
index 0000000..edab34d
--- /dev/null
@@ -0,0 +1,153 @@
+/* Copyright (c) 2016 Facebook
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#include <linux/if_ether.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/in.h>
+#include <linux/tcp.h>
+#include <linux/udp.h>
+#include <uapi/linux/bpf.h>
+#include <net/ip.h>
+#include "bpf_helpers.h"
+
+#define DEFAULT_PKTGEN_UDP_PORT 9
+#define DEBUG 0
+
+static int tcp(void *data, uint64_t tp_off, void *data_end)
+{
+       struct tcphdr *tcp = data + tp_off;
+
+       if (tcp + 1 > data_end)
+               return 0;
+       if (tcp->dest == htons(80) || tcp->source == htons(80))
+               return TC_ACT_SHOT;
+       return 0;
+}
+
+static int udp(void *data, uint64_t tp_off, void *data_end)
+{
+       struct udphdr *udp = data + tp_off;
+
+       if (udp + 1 > data_end)
+               return 0;
+       if (udp->dest == htons(DEFAULT_PKTGEN_UDP_PORT) ||
+           udp->source == htons(DEFAULT_PKTGEN_UDP_PORT)) {
+               if (DEBUG) {
+                       char fmt[] = "udp port 9 indeed\n";
+
+                       bpf_trace_printk(fmt, sizeof(fmt));
+               }
+               return TC_ACT_SHOT;
+       }
+       return 0;
+}
+
+static int parse_ipv4(void *data, uint64_t nh_off, void *data_end)
+{
+       struct iphdr *iph;
+       uint64_t ihl_len;
+
+       iph = data + nh_off;
+       if (iph + 1 > data_end)
+               return 0;
+
+       if (ip_is_fragment(iph))
+               return 0;
+       ihl_len = iph->ihl * 4;
+
+       if (iph->protocol == IPPROTO_IPIP) {
+               iph = data + nh_off + ihl_len;
+               if (iph + 1 > data_end)
+                       return 0;
+               ihl_len += iph->ihl * 4;
+       }
+
+       if (iph->protocol == IPPROTO_TCP)
+               return tcp(data, nh_off + ihl_len, data_end);
+       else if (iph->protocol == IPPROTO_UDP)
+               return udp(data, nh_off + ihl_len, data_end);
+       return 0;
+}
+
+static int parse_ipv6(void *data, uint64_t nh_off, void *data_end)
+{
+       struct ipv6hdr *ip6h;
+       struct iphdr *iph;
+       uint64_t ihl_len = sizeof(struct ipv6hdr);
+       uint64_t nexthdr;
+
+       ip6h = data + nh_off;
+       if (ip6h + 1 > data_end)
+               return 0;
+
+       nexthdr = ip6h->nexthdr;
+
+       if (nexthdr == IPPROTO_IPIP) {
+               iph = data + nh_off + ihl_len;
+               if (iph + 1 > data_end)
+                       return 0;
+               ihl_len += iph->ihl * 4;
+               nexthdr = iph->protocol;
+       } else if (nexthdr == IPPROTO_IPV6) {
+               ip6h = data + nh_off + ihl_len;
+               if (ip6h + 1 > data_end)
+                       return 0;
+               ihl_len += sizeof(struct ipv6hdr);
+               nexthdr = ip6h->nexthdr;
+       }
+
+       if (nexthdr == IPPROTO_TCP)
+               return tcp(data, nh_off + ihl_len, data_end);
+       else if (nexthdr == IPPROTO_UDP)
+               return udp(data, nh_off + ihl_len, data_end);
+       return 0;
+}
+
+struct vlan_hdr {
+       uint16_t h_vlan_TCI;
+       uint16_t h_vlan_encapsulated_proto;
+};
+
+SEC("varlen")
+int handle_ingress(struct __sk_buff *skb)
+{
+       void *data = (void *)(long)skb->data;
+       struct ethhdr *eth = data;
+       void *data_end = (void *)(long)skb->data_end;
+       uint64_t h_proto, nh_off;
+
+       nh_off = sizeof(*eth);
+       if (data + nh_off > data_end)
+               return 0;
+
+       h_proto = eth->h_proto;
+
+       if (h_proto == htons(ETH_P_8021Q) || h_proto == htons(ETH_P_8021AD)) {
+               struct vlan_hdr *vhdr;
+
+               vhdr = data + nh_off;
+               nh_off += sizeof(struct vlan_hdr);
+               if (data + nh_off > data_end)
+                       return 0;
+               h_proto = vhdr->h_vlan_encapsulated_proto;
+       }
+       if (h_proto == htons(ETH_P_8021Q) || h_proto == htons(ETH_P_8021AD)) {
+               struct vlan_hdr *vhdr;
+
+               vhdr = data + nh_off;
+               nh_off += sizeof(struct vlan_hdr);
+               if (data + nh_off > data_end)
+                       return 0;
+               h_proto = vhdr->h_vlan_encapsulated_proto;
+       }
+       if (h_proto == htons(ETH_P_IP))
+               return parse_ipv4(data, nh_off, data_end);
+       else if (h_proto == htons(ETH_P_IPV6))
+               return parse_ipv6(data, nh_off, data_end);
+       return 0;
+}
+char _license[] SEC("license") = "GPL";
diff --git a/samples/bpf/test_cls_bpf.sh b/samples/bpf/test_cls_bpf.sh
new file mode 100755 (executable)
index 0000000..0365d5e
--- /dev/null
@@ -0,0 +1,37 @@
+#!/bin/bash
+
+function pktgen {
+    ../pktgen/pktgen_bench_xmit_mode_netif_receive.sh -i $IFC -s 64 \
+        -m 90:e2:ba:ff:ff:ff -d 192.168.0.1 -t 4
+    local dropped=`tc -s qdisc show dev $IFC | tail -3 | awk '/drop/{print $7}'`
+    if [ "$dropped" == "0," ]; then
+        echo "FAIL"
+    else
+        echo "Successfully filtered " $dropped " packets"
+    fi
+}
+
+function test {
+    echo -n "Loading bpf program '$2'... "
+    tc qdisc add dev $IFC clsact
+    tc filter add dev $IFC ingress bpf da obj $1 sec $2
+    local status=$?
+    if [ $status -ne 0 ]; then
+        echo "FAIL"
+    else
+        echo "ok"
+       pktgen
+    fi
+    tc qdisc del dev $IFC clsact
+}
+
+IFC=test_veth
+
+ip link add name $IFC type veth peer name pair_$IFC
+ip link set $IFC up
+ip link set pair_$IFC up
+
+test ./parse_simple.o simple
+test ./parse_varlen.o varlen
+test ./parse_ldabs.o ldabs
+ip link del dev $IFC
index 9eba8d1..fe2fcec 100644 (file)
@@ -1448,6 +1448,86 @@ static struct bpf_test tests[] = {
                .result = ACCEPT,
                .prog_type = BPF_PROG_TYPE_SCHED_CLS,
        },
+       {
+               "pkt: test1",
+               .insns = {
+                       BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+                                   offsetof(struct __sk_buff, data)),
+                       BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+                                   offsetof(struct __sk_buff, data_end)),
+                       BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+                       BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
+                       BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+                       BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
+                       BPF_MOV64_IMM(BPF_REG_0, 0),
+                       BPF_EXIT_INSN(),
+               },
+               .result = ACCEPT,
+               .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+       },
+       {
+               "pkt: test2",
+               .insns = {
+                       BPF_MOV64_IMM(BPF_REG_0, 1),
+                       BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
+                                   offsetof(struct __sk_buff, data_end)),
+                       BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+                                   offsetof(struct __sk_buff, data)),
+                       BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
+                       BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 14),
+                       BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_4, 15),
+                       BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_3, 7),
+                       BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_3, 12),
+                       BPF_ALU64_IMM(BPF_MUL, BPF_REG_4, 14),
+                       BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+                                   offsetof(struct __sk_buff, data)),
+                       BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_4),
+                       BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
+                       BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 48),
+                       BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 48),
+                       BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_2),
+                       BPF_MOV64_REG(BPF_REG_2, BPF_REG_3),
+                       BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8),
+                       BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
+                                   offsetof(struct __sk_buff, data_end)),
+                       BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
+                       BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_3, 4),
+                       BPF_MOV64_IMM(BPF_REG_0, 0),
+                       BPF_EXIT_INSN(),
+               },
+               .result = ACCEPT,
+               .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+       },
+       {
+               "pkt: test3",
+               .insns = {
+                       BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+                                   offsetof(struct __sk_buff, data)),
+                       BPF_MOV64_IMM(BPF_REG_0, 0),
+                       BPF_EXIT_INSN(),
+               },
+               .errstr = "invalid bpf_context access off=76",
+               .result = REJECT,
+               .prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
+       },
+       {
+               "pkt: test4",
+               .insns = {
+                       BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+                                   offsetof(struct __sk_buff, data)),
+                       BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+                                   offsetof(struct __sk_buff, data_end)),
+                       BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+                       BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
+                       BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+                       BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
+                       BPF_MOV64_IMM(BPF_REG_0, 0),
+                       BPF_EXIT_INSN(),
+               },
+               .errstr = "cannot write",
+               .result = REJECT,
+               .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+       },
 };
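For orientation, "pkt: test1" above is the canonical direct packet access pattern the verifier must accept; a rough C equivalent of the instruction sequence (sketch only):

/* Rough C equivalent of "pkt: test1": load data/data_end from the
 * context, bounds-check, then dereference only inside the check.
 */
void *data = (void *)(long)skb->data;
void *data_end = (void *)(long)skb->data_end;

if (data + 8 <= data_end)		/* inverse of the BPF_JGT guard */
	(void)*(unsigned char *)data;	/* the now-safe BPF_LDX byte load */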
 
 static int probe_filter_length(struct bpf_insn *fp)
index 8d8d1ec..9b96f4f 100644 (file)
@@ -18,7 +18,6 @@ int bpf_prog1(struct pt_regs *ctx)
                u64 cookie;
        } data;
 
-       memset(&data, 0, sizeof(data));
        data.pid = bpf_get_current_pid_tgid();
        data.cookie = 0x12345678;
 
index 023cc4c..626f3bb 100644 (file)
@@ -104,12 +104,11 @@ EXPORT_SYMBOL_GPL(snd_hdac_ext_stream_init_all);
  */
 void snd_hdac_stream_free_all(struct hdac_ext_bus *ebus)
 {
-       struct hdac_stream *s;
+       struct hdac_stream *s, *_s;
        struct hdac_ext_stream *stream;
        struct hdac_bus *bus = ebus_to_hbus(ebus);
 
-       while (!list_empty(&bus->stream_list)) {
-               s = list_first_entry(&bus->stream_list, struct hdac_stream, list);
+       list_for_each_entry_safe(s, _s, &bus->stream_list, list) {
                stream = stream_to_hdac_ext_stream(s);
                snd_hdac_ext_stream_decouple(ebus, stream, false);
                list_del(&s->list);
index d1a4d69..03c9872 100644 (file)
@@ -299,13 +299,11 @@ EXPORT_SYMBOL_GPL(_snd_hdac_read_parm);
 int snd_hdac_read_parm_uncached(struct hdac_device *codec, hda_nid_t nid,
                                int parm)
 {
-       int val;
+       unsigned int cmd, val;
 
-       if (codec->regmap)
-               regcache_cache_bypass(codec->regmap, true);
-       val = snd_hdac_read_parm(codec, nid, parm);
-       if (codec->regmap)
-               regcache_cache_bypass(codec->regmap, false);
+       cmd = snd_hdac_regmap_encode_verb(nid, AC_VERB_PARAMETERS) | parm;
+       if (snd_hdac_regmap_read_raw_uncached(codec, cmd, &val) < 0)
+               return -1;
        return val;
 }
 EXPORT_SYMBOL_GPL(snd_hdac_read_parm_uncached);
index 54babe1..607bbea 100644 (file)
@@ -20,6 +20,7 @@
 #include <sound/core.h>
 #include <sound/hdaudio.h>
 #include <sound/hda_i915.h>
+#include <sound/hda_register.h>
 
 static struct i915_audio_component *hdac_acomp;
 
@@ -97,26 +98,65 @@ int snd_hdac_display_power(struct hdac_bus *bus, bool enable)
 }
 EXPORT_SYMBOL_GPL(snd_hdac_display_power);
 
+#define CONTROLLER_IN_GPU(pci) (((pci)->device == 0x0a0c) || \
+                               ((pci)->device == 0x0c0c) || \
+                               ((pci)->device == 0x0d0c) || \
+                               ((pci)->device == 0x160c))
+
 /**
- * snd_hdac_get_display_clk - Get CDCLK in kHz
+ * snd_hdac_i915_set_bclk - Reprogram BCLK for HSW/BDW
  * @bus: HDA core bus
  *
- * This function is supposed to be used only by a HD-audio controller
- * driver that needs the interaction with i915 graphics.
+ * The Intel HSW/BDW display HDA controller is in the GPU. Both its power
+ * and link BCLK depend on the GPU. Two Extended Mode registers, EM4 (M
+ * value) and EM5 (N value), are used to convert CDCLK (Core Display Clock)
+ * to the 24MHz BCLK:
+ * BCLK = CDCLK * M / N
+ * The values are lost when the display power well is disabled and need to
+ * be restored to avoid abnormal playback speed.
  *
- * This function queries CDCLK value in kHz from the graphics driver and
- * returns the value.  A negative code is returned in error.
+ * Call this function when initializing or changing the power well, as well
+ * as from the ELD notifier on hotplug.
  */
-int snd_hdac_get_display_clk(struct hdac_bus *bus)
+void snd_hdac_i915_set_bclk(struct hdac_bus *bus)
 {
        struct i915_audio_component *acomp = bus->audio_component;
+       struct pci_dev *pci = to_pci_dev(bus->dev);
+       int cdclk_freq;
+       unsigned int bclk_m, bclk_n;
+
+       if (!acomp || !acomp->ops || !acomp->ops->get_cdclk_freq)
+               return; /* only for i915 binding */
+       if (!CONTROLLER_IN_GPU(pci))
+               return; /* only HSW/BDW */
+
+       cdclk_freq = acomp->ops->get_cdclk_freq(acomp->dev);
+       switch (cdclk_freq) {
+       case 337500:
+               bclk_m = 16;
+               bclk_n = 225;
+               break;
+
+       case 450000:
+       default: /* default CDCLK 450MHz */
+               bclk_m = 4;
+               bclk_n = 75;
+               break;
+
+       case 540000:
+               bclk_m = 4;
+               bclk_n = 90;
+               break;
+
+       case 675000:
+               bclk_m = 8;
+               bclk_n = 225;
+               break;
+       }
 
-       if (!acomp || !acomp->ops)
-               return -ENODEV;
-
-       return acomp->ops->get_cdclk_freq(acomp->dev);
+       snd_hdac_chip_writew(bus, HSW_EM4, bclk_m);
+       snd_hdac_chip_writew(bus, HSW_EM5, bclk_n);
 }
-EXPORT_SYMBOL_GPL(snd_hdac_get_display_clk);
+EXPORT_SYMBOL_GPL(snd_hdac_i915_set_bclk);
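
Note: every M/N pair in the switch satisfies BCLK = CDCLK * M / N = 24MHz, which a quick standalone check confirms (plain C, values copied from the table above):

    #include <stdio.h>

    int main(void)
    {
            /* CDCLK in kHz and the EM4/EM5 (M, N) pairs from above */
            static const struct { int cdclk, m, n; } tbl[] = {
                    { 337500, 16, 225 }, { 450000, 4,  75 },
                    { 540000,  4,  90 }, { 675000, 8, 225 },
            };

            for (unsigned int i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++)
                    printf("CDCLK %6d kHz -> BCLK %d kHz\n", tbl[i].cdclk,
                           tbl[i].cdclk * tbl[i].m / tbl[i].n);
            return 0;       /* every row prints 24000 kHz, i.e. 24MHz */
    }
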
 
 /* There is a fixed mapping between audio pin node and display port
  * on current Intel platforms:
index bdbcd6b..87041dd 100644 (file)
@@ -453,14 +453,30 @@ int snd_hdac_regmap_write_raw(struct hdac_device *codec, unsigned int reg,
 EXPORT_SYMBOL_GPL(snd_hdac_regmap_write_raw);
 
 static int reg_raw_read(struct hdac_device *codec, unsigned int reg,
-                       unsigned int *val)
+                       unsigned int *val, bool uncached)
 {
-       if (!codec->regmap)
+       if (uncached || !codec->regmap)
                return hda_reg_read(codec, reg, val);
        else
                return regmap_read(codec->regmap, reg, val);
 }
 
+static int __snd_hdac_regmap_read_raw(struct hdac_device *codec,
+                                     unsigned int reg, unsigned int *val,
+                                     bool uncached)
+{
+       int err;
+
+       err = reg_raw_read(codec, reg, val, uncached);
+       if (err == -EAGAIN) {
+               err = snd_hdac_power_up_pm(codec);
+               if (!err)
+                       err = reg_raw_read(codec, reg, val, uncached);
+               snd_hdac_power_down_pm(codec);
+       }
+       return err;
+}
+
 /**
  * snd_hdac_regmap_read_raw - read a pseudo register with power mgmt
  * @codec: the codec object
@@ -472,19 +488,19 @@ static int reg_raw_read(struct hdac_device *codec, unsigned int reg,
 int snd_hdac_regmap_read_raw(struct hdac_device *codec, unsigned int reg,
                             unsigned int *val)
 {
-       int err;
-
-       err = reg_raw_read(codec, reg, val);
-       if (err == -EAGAIN) {
-               err = snd_hdac_power_up_pm(codec);
-               if (!err)
-                       err = reg_raw_read(codec, reg, val);
-               snd_hdac_power_down_pm(codec);
-       }
-       return err;
+       return __snd_hdac_regmap_read_raw(codec, reg, val, false);
 }
 EXPORT_SYMBOL_GPL(snd_hdac_regmap_read_raw);
 
+/* Works like snd_hdac_regmap_read_raw(), but bypasses the regmap cache and
+ * always reads via HDA verbs.
+ */
+int snd_hdac_regmap_read_raw_uncached(struct hdac_device *codec,
+                                     unsigned int reg, unsigned int *val)
+{
+       return __snd_hdac_regmap_read_raw(codec, reg, val, true);
+}
+
 /**
  * snd_hdac_regmap_update_raw - update a pseudo register with power mgmt
  * @codec: the codec object
index 7ca5b89..dfaf1a9 100644 (file)
@@ -826,7 +826,7 @@ static hda_nid_t path_power_update(struct hda_codec *codec,
                                   bool allow_powerdown)
 {
        hda_nid_t nid, changed = 0;
-       int i, state;
+       int i, state, power;
 
        for (i = 0; i < path->depth; i++) {
                nid = path->path[i];
@@ -838,7 +838,9 @@ static hda_nid_t path_power_update(struct hda_codec *codec,
                        state = AC_PWRST_D0;
                else
                        state = AC_PWRST_D3;
-               if (!snd_hda_check_power_state(codec, nid, state)) {
+               power = snd_hda_codec_read(codec, nid, 0,
+                                          AC_VERB_GET_POWER_STATE, 0);
+               if (power != (state | (state << 4))) {
                        snd_hda_codec_write(codec, nid, 0,
                                            AC_VERB_SET_POWER_STATE, state);
                        changed = nid;
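
Note: the raw comparison replaces snd_hda_check_power_state() so that both halves of the response are checked: per the HDA convention, a GET_POWER_STATE response carries the programmed state in bits 3:0 and the actual state in bits 7:4, so a node has settled only when the response equals state | (state << 4). A small standalone illustration (a sketch; the response values are hypothetical):

    #include <stdio.h>

    /* response layout: programmed state in bits 3:0, actual in bits 7:4 */
    static int power_settled(unsigned int response, unsigned int state)
    {
            return response == (state | (state << 4));
    }

    int main(void)
    {
            printf("%d\n", power_settled(0x00, 0)); /* D0 set+reached: 1 */
            printf("%d\n", power_settled(0x03, 3)); /* D3 set, still D0: 0 */
            printf("%d\n", power_settled(0x33, 3)); /* D3 set+reached: 1 */
            return 0;
    }
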
index b680be0..9a0d144 100644 (file)
@@ -857,50 +857,6 @@ static int param_set_xint(const char *val, const struct kernel_param *kp)
 #define azx_del_card_list(chip) /* NOP */
 #endif /* CONFIG_PM */
 
-/* Intel HSW/BDW display HDA controller is in GPU. Both its power and link BCLK
- * depends on GPU. Two Extended Mode registers EM4 (M value) and EM5 (N Value)
- * are used to convert CDClk (Core Display Clock) to 24MHz BCLK:
- * BCLK = CDCLK * M / N
- * The values will be lost when the display power well is disabled and need to
- * be restored to avoid abnormal playback speed.
- */
-static void haswell_set_bclk(struct hda_intel *hda)
-{
-       struct azx *chip = &hda->chip;
-       int cdclk_freq;
-       unsigned int bclk_m, bclk_n;
-
-       if (!hda->need_i915_power)
-               return;
-
-       cdclk_freq = snd_hdac_get_display_clk(azx_bus(chip));
-       switch (cdclk_freq) {
-       case 337500:
-               bclk_m = 16;
-               bclk_n = 225;
-               break;
-
-       case 450000:
-       default: /* default CDCLK 450MHz */
-               bclk_m = 4;
-               bclk_n = 75;
-               break;
-
-       case 540000:
-               bclk_m = 4;
-               bclk_n = 90;
-               break;
-
-       case 675000:
-               bclk_m = 8;
-               bclk_n = 225;
-               break;
-       }
-
-       azx_writew(chip, HSW_EM4, bclk_m);
-       azx_writew(chip, HSW_EM5, bclk_n);
-}
-
 #if defined(CONFIG_PM_SLEEP) || defined(SUPPORT_VGA_SWITCHEROO)
 /*
  * power management
@@ -958,7 +914,7 @@ static int azx_resume(struct device *dev)
        if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL
                && hda->need_i915_power) {
                snd_hdac_display_power(azx_bus(chip), true);
-               haswell_set_bclk(hda);
+               snd_hdac_i915_set_bclk(azx_bus(chip));
        }
        if (chip->msi)
                if (pci_enable_msi(pci) < 0)
@@ -1058,7 +1014,7 @@ static int azx_runtime_resume(struct device *dev)
                bus = azx_bus(chip);
                if (hda->need_i915_power) {
                        snd_hdac_display_power(bus, true);
-                       haswell_set_bclk(hda);
+                       snd_hdac_i915_set_bclk(bus);
                } else {
                        /* toggle codec wakeup bit for STATESTS read */
                        snd_hdac_set_codec_wakeup(bus, true);
@@ -1796,12 +1752,8 @@ static int azx_first_init(struct azx *chip)
        /* initialize chip */
        azx_init_pci(chip);
 
-       if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) {
-               struct hda_intel *hda;
-
-               hda = container_of(chip, struct hda_intel, chip);
-               haswell_set_bclk(hda);
-       }
+       if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL)
+               snd_hdac_i915_set_bclk(bus);
 
        hda_intel_init_chip(chip, (probe_only[dev] & 2) == 0);
 
@@ -2232,6 +2184,9 @@ static const struct pci_device_id azx_ids[] = {
        /* Broxton-P(Apollolake) */
        { PCI_DEVICE(0x8086, 0x5a98),
          .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_BROXTON },
+       /* Broxton-T */
+       { PCI_DEVICE(0x8086, 0x1a98),
+         .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_BROXTON },
        /* Haswell */
        { PCI_DEVICE(0x8086, 0x0a0c),
          .driver_data = AZX_DRIVER_HDMI | AZX_DCAPS_INTEL_HASWELL },
index a47e8ae..80bbadc 100644 (file)
@@ -361,6 +361,7 @@ static int cs_parse_auto_config(struct hda_codec *codec)
 {
        struct cs_spec *spec = codec->spec;
        int err;
+       int i;
 
        err = snd_hda_parse_pin_defcfg(codec, &spec->gen.autocfg, NULL, 0);
        if (err < 0)
@@ -370,6 +371,19 @@ static int cs_parse_auto_config(struct hda_codec *codec)
        if (err < 0)
                return err;
 
+       /* keep the ADCs powered up when it's dynamically switchable */
+       if (spec->gen.dyn_adc_switch) {
+               unsigned int done = 0;
+               for (i = 0; i < spec->gen.input_mux.num_items; i++) {
+                       int idx = spec->gen.dyn_adc_idx[i];
+                       if (done & (1 << idx))
+                               continue;
+                       snd_hda_gen_fix_pin_power(codec,
+                                                 spec->gen.adc_nids[idx]);
+                       done |= 1 << idx;
+               }
+       }
+
        return 0;
 }
 
index c83c1a8..1483f85 100644 (file)
@@ -1858,6 +1858,8 @@ static void hdmi_set_chmap(struct hdac_device *hdac, int pcm_idx,
        struct hdmi_spec *spec = codec->spec;
        struct hdmi_spec_per_pin *per_pin = pcm_idx_to_pin(spec, pcm_idx);
 
+       if (!per_pin)
+               return;
        mutex_lock(&per_pin->lock);
        per_pin->chmap_set = true;
        memcpy(per_pin->chmap, chmap, ARRAY_SIZE(per_pin->chmap));
@@ -2230,6 +2232,7 @@ static void intel_pin_eld_notify(void *audio_ptr, int port)
        if (atomic_read(&(codec)->core.in_pm))
                return;
 
+       snd_hdac_i915_set_bclk(&codec->bus->core);
        check_presence_and_report(codec, pin_nid);
 }
 
index 1402ba9..ac4490a 100644 (file)
@@ -5449,6 +5449,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
        SND_PCI_QUIRK(0x1028, 0x064a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x1028, 0x064b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x1028, 0x0665, "Dell XPS 13", ALC288_FIXUP_DELL_XPS_13),
+       SND_PCI_QUIRK(0x1028, 0x0669, "Dell Optiplex 9020m", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x1028, 0x069a, "Dell Vostro 5480", ALC290_FIXUP_SUBWOOFER_HSJACK),
        SND_PCI_QUIRK(0x1028, 0x06c7, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE),
        SND_PCI_QUIRK(0x1028, 0x06d9, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
@@ -5583,6 +5584,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
        SND_PCI_QUIRK(0x17aa, 0x5034, "Thinkpad T450", ALC292_FIXUP_TPT440_DOCK),
        SND_PCI_QUIRK(0x17aa, 0x5036, "Thinkpad T450s", ALC292_FIXUP_TPT440_DOCK),
        SND_PCI_QUIRK(0x17aa, 0x503c, "Thinkpad L450", ALC292_FIXUP_TPT440_DOCK),
+       SND_PCI_QUIRK(0x17aa, 0x504a, "ThinkPad X260", ALC292_FIXUP_TPT440_DOCK),
        SND_PCI_QUIRK(0x17aa, 0x504b, "Thinkpad", ALC293_FIXUP_LENOVO_SPK_NOISE),
        SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
        SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
index c5194f5..d7e71f3 100644 (file)
@@ -1341,5 +1341,6 @@ irqreturn_t pcxhr_threaded_irq(int irq, void *dev_id)
        }
 
        pcxhr_msg_thread(mgr);
+       mutex_unlock(&mgr->lock);
        return IRQ_HANDLED;
 }
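
Note: the threaded handler takes mgr->lock earlier in the function (outside this hunk), so the fix balances it on the IRQ_HANDLED return path. The invariant is the usual one: every exit from the critical section must drop the lock it acquired. A trivial userspace sketch of the pattern (pthreads standing in for the kernel mutex):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int count;

    static int handler(void)
    {
            pthread_mutex_lock(&lock);
            count++;
            if (count > 100) {                   /* early-exit path */
                    pthread_mutex_unlock(&lock); /* must unlock here too */
                    return -1;
            }
            pthread_mutex_unlock(&lock);         /* normal path */
            return 0;
    }

    int main(void)
    {
            printf("handler returned %d\n", handler());
            return 0;
    }
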
index 649e92a..7ef3a0c 100644 (file)
@@ -629,6 +629,7 @@ config SND_SOC_RT5514
 
 config SND_SOC_RT5616
        tristate "Realtek RT5616 CODEC"
+       depends on I2C
 
 config SND_SOC_RT5631
        tristate "Realtek ALC5631/RT5631 CODEC"
index 92d22a0..8395931 100644 (file)
@@ -249,6 +249,18 @@ int arizona_init_spk(struct snd_soc_codec *codec)
 }
 EXPORT_SYMBOL_GPL(arizona_init_spk);
 
+int arizona_free_spk(struct snd_soc_codec *codec)
+{
+       struct arizona_priv *priv = snd_soc_codec_get_drvdata(codec);
+       struct arizona *arizona = priv->arizona;
+
+       arizona_free_irq(arizona, ARIZONA_IRQ_SPK_OVERHEAT_WARN, arizona);
+       arizona_free_irq(arizona, ARIZONA_IRQ_SPK_OVERHEAT, arizona);
+
+       return 0;
+}
+EXPORT_SYMBOL_GPL(arizona_free_spk);
+
 static const struct snd_soc_dapm_route arizona_mono_routes[] = {
        { "OUT1R", NULL, "OUT1L" },
        { "OUT2R", NULL, "OUT2L" },
index 1ea8e4e..ce0531b 100644 (file)
@@ -307,6 +307,8 @@ extern int arizona_init_spk(struct snd_soc_codec *codec);
 extern int arizona_init_gpio(struct snd_soc_codec *codec);
 extern int arizona_init_mono(struct snd_soc_codec *codec);
 
+extern int arizona_free_spk(struct snd_soc_codec *codec);
+
 extern int arizona_init_dai(struct arizona_priv *priv, int dai);
 
 int arizona_set_output_mode(struct snd_soc_codec *codec, int output,
index 44c30fe..287d137 100644 (file)
@@ -274,7 +274,9 @@ static int cs35l32_handle_of_data(struct i2c_client *i2c_client,
        if (of_property_read_u32(np, "cirrus,sdout-share", &val) >= 0)
                pdata->sdout_share = val;
 
-       of_property_read_u32(np, "cirrus,boost-manager", &val);
+       if (of_property_read_u32(np, "cirrus,boost-manager", &val))
+               val = -1u;
+
        switch (val) {
        case CS35L32_BOOST_MGR_AUTO:
        case CS35L32_BOOST_MGR_AUTO_AUDIO:
@@ -282,13 +284,15 @@ static int cs35l32_handle_of_data(struct i2c_client *i2c_client,
        case CS35L32_BOOST_MGR_FIXED:
                pdata->boost_mng = val;
                break;
+       case -1u:
        default:
                dev_err(&i2c_client->dev,
                        "Wrong cirrus,boost-manager DT value %d\n", val);
                pdata->boost_mng = CS35L32_BOOST_MGR_BYPASS;
        }
 
-       of_property_read_u32(np, "cirrus,sdout-datacfg", &val);
+       if (of_property_read_u32(np, "cirrus,sdout-datacfg", &val))
+               val = -1u;
        switch (val) {
        case CS35L32_DATA_CFG_LR_VP:
        case CS35L32_DATA_CFG_LR_STAT:
@@ -296,13 +300,15 @@ static int cs35l32_handle_of_data(struct i2c_client *i2c_client,
        case CS35L32_DATA_CFG_LR_VPSTAT:
                pdata->sdout_datacfg = val;
                break;
+       case -1u:
        default:
                dev_err(&i2c_client->dev,
                        "Wrong cirrus,sdout-datacfg DT value %d\n", val);
                pdata->sdout_datacfg = CS35L32_DATA_CFG_LR;
        }
 
-       of_property_read_u32(np, "cirrus,battery-threshold", &val);
+       if (of_property_read_u32(np, "cirrus,battery-threshold", &val))
+               val = -1u;
        switch (val) {
        case CS35L32_BATT_THRESH_3_1V:
        case CS35L32_BATT_THRESH_3_2V:
@@ -310,13 +316,15 @@ static int cs35l32_handle_of_data(struct i2c_client *i2c_client,
        case CS35L32_BATT_THRESH_3_4V:
                pdata->batt_thresh = val;
                break;
+       case -1u:
        default:
                dev_err(&i2c_client->dev,
                        "Wrong cirrus,battery-threshold DT value %d\n", val);
                pdata->batt_thresh = CS35L32_BATT_THRESH_3_3V;
        }
 
-       of_property_read_u32(np, "cirrus,battery-recovery", &val);
+       if (of_property_read_u32(np, "cirrus,battery-recovery", &val))
+               val = -1u;
        switch (val) {
        case CS35L32_BATT_RECOV_3_1V:
        case CS35L32_BATT_RECOV_3_2V:
@@ -326,6 +334,7 @@ static int cs35l32_handle_of_data(struct i2c_client *i2c_client,
        case CS35L32_BATT_RECOV_3_6V:
                pdata->batt_recov = val;
                break;
+       case -1u:
        default:
                dev_err(&i2c_client->dev,
                        "Wrong cirrus,battery-recovery DT value %d\n", val);
index 576087b..00e9b6f 100644 (file)
@@ -1108,6 +1108,9 @@ static int cs47l24_codec_remove(struct snd_soc_codec *codec)
        priv->core.arizona->dapm = NULL;
 
        arizona_free_irq(arizona, ARIZONA_IRQ_DSP_IRQ1, priv);
+
+       arizona_free_spk(codec);
+
        return 0;
 }
 
index 26f9459..aaa038f 100644 (file)
@@ -1420,32 +1420,39 @@ static int hdmi_codec_remove(struct snd_soc_codec *codec)
 }
 
 #ifdef CONFIG_PM
-static int hdmi_codec_resume(struct snd_soc_codec *codec)
+static int hdmi_codec_prepare(struct device *dev)
 {
-       struct hdac_ext_device *edev = snd_soc_codec_get_drvdata(codec);
+       struct hdac_ext_device *edev = to_hda_ext_device(dev);
+       struct hdac_device *hdac = &edev->hdac;
+
+       pm_runtime_get_sync(&edev->hdac.dev);
+
+       /*
+        * Power down the afg.
+        * codec_read is preferred over codec_write to set the power state:
+        * this way a verb is sent to set the power state and a response is
+        * received, so the power state change is ensured without having to
+        * poll the state in a loop.
+        */
+       snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE,
+                                                       AC_PWRST_D3);
+
+       return 0;
+}
+
+static void hdmi_codec_complete(struct device *dev)
+{
+       struct hdac_ext_device *edev = to_hda_ext_device(dev);
        struct hdac_hdmi_priv *hdmi = edev->private_data;
        struct hdac_hdmi_pin *pin;
        struct hdac_device *hdac = &edev->hdac;
-       struct hdac_bus *bus = hdac->bus;
-       int err;
-       unsigned long timeout;
-
-       hdac_hdmi_skl_enable_all_pins(&edev->hdac);
-       hdac_hdmi_skl_enable_dp12(&edev->hdac);
 
        /* Power up afg */
-       if (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D0)) {
-
-               snd_hdac_codec_write(hdac, hdac->afg, 0,
-                       AC_VERB_SET_POWER_STATE, AC_PWRST_D0);
+       snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE,
+                                                       AC_PWRST_D0);
 
-               /* Wait till power state is set to D0 */
-               timeout = jiffies + msecs_to_jiffies(1000);
-               while (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D0)
-                               && time_before(jiffies, timeout)) {
-                       msleep(50);
-               }
-       }
+       hdac_hdmi_skl_enable_all_pins(&edev->hdac);
+       hdac_hdmi_skl_enable_dp12(&edev->hdac);
 
        /*
         * As the ELD notify callback request is not entertained while the
@@ -1455,28 +1462,16 @@ static int hdmi_codec_resume(struct snd_soc_codec *codec)
        list_for_each_entry(pin, &hdmi->pin_list, head)
                hdac_hdmi_present_sense(pin, 1);
 
-       /*
-        * Codec power is turned ON during controller resume.
-        * Turn it OFF here
-        */
-       err = snd_hdac_display_power(bus, false);
-       if (err < 0) {
-               dev_err(bus->dev,
-                       "Cannot turn OFF display power on i915, err: %d\n",
-                       err);
-               return err;
-       }
-
-       return 0;
+       pm_runtime_put_sync(&edev->hdac.dev);
 }
 #else
-#define hdmi_codec_resume NULL
+#define hdmi_codec_prepare NULL
+#define hdmi_codec_complete NULL
 #endif
 
 static struct snd_soc_codec_driver hdmi_hda_codec = {
        .probe          = hdmi_codec_probe,
        .remove         = hdmi_codec_remove,
-       .resume         = hdmi_codec_resume,
        .idle_bias_off  = true,
 };
 
@@ -1561,7 +1556,6 @@ static int hdac_hdmi_runtime_suspend(struct device *dev)
        struct hdac_ext_device *edev = to_hda_ext_device(dev);
        struct hdac_device *hdac = &edev->hdac;
        struct hdac_bus *bus = hdac->bus;
-       unsigned long timeout;
        int err;
 
        dev_dbg(dev, "Enter: %s\n", __func__);
@@ -1570,20 +1564,15 @@ static int hdac_hdmi_runtime_suspend(struct device *dev)
        if (!bus)
                return 0;
 
-       /* Power down afg */
-       if (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D3)) {
-               snd_hdac_codec_write(hdac, hdac->afg, 0,
-                       AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
-
-               /* Wait till power state is set to D3 */
-               timeout = jiffies + msecs_to_jiffies(1000);
-               while (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D3)
-                               && time_before(jiffies, timeout)) {
-
-                       msleep(50);
-               }
-       }
-
+       /*
+        * Power down the afg.
+        * codec_read is preferred over codec_write to set the power state:
+        * this way a verb is sent to set the power state and a response is
+        * received, so the power state change is ensured without having to
+        * poll the state in a loop.
+        */
+       snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE,
+                                                       AC_PWRST_D3);
        err = snd_hdac_display_power(bus, false);
        if (err < 0) {
                dev_err(bus->dev, "Cannot turn OFF display power on i915\n");
@@ -1616,9 +1605,8 @@ static int hdac_hdmi_runtime_resume(struct device *dev)
        hdac_hdmi_skl_enable_dp12(&edev->hdac);
 
        /* Power up afg */
-       if (!snd_hdac_check_power_state(hdac, hdac->afg, AC_PWRST_D0))
-               snd_hdac_codec_write(hdac, hdac->afg, 0,
-                       AC_VERB_SET_POWER_STATE, AC_PWRST_D0);
+       snd_hdac_codec_read(hdac, hdac->afg, 0, AC_VERB_SET_POWER_STATE,
+                                                       AC_PWRST_D0);
 
        return 0;
 }
@@ -1629,6 +1617,8 @@ static int hdac_hdmi_runtime_resume(struct device *dev)
 
 static const struct dev_pm_ops hdac_hdmi_pm = {
        SET_RUNTIME_PM_OPS(hdac_hdmi_runtime_suspend, hdac_hdmi_runtime_resume, NULL)
+       .prepare = hdmi_codec_prepare,
+       .complete = hdmi_codec_complete,
 };
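
Note: moving the D3 transition from the codec-driver resume into system-sleep prepare()/complete() hooks lets it bracket the whole suspend sequence while SET_RUNTIME_PM_OPS keeps handling runtime PM, with prepare() pinning the device via pm_runtime_get_sync() as above. A minimal kernel-style sketch of pairing the two (names are illustrative, not from this driver):

    #include <linux/device.h>
    #include <linux/pm.h>
    #include <linux/pm_runtime.h>

    static int foo_runtime_suspend(struct device *dev) { return 0; }
    static int foo_runtime_resume(struct device *dev)  { return 0; }

    /* prepare() runs before system suspend, complete() after resume */
    static int foo_prepare(struct device *dev)
    {
            pm_runtime_get_sync(dev);       /* keep device up for D3 verb */
            return 0;
    }

    static void foo_complete(struct device *dev)
    {
            pm_runtime_put_sync(dev);       /* drop the reference again */
    }

    static const struct dev_pm_ops foo_pm = {
            SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
            .prepare  = foo_prepare,
            .complete = foo_complete,
    };
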
 
 static const struct hda_device_id hdmi_list[] = {
index 1c87299..683769f 100644 (file)
@@ -343,9 +343,12 @@ static const struct snd_soc_dapm_widget nau8825_dapm_widgets[] = {
        SND_SOC_DAPM_SUPPLY("ADC Power", NAU8825_REG_ANALOG_ADC_2, 6, 0, NULL,
                0),
 
-       /* ADC for button press detection */
-       SND_SOC_DAPM_ADC("SAR", NULL, NAU8825_REG_SAR_CTRL,
-               NAU8825_SAR_ADC_EN_SFT, 0),
+       /* ADC for button press detection. A dapm supply widget is used to
+        * prevent dapm_power_widgets keeping the codec at SND_SOC_BIAS_ON
+        * during suspend.
+        */
+       SND_SOC_DAPM_SUPPLY("SAR", NAU8825_REG_SAR_CTRL,
+               NAU8825_SAR_ADC_EN_SFT, 0, NULL, 0),
 
        SND_SOC_DAPM_PGA_S("ADACL", 2, NAU8825_REG_RDAC, 12, 0, NULL, 0),
        SND_SOC_DAPM_PGA_S("ADACR", 2, NAU8825_REG_RDAC, 13, 0, NULL, 0),
@@ -607,6 +610,16 @@ static bool nau8825_is_jack_inserted(struct regmap *regmap)
 
 static void nau8825_restart_jack_detection(struct regmap *regmap)
 {
+       /* Chip needs one FSCLK cycle in order to generate interrupts,
+        * as we cannot guarantee one will be provided by the system. Turning
+        * master mode on then off enables us to generate that FSCLK cycle
+        * with a minimum of contention on the clock bus.
+        */
+       regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2,
+               NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_MASTER);
+       regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2,
+               NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_SLAVE);
+
        /* this will restart the entire jack detection process including MIC/GND
         * switching and create interrupts. We have to go from 0 to 1 and back
         * to 0 to restart.
@@ -728,7 +741,10 @@ static irqreturn_t nau8825_interrupt(int irq, void *data)
        struct regmap *regmap = nau8825->regmap;
        int active_irq, clear_irq = 0, event = 0, event_mask = 0;
 
-       regmap_read(regmap, NAU8825_REG_IRQ_STATUS, &active_irq);
+       if (regmap_read(regmap, NAU8825_REG_IRQ_STATUS, &active_irq)) {
+               dev_err(nau8825->dev, "failed to read irq status\n");
+               return IRQ_NONE;
+       }
 
        if ((active_irq & NAU8825_JACK_EJECTION_IRQ_MASK) ==
                NAU8825_JACK_EJECTION_DETECTED) {
@@ -1141,33 +1157,74 @@ static int nau8825_set_bias_level(struct snd_soc_codec *codec,
                                        return ret;
                                }
                        }
-
-                       ret = regcache_sync(nau8825->regmap);
-                       if (ret) {
-                               dev_err(codec->dev,
-                                       "Failed to sync cache: %d\n", ret);
-                               return ret;
-                       }
                }
-
                break;
 
        case SND_SOC_BIAS_OFF:
                if (nau8825->mclk_freq)
                        clk_disable_unprepare(nau8825->mclk);
-
-               regcache_mark_dirty(nau8825->regmap);
                break;
        }
        return 0;
 }
 
+#ifdef CONFIG_PM
+static int nau8825_suspend(struct snd_soc_codec *codec)
+{
+       struct nau8825 *nau8825 = snd_soc_codec_get_drvdata(codec);
+
+       disable_irq(nau8825->irq);
+       regcache_cache_only(nau8825->regmap, true);
+       regcache_mark_dirty(nau8825->regmap);
+
+       return 0;
+}
+
+static int nau8825_resume(struct snd_soc_codec *codec)
+{
+       struct nau8825 *nau8825 = snd_soc_codec_get_drvdata(codec);
+
+       /* The chip may lose power and reset in S3. regcache_sync restores
+        * register values including configurations for sysclk, irq, and
+        * jack/button detection.
+        */
+       regcache_cache_only(nau8825->regmap, false);
+       regcache_sync(nau8825->regmap);
+
+       /* Check the jack plug status directly. If the headset is unplugged
+        * during S3 when the chip has no power, there will be no jack
+        * detection irq even after the nau8825_restart_jack_detection below,
+        * because the chip just thinks no headset has ever been plugged in.
+        */
+       if (!nau8825_is_jack_inserted(nau8825->regmap)) {
+               nau8825_eject_jack(nau8825);
+               snd_soc_jack_report(nau8825->jack, 0, SND_JACK_HEADSET);
+       }
+
+       enable_irq(nau8825->irq);
+
+       /* Run jack detection to check the type (OMTP or CTIA) of the headset
+        * if there is one. This handles the case where a different type of
+        * headset is plugged in during S3. This triggers an IRQ iff a headset
+        * is already plugged in.
+        */
+       nau8825_restart_jack_detection(nau8825->regmap);
+
+       return 0;
+}
+#else
+#define nau8825_suspend NULL
+#define nau8825_resume NULL
+#endif
+
 static struct snd_soc_codec_driver nau8825_codec_driver = {
        .probe = nau8825_codec_probe,
        .set_sysclk = nau8825_set_sysclk,
        .set_pll = nau8825_set_pll,
        .set_bias_level = nau8825_set_bias_level,
        .suspend_bias_off = true,
+       .suspend = nau8825_suspend,
+       .resume = nau8825_resume,
 
        .controls = nau8825_controls,
        .num_controls = ARRAY_SIZE(nau8825_controls),
@@ -1277,16 +1334,6 @@ static int nau8825_setup_irq(struct nau8825 *nau8825)
        regmap_update_bits(regmap, NAU8825_REG_ENA_CTRL,
                NAU8825_ENABLE_DACR, NAU8825_ENABLE_DACR);
 
-       /* Chip needs one FSCLK cycle in order to generate interrupts,
-        * as we cannot guarantee one will be provided by the system. Turning
-        * master mode on then off enables us to generate that FSCLK cycle
-        * with a minimum of contention on the clock bus.
-        */
-       regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2,
-               NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_MASTER);
-       regmap_update_bits(regmap, NAU8825_REG_I2S_PCM_CTRL2,
-               NAU8825_I2S_MS_MASK, NAU8825_I2S_MS_SLAVE);
-
        ret = devm_request_threaded_irq(nau8825->dev, nau8825->irq, NULL,
                nau8825_interrupt, IRQF_TRIGGER_LOW | IRQF_ONESHOT,
                "nau8825", nau8825);
@@ -1354,36 +1401,6 @@ static int nau8825_i2c_remove(struct i2c_client *client)
        return 0;
 }
 
-#ifdef CONFIG_PM_SLEEP
-static int nau8825_suspend(struct device *dev)
-{
-       struct i2c_client *client = to_i2c_client(dev);
-       struct nau8825 *nau8825 = dev_get_drvdata(dev);
-
-       disable_irq(client->irq);
-       regcache_cache_only(nau8825->regmap, true);
-       regcache_mark_dirty(nau8825->regmap);
-
-       return 0;
-}
-
-static int nau8825_resume(struct device *dev)
-{
-       struct i2c_client *client = to_i2c_client(dev);
-       struct nau8825 *nau8825 = dev_get_drvdata(dev);
-
-       regcache_cache_only(nau8825->regmap, false);
-       regcache_sync(nau8825->regmap);
-       enable_irq(client->irq);
-
-       return 0;
-}
-#endif
-
-static const struct dev_pm_ops nau8825_pm = {
-       SET_SYSTEM_SLEEP_PM_OPS(nau8825_suspend, nau8825_resume)
-};
-
 static const struct i2c_device_id nau8825_i2c_ids[] = {
        { "nau8825", 0 },
        { }
@@ -1410,7 +1427,6 @@ static struct i2c_driver nau8825_driver = {
                .name = "nau8825",
                .of_match_table = of_match_ptr(nau8825_of_ids),
                .acpi_match_table = ACPI_PTR(nau8825_acpi_match),
-               .pm = &nau8825_pm,
        },
        .probe = nau8825_i2c_probe,
        .remove = nau8825_i2c_remove,
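
Note: with suspend/resume now at the codec-driver level, the regcache handling follows the standard regmap pattern: flip the map to cache-only and mark it dirty on the way down, then re-enable I/O and replay the cache on the way up. A condensed sketch of just that pairing (kernel regmap API; error handling omitted):

    #include <linux/regmap.h>

    static int codec_suspend(struct regmap *map)
    {
            regcache_cache_only(map, true); /* queue writes in the cache */
            regcache_mark_dirty(map);       /* assume HW will lose state */
            return 0;
    }

    static int codec_resume(struct regmap *map)
    {
            regcache_cache_only(map, false);
            return regcache_sync(map);      /* replay dirty registers */
    }
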
index e8b5ba0..09e8988 100644 (file)
@@ -359,7 +359,7 @@ static const DECLARE_TLV_DB_RANGE(bst_tlv,
 
 /* Interface data select */
 static const char * const rt5640_data_select[] = {
-       "Normal", "left copy to right", "right copy to left", "Swap"};
+       "Normal", "Swap", "left copy to right", "right copy to left"};
 
 static SOC_ENUM_SINGLE_DECL(rt5640_if1_dac_enum, RT5640_DIG_INF_DATA,
                            RT5640_IF1_DAC_SEL_SFT, rt5640_data_select);
index 1761c3a..58b664b 100644 (file)
 #define RT5640_IF1_DAC_SEL_MASK                        (0x3 << 14)
 #define RT5640_IF1_DAC_SEL_SFT                 14
 #define RT5640_IF1_DAC_SEL_NOR                 (0x0 << 14)
-#define RT5640_IF1_DAC_SEL_L2R                 (0x1 << 14)
-#define RT5640_IF1_DAC_SEL_R2L                 (0x2 << 14)
-#define RT5640_IF1_DAC_SEL_SWAP                        (0x3 << 14)
+#define RT5640_IF1_DAC_SEL_SWAP                        (0x1 << 14)
+#define RT5640_IF1_DAC_SEL_L2R                 (0x2 << 14)
+#define RT5640_IF1_DAC_SEL_R2L                 (0x3 << 14)
 #define RT5640_IF1_ADC_SEL_MASK                        (0x3 << 12)
 #define RT5640_IF1_ADC_SEL_SFT                 12
 #define RT5640_IF1_ADC_SEL_NOR                 (0x0 << 12)
-#define RT5640_IF1_ADC_SEL_L2R                 (0x1 << 12)
-#define RT5640_IF1_ADC_SEL_R2L                 (0x2 << 12)
-#define RT5640_IF1_ADC_SEL_SWAP                        (0x3 << 12)
+#define RT5640_IF1_ADC_SEL_SWAP                        (0x1 << 12)
+#define RT5640_IF1_ADC_SEL_L2R                 (0x2 << 12)
+#define RT5640_IF1_ADC_SEL_R2L                 (0x3 << 12)
 #define RT5640_IF2_DAC_SEL_MASK                        (0x3 << 10)
 #define RT5640_IF2_DAC_SEL_SFT                 10
 #define RT5640_IF2_DAC_SEL_NOR                 (0x0 << 10)
-#define RT5640_IF2_DAC_SEL_L2R                 (0x1 << 10)
-#define RT5640_IF2_DAC_SEL_R2L                 (0x2 << 10)
-#define RT5640_IF2_DAC_SEL_SWAP                        (0x3 << 10)
+#define RT5640_IF2_DAC_SEL_SWAP                        (0x1 << 10)
+#define RT5640_IF2_DAC_SEL_L2R                 (0x2 << 10)
+#define RT5640_IF2_DAC_SEL_R2L                 (0x3 << 10)
 #define RT5640_IF2_ADC_SEL_MASK                        (0x3 << 8)
 #define RT5640_IF2_ADC_SEL_SFT                 8
 #define RT5640_IF2_ADC_SEL_NOR                 (0x0 << 8)
-#define RT5640_IF2_ADC_SEL_L2R                 (0x1 << 8)
-#define RT5640_IF2_ADC_SEL_R2L                 (0x2 << 8)
-#define RT5640_IF2_ADC_SEL_SWAP                        (0x3 << 8)
+#define RT5640_IF2_ADC_SEL_SWAP                        (0x1 << 8)
+#define RT5640_IF2_ADC_SEL_L2R                 (0x2 << 8)
+#define RT5640_IF2_ADC_SEL_R2L                 (0x3 << 8)
 #define RT5640_IF3_DAC_SEL_MASK                        (0x3 << 6)
 #define RT5640_IF3_DAC_SEL_SFT                 6
 #define RT5640_IF3_DAC_SEL_NOR                 (0x0 << 6)
-#define RT5640_IF3_DAC_SEL_L2R                 (0x1 << 6)
-#define RT5640_IF3_DAC_SEL_R2L                 (0x2 << 6)
-#define RT5640_IF3_DAC_SEL_SWAP                        (0x3 << 6)
+#define RT5640_IF3_DAC_SEL_SWAP                        (0x1 << 6)
+#define RT5640_IF3_DAC_SEL_L2R                 (0x2 << 6)
+#define RT5640_IF3_DAC_SEL_R2L                 (0x3 << 6)
 #define RT5640_IF3_ADC_SEL_MASK                        (0x3 << 4)
 #define RT5640_IF3_ADC_SEL_SFT                 4
 #define RT5640_IF3_ADC_SEL_NOR                 (0x0 << 4)
-#define RT5640_IF3_ADC_SEL_L2R                 (0x1 << 4)
-#define RT5640_IF3_ADC_SEL_R2L                 (0x2 << 4)
-#define RT5640_IF3_ADC_SEL_SWAP                        (0x3 << 4)
+#define RT5640_IF3_ADC_SEL_SWAP                        (0x1 << 4)
+#define RT5640_IF3_ADC_SEL_L2R                 (0x2 << 4)
+#define RT5640_IF3_ADC_SEL_R2L                 (0x3 << 4)
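
Note: both halves of this fix enforce the same invariant: for a plain SOC_ENUM, the index of a string in the texts array is exactly the value written to the register field (shifted by _SFT), so the texts and the _SEL_ defines must follow the hardware's value order, with "Swap" at value 1. A standalone illustration of the index-to-field mapping:

    #include <stdio.h>

    /* control item index == register field value (before shifting) */
    static const char * const rt5640_data_select[] = {
            "Normal", "Swap", "left copy to right", "right copy to left",
    };

    int main(void)
    {
            unsigned int sft = 14;  /* RT5640_IF1_DAC_SEL_SFT */

            for (unsigned int i = 0; i < 4; i++)
                    printf("item %u: %-22s -> reg bits 0x%04x\n", i,
                           rt5640_data_select[i], i << sft);
            return 0;
    }
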
 
 /* REC Left Mixer Control 1 (0x3b) */
 #define RT5640_G_HP_L_RM_L_MASK                        (0x7 << 13)
index a8b3e3f..1bae17e 100644 (file)
@@ -1955,11 +1955,16 @@ err_adsp2_codec_probe:
 static int wm5102_codec_remove(struct snd_soc_codec *codec)
 {
        struct wm5102_priv *priv = snd_soc_codec_get_drvdata(codec);
+       struct arizona *arizona = priv->core.arizona;
 
        wm_adsp2_codec_remove(&priv->core.adsp[0], codec);
 
        priv->core.arizona->dapm = NULL;
 
+       arizona_free_irq(arizona, ARIZONA_IRQ_DSP_IRQ1, priv);
+
+       arizona_free_spk(codec);
+
        return 0;
 }
 
index 83ba70f..2728ac5 100644 (file)
@@ -2298,6 +2298,8 @@ static int wm5110_codec_remove(struct snd_soc_codec *codec)
 
        arizona_free_irq(arizona, ARIZONA_IRQ_DSP_IRQ1, priv);
 
+       arizona_free_spk(codec);
+
        return 0;
 }
 
index 8822360..720a14e 100644 (file)
@@ -2471,7 +2471,7 @@ static void wm8962_configure_bclk(struct snd_soc_codec *codec)
                break;
        default:
                dev_warn(codec->dev, "Unknown DSPCLK divisor read back\n");
-               dspclk = wm8962->sysclk;
+               dspclk = wm8962->sysclk_rate;
        }
 
        dev_dbg(codec->dev, "DSPCLK is %dHz, BCLK %d\n", dspclk, wm8962->bclk);
index 52d766e..6b0785b 100644 (file)
@@ -1072,6 +1072,8 @@ static int wm8997_codec_remove(struct snd_soc_codec *codec)
 
        priv->core.arizona->dapm = NULL;
 
+       arizona_free_spk(codec);
+
        return 0;
 }
 
index 0123960..449f666 100644 (file)
@@ -1324,6 +1324,8 @@ static int wm8998_codec_remove(struct snd_soc_codec *codec)
 
        priv->core.arizona->dapm = NULL;
 
+       arizona_free_spk(codec);
+
        return 0;
 }
 
index b3e6c23..1120f4f 100644 (file)
@@ -163,7 +163,6 @@ config SND_SOC_INTEL_SKYLAKE
        tristate
        select SND_HDA_EXT_CORE
        select SND_SOC_TOPOLOGY
-       select SND_HDA_I915
        select SND_SOC_INTEL_SST
 
 config SND_SOC_INTEL_SKL_RT286_MACH
index ac60f13..9156522 100644 (file)
@@ -1345,7 +1345,7 @@ int sst_hsw_stream_reset(struct sst_hsw *hsw, struct sst_hsw_stream *stream)
                return 0;
 
        /* wait for pause to complete before we reset the stream */
-       while (stream->running && tries--)
+       while (stream->running && --tries)
                msleep(1);
        if (!tries) {
                dev_err(hsw->dev, "error: reset stream %d still running\n",
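
Note: the one-character change matters because of where the decrement lands. With the post-decrement, the loop exits by exhaustion with tries == -1, so the if (!tries) timeout report can never fire; the pre-decrement leaves tries == 0 on exhaustion (at the cost of one fewer polling iteration). A standalone demonstration:

    #include <stdio.h>

    int main(void)
    {
            int running = 1, tries;

            tries = 3;
            while (running && tries--)      /* old: exits with tries == -1 */
                    ;
            printf("post-decrement: tries == %d, timeout %sreported\n",
                   tries, !tries ? "" : "NOT ");

            tries = 3;
            while (running && --tries)      /* new: exits with tries == 0 */
                    ;
            printf("pre-decrement:  tries == %d, timeout %sreported\n",
                   tries, !tries ? "" : "NOT ");
            return 0;
    }
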
index a5267e8..2962ef2 100644 (file)
@@ -336,6 +336,11 @@ void skl_dsp_free(struct sst_dsp *dsp)
        skl_ipc_int_disable(dsp);
 
        free_irq(dsp->irq, dsp);
+       dsp->cl_dev.ops.cl_cleanup_controller(dsp);
+       skl_cldma_int_disable(dsp);
+       skl_ipc_op_int_disable(dsp);
+       skl_ipc_int_disable(dsp);
+
        skl_dsp_disable_core(dsp);
 }
 EXPORT_SYMBOL_GPL(skl_dsp_free);
index 545b4e7..cdb78b7 100644 (file)
@@ -239,6 +239,7 @@ static void skl_tplg_update_buffer_size(struct skl_sst *ctx,
 {
        int multiplier = 1;
        struct skl_module_fmt *in_fmt, *out_fmt;
+       int in_rate, out_rate;
 
 
        /* Since fixups is applied to pin 0 only, ibs, obs needs
@@ -249,15 +250,24 @@ static void skl_tplg_update_buffer_size(struct skl_sst *ctx,
 
        if (mcfg->m_type == SKL_MODULE_TYPE_SRCINT)
                multiplier = 5;
-       mcfg->ibs = (in_fmt->s_freq / 1000) *
-                               (mcfg->in_fmt->channels) *
-                               (mcfg->in_fmt->bit_depth >> 3) *
-                               multiplier;
-
-       mcfg->obs = (mcfg->out_fmt->s_freq / 1000) *
-                               (mcfg->out_fmt->channels) *
-                               (mcfg->out_fmt->bit_depth >> 3) *
-                               multiplier;
+
+       if (in_fmt->s_freq % 1000)
+               in_rate = (in_fmt->s_freq / 1000) + 1;
+       else
+               in_rate = (in_fmt->s_freq / 1000);
+
+       mcfg->ibs = in_rate * (mcfg->in_fmt->channels) *
+                       (mcfg->in_fmt->bit_depth >> 3) *
+                       multiplier;
+
+       if (mcfg->out_fmt->s_freq % 1000)
+               out_rate = (mcfg->out_fmt->s_freq / 1000) + 1;
+       else
+               out_rate = (mcfg->out_fmt->s_freq / 1000);
+
+       mcfg->obs = out_rate * (mcfg->out_fmt->channels) *
+                       (mcfg->out_fmt->bit_depth >> 3) *
+                       multiplier;
 }
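
Note: the open-coded "add one if there is a remainder" is a round-up division: buffer sizes must not be truncated for rates that are not a whole multiple of 1kHz, 44.1kHz being the usual offender. The kernel's DIV_ROUND_UP() expresses the same thing; a quick standalone comparison:

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
            unsigned int freqs[] = { 44100, 48000 };

            for (unsigned int i = 0; i < 2; i++)
                    printf("%u Hz: truncated %u, rounded up %u samples/ms\n",
                           freqs[i], freqs[i] / 1000,
                           DIV_ROUND_UP(freqs[i], 1000));
            return 0;       /* 44100 Hz needs 45 samples/ms, not 44 */
    }
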
 
 static int skl_tplg_update_be_blob(struct snd_soc_dapm_widget *w,
@@ -485,11 +495,15 @@ skl_tplg_init_pipe_modules(struct skl *skl, struct skl_pipe *pipe)
                if (!skl_is_pipe_mcps_avail(skl, mconfig))
                        return -ENOMEM;
 
+               skl_tplg_alloc_pipe_mcps(skl, mconfig);
+
                if (mconfig->is_loadable && ctx->dsp->fw_ops.load_mod) {
                        ret = ctx->dsp->fw_ops.load_mod(ctx->dsp,
                                mconfig->id.module_id, mconfig->guid);
                        if (ret < 0)
                                return ret;
+
+                       mconfig->m_state = SKL_MODULE_LOADED;
                }
 
                /* update blob if blob is null for be with default value */
@@ -509,7 +523,6 @@ skl_tplg_init_pipe_modules(struct skl *skl, struct skl_pipe *pipe)
                ret = skl_tplg_set_module_params(w, ctx);
                if (ret < 0)
                        return ret;
-               skl_tplg_alloc_pipe_mcps(skl, mconfig);
        }
 
        return 0;
@@ -524,7 +537,8 @@ static int skl_tplg_unload_pipe_modules(struct skl_sst *ctx,
        list_for_each_entry(w_module, &pipe->w_list, node) {
                mconfig  = w_module->w->priv;
 
-               if (mconfig->is_loadable && ctx->dsp->fw_ops.unload_mod)
+               if (mconfig->is_loadable && ctx->dsp->fw_ops.unload_mod &&
+                       mconfig->m_state > SKL_MODULE_UNINIT)
                        return ctx->dsp->fw_ops.unload_mod(ctx->dsp,
                                                mconfig->id.module_id);
        }
@@ -558,6 +572,9 @@ static int skl_tplg_mixer_dapm_pre_pmu_event(struct snd_soc_dapm_widget *w,
        if (!skl_is_pipe_mem_avail(skl, mconfig))
                return -ENOMEM;
 
+       skl_tplg_alloc_pipe_mem(skl, mconfig);
+       skl_tplg_alloc_pipe_mcps(skl, mconfig);
+
        /*
         * Create a list of modules for pipe.
         * This list contains modules from source to sink
@@ -601,9 +618,6 @@ static int skl_tplg_mixer_dapm_pre_pmu_event(struct snd_soc_dapm_widget *w,
                src_module = dst_module;
        }
 
-       skl_tplg_alloc_pipe_mem(skl, mconfig);
-       skl_tplg_alloc_pipe_mcps(skl, mconfig);
-
        return 0;
 }
 
index de3c401..d2d9230 100644 (file)
@@ -274,10 +274,10 @@ struct skl_pipe {
 
 enum skl_module_state {
        SKL_MODULE_UNINIT = 0,
-       SKL_MODULE_INIT_DONE = 1,
-       SKL_MODULE_LOADED = 2,
-       SKL_MODULE_UNLOADED = 3,
-       SKL_MODULE_BIND_DONE = 4
+       SKL_MODULE_LOADED = 1,
+       SKL_MODULE_INIT_DONE = 2,
+       SKL_MODULE_BIND_DONE = 3,
+       SKL_MODULE_UNLOADED = 4,
 };
 
 struct skl_module_cfg {
index ab5e25a..3982f55 100644 (file)
@@ -222,6 +222,7 @@ static int skl_suspend(struct device *dev)
        struct hdac_ext_bus *ebus = pci_get_drvdata(pci);
        struct skl *skl  = ebus_to_skl(ebus);
        struct hdac_bus *bus = ebus_to_hbus(ebus);
+       int ret = 0;
 
        /*
         * Do not suspend if streams which are marked ignore suspend are
@@ -232,10 +233,20 @@ static int skl_suspend(struct device *dev)
                enable_irq_wake(bus->irq);
                pci_save_state(pci);
                pci_disable_device(pci);
-               return 0;
        } else {
-               return _skl_suspend(ebus);
+               ret = _skl_suspend(ebus);
+               if (ret < 0)
+                       return ret;
+       }
+
+       if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI)) {
+               ret = snd_hdac_display_power(bus, false);
+               if (ret < 0)
+                       dev_err(bus->dev,
+                               "Cannot turn OFF display power on i915\n");
        }
+
+       return ret;
 }
 
 static int skl_resume(struct device *dev)
@@ -316,17 +327,20 @@ static int skl_free(struct hdac_ext_bus *ebus)
 
        if (bus->irq >= 0)
                free_irq(bus->irq, (void *)bus);
-       if (bus->remap_addr)
-               iounmap(bus->remap_addr);
-
        snd_hdac_bus_free_stream_pages(bus);
        snd_hdac_stream_free_all(ebus);
        snd_hdac_link_free_all(ebus);
+
+       if (bus->remap_addr)
+               iounmap(bus->remap_addr);
+
        pci_release_regions(skl->pci);
        pci_disable_device(skl->pci);
 
        snd_hdac_ext_bus_exit(ebus);
 
+       if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI))
+               snd_hdac_i915_exit(&ebus->bus);
        return 0;
 }
 
@@ -719,12 +733,12 @@ static void skl_remove(struct pci_dev *pci)
        if (skl->tplg)
                release_firmware(skl->tplg);
 
-       if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI))
-               snd_hdac_i915_exit(&ebus->bus);
-
        if (pci_dev_run_wake(pci))
                pm_runtime_get_noresume(&pci->dev);
-       pci_dev_put(pci);
+
+       /* codec removal, invoke bus_device_remove */
+       snd_hdac_ext_bus_device_remove(ebus);
+
        skl_platform_unregister(&pci->dev);
        skl_free_dsp(skl);
        skl_machine_device_unregister(skl);
index 801ae1a..c446485 100644 (file)
@@ -2188,6 +2188,13 @@ static ssize_t dapm_widget_show_component(struct snd_soc_component *cmpnt,
        int count = 0;
        char *state = "not set";
 
+       /* The card won't be set for the dummy component. As a spot fix we
+        * check for that case specifically here; in future the dummy
+        * component will be made to look like the others.
+        */
+       if (!cmpnt->card)
+               return 0;
+
        list_for_each_entry(w, &cmpnt->card->widgets, list) {
                if (w->dapm != dapm)
                        continue;
index 5a95896..55a60d3 100644 (file)
@@ -299,18 +299,38 @@ they mean, and suggestions for how to fix them.
 Errors in .c files
 ------------------
 
-If you're getting an objtool error in a compiled .c file, chances are
-the file uses an asm() statement which has a "call" instruction.  An
-asm() statement with a call instruction must declare the use of the
-stack pointer in its output operand.  For example, on x86_64:
+1. c_file.o: warning: objtool: funcA() falls through to next function funcB()
 
-   register void *__sp asm("rsp");
-   asm volatile("call func" : "+r" (__sp));
+   This means that funcA() doesn't end with a return instruction or an
+   unconditional jump, and that objtool has determined that the function
+   can fall through into the next function.  There could be different
+   reasons for this:
 
-Otherwise the stack frame may not get created before the call.
+   1) funcA()'s last instruction is a call to a "noreturn" function like
+      panic().  In this case the noreturn function needs to be added to
+      objtool's hard-coded global_noreturns array.  Feel free to bug the
+      objtool maintainer, or you can submit a patch.
 
-Another possible cause for errors in C code is if the Makefile removes
--fno-omit-frame-pointer or adds -fomit-frame-pointer to the gcc options.
+   2) funcA() uses the unreachable() annotation in a section of code
+      that is actually reachable.
+
+   3) If funcA() calls an inline function, the object code for funcA()
+      might be corrupt due to a gcc bug.  For more details, see:
+      https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70646
+
+2. If you're getting any other objtool error in a compiled .c file, it
+   may be because the file uses an asm() statement which has a "call"
+   instruction.  An asm() statement with a call instruction must declare
+   the use of the stack pointer in its output operand.  For example, on
+   x86_64:
+
+     register void *__sp asm("rsp");
+     asm volatile("call func" : "+r" (__sp));
+
+   Otherwise the stack frame may not get created before the call.
+
+3. Another possible cause for errors in C code is if the Makefile removes
+   -fno-omit-frame-pointer or adds -fomit-frame-pointer to the gcc options.
 
 Also see the above section on .S file errors for more information about
 what the individual error messages mean.
index 7515cb2..e8a1e69 100644 (file)
@@ -54,6 +54,7 @@ struct instruction {
        struct symbol *call_dest;
        struct instruction *jump_dest;
        struct list_head alts;
+       struct symbol *func;
 };
 
 struct alternative {
@@ -66,6 +67,7 @@ struct objtool_file {
        struct list_head insn_list;
        DECLARE_HASHTABLE(insn_hash, 16);
        struct section *rodata, *whitelist;
+       bool ignore_unreachables, c_file;
 };
 
 const char *objname;
@@ -228,7 +230,7 @@ static int __dead_end_function(struct objtool_file *file, struct symbol *func,
                        }
                }
 
-               if (insn->type == INSN_JUMP_DYNAMIC)
+               if (insn->type == INSN_JUMP_DYNAMIC && list_empty(&insn->alts))
                        /* sibling call */
                        return 0;
        }
@@ -248,6 +250,7 @@ static int dead_end_function(struct objtool_file *file, struct symbol *func)
 static int decode_instructions(struct objtool_file *file)
 {
        struct section *sec;
+       struct symbol *func;
        unsigned long offset;
        struct instruction *insn;
        int ret;
@@ -281,6 +284,21 @@ static int decode_instructions(struct objtool_file *file)
                        hash_add(file->insn_hash, &insn->hash, insn->offset);
                        list_add_tail(&insn->list, &file->insn_list);
                }
+
+               list_for_each_entry(func, &sec->symbol_list, list) {
+                       if (func->type != STT_FUNC)
+                               continue;
+
+                       if (!find_insn(file, sec, func->offset)) {
+                               WARN("%s(): can't find starting instruction",
+                                    func->name);
+                               return -1;
+                       }
+
+                       func_for_each_insn(file, func, insn)
+                               if (!insn->func)
+                                       insn->func = func;
+               }
        }
 
        return 0;
@@ -664,13 +682,40 @@ static int add_func_switch_tables(struct objtool_file *file,
                                                text_rela->addend);
 
                /*
-                * TODO: Document where this is needed, or get rid of it.
-                *
                 * rare case:   jmpq *[addr](%rip)
+                *
+                * This check is for a rare gcc quirk, currently only seen in
+                * three driver functions in the kernel, only with certain
+                * obscure non-distro configs.
+                *
+                * As part of an optimization, gcc makes a copy of an existing
+                * switch jump table, modifies it, and then hard-codes the jump
+                * (albeit with an indirect jump) to use a single entry in the
+                * table.  The rest of the jump table and some of its jump
+                * targets remain as dead code.
+                *
+                * In such a case we can just crudely ignore all unreachable
+                * instruction warnings for the entire object file.  Ideally we
+                * would just ignore them for the function, but that would
+                * require redesigning the code quite a bit.  And honestly
+                * that's just not worth doing: unreachable instruction
+                * warnings are of questionable value anyway, and this is such
+                * a rare issue.
+                *
+                * kbuild reports:
+                * - https://lkml.kernel.org/r/201603231906.LWcVUpxm%25fengguang.wu@intel.com
+                * - https://lkml.kernel.org/r/201603271114.K9i45biy%25fengguang.wu@intel.com
+                * - https://lkml.kernel.org/r/201603291058.zuJ6ben1%25fengguang.wu@intel.com
+                *
+                * gcc bug:
+                * - https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70604
                 */
-               if (!rodata_rela)
+               if (!rodata_rela) {
                        rodata_rela = find_rela_by_dest(file->rodata,
                                                        text_rela->addend + 4);
+                       if (rodata_rela)
+                               file->ignore_unreachables = true;
+               }
 
                if (!rodata_rela)
                        continue;
@@ -732,9 +777,6 @@ static int decode_sections(struct objtool_file *file)
 {
        int ret;
 
-       file->whitelist = find_section_by_name(file->elf, "__func_stack_frame_non_standard");
-       file->rodata = find_section_by_name(file->elf, ".rodata");
-
        ret = decode_instructions(file);
        if (ret)
                return ret;
@@ -799,6 +841,7 @@ static int validate_branch(struct objtool_file *file,
        struct alternative *alt;
        struct instruction *insn;
        struct section *sec;
+       struct symbol *func = NULL;
        unsigned char state;
        int ret;
 
@@ -813,6 +856,16 @@ static int validate_branch(struct objtool_file *file,
        }
 
        while (1) {
+               if (file->c_file && insn->func) {
+                       if (func && func != insn->func) {
+                               WARN("%s() falls through to next function %s()",
+                                    func->name, insn->func->name);
+                               return 1;
+                       }
+
+                       func = insn->func;
+               }
+
                if (insn->visited) {
                        if (frame_state(insn->state) != frame_state(state)) {
                                WARN_FUNC("frame pointer state mismatch",
@@ -823,13 +876,6 @@ static int validate_branch(struct objtool_file *file,
                        return 0;
                }
 
-               /*
-                * Catch a rare case where a noreturn function falls through to
-                * the next function.
-                */
-               if (is_fentry_call(insn) && (state & STATE_FENTRY))
-                       return 0;
-
                insn->visited = true;
                insn->state = state;
 
@@ -1035,12 +1081,8 @@ static int validate_functions(struct objtool_file *file)
                                continue;
 
                        insn = find_insn(file, sec, func->offset);
-                       if (!insn) {
-                               WARN("%s(): can't find starting instruction",
-                                    func->name);
-                               warnings++;
+                       if (!insn)
                                continue;
-                       }
 
                        ret = validate_branch(file, insn, 0);
                        warnings += ret;
@@ -1056,13 +1098,14 @@ static int validate_functions(struct objtool_file *file)
                                if (insn->visited)
                                        continue;
 
-                               if (!ignore_unreachable_insn(func, insn) &&
-                                   !warnings) {
-                                       WARN_FUNC("function has unreachable instruction", insn->sec, insn->offset);
-                                       warnings++;
-                               }
-
                                insn->visited = true;
+
+                               if (file->ignore_unreachables || warnings ||
+                                   ignore_unreachable_insn(func, insn))
+                                       continue;
+
+                               WARN_FUNC("function has unreachable instruction", insn->sec, insn->offset);
+                               warnings++;
                        }
                }
        }
@@ -1133,6 +1176,10 @@ int cmd_check(int argc, const char **argv)
 
        INIT_LIST_HEAD(&file.insn_list);
        hash_init(file.insn_hash);
+       file.whitelist = find_section_by_name(file.elf, "__func_stack_frame_non_standard");
+       file.rodata = find_section_by_name(file.elf, ".rodata");
+       file.ignore_unreachables = false;
+       file.c_file = find_section_by_name(file.elf, ".comment");
 
        ret = decode_sections(&file);
        if (ret < 0)
index 407f11b..6175784 100644 (file)
@@ -1130,7 +1130,7 @@ static int intel_pt_synth_transaction_sample(struct intel_pt_queue *ptq)
                pr_err("Intel Processor Trace: failed to deliver transaction event, error %d\n",
                       ret);
 
-       if (pt->synth_opts.callchain)
+       if (pt->synth_opts.last_branch)
                intel_pt_reset_last_branch_rb(ptq);
 
        return ret;