Skip to content
代码片段 群组 项目

比较版本

更改显示为版本正在合并到目标版本。了解更多关于比较版本的信息。

来源

选择目标项目
No results found

目标

选择目标项目
  • wireguard/wireguard-linux-compat
  • apsara2825/wireguard-linux-compat
2 个结果
显示更改
源代码提交(23)
  • Jason A. Donenfeld's avatar
    receive: drop handshakes if queue lock is contended · e44c78cb
    Jason A. Donenfeld 创作于
    
    If we're being delivered packets from multiple CPUs so quickly that the
    ring lock is contended for CPU tries, then it's safe to assume that the
    queue is near capacity anyway, so just drop the packet rather than
    spinning. This helps deal with multicore DoS that can interfere with
    data path performance. It _still_ does not completely fix the issue, but
    it again chips away at it.
    
    Reported-by: default avatarStreun Fabio <fstreun@student.ethz.ch>
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    e44c78cb
  • Gustavo A. R. Silva's avatar
    ratelimiter: use kvcalloc() instead of kvzalloc() · 5325bc82
    Gustavo A. R. Silva 创作于
    
    Use 2-factor argument form kvcalloc() instead of kvzalloc().
    
    Signed-off-by: default avatarGustavo A. R. Silva <gustavoars@kernel.org>
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    5325bc82
  • Arnd Bergmann's avatar
    compat: siphash: use _unaligned version by default · ea6b8e7b
    Arnd Bergmann 创作于
    On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
    because the ordinary load/store instructions (ldr, ldrh, ldrb) can
    tolerate any misalignment of the memory address. However, load/store
    double and load/store multiple instructions (ldrd, ldm) may still only
    be used on memory addresses that are 32-bit aligned, and so we have to
    use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we
    may end up with a severe performance hit due to alignment traps that
    require fixups by the kernel. Testing shows that this currently happens
    with clang-13 but not gcc-11. In theory, any compiler version can
    produce this bug or other problems, as we are dealing with undefined
    behavior in C99 even on architectures that support this in hardware,
    see also https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100363
    
    .
    
    Fortunately, the get_unaligned() accessors do the right thing: when
    building for ARMv6 or later, the compiler will emit unaligned accesses
    using the ordinary load/store instructions (but avoid the ones that
    require 32-bit alignment). When building for older ARM, those accessors
    will emit the appropriate sequence of ldrb/mov/orr instructions. And on
    architectures that can truly tolerate any kind of misalignment, the
    get_unaligned() accessors resolve to the leXX_to_cpup accessors that
    operate on aligned addresses.
    
    Since the compiler will in fact emit ldrd or ldm instructions when
    building this code for ARM v6 or later, the solution is to use the
    unaligned accessors unconditionally on architectures where this is
    known to be fast. The _aligned version of the hash function is
    however still needed to get the best performance on architectures
    that cannot do any unaligned access in hardware.
    
    This new version avoids the undefined behavior and should produce
    the fastest hash on all architectures we support.
    
    Reported-by: default avatarArd Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: default avatarArnd Bergmann <arnd@arndb.de>
    Reviewed-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    Acked-by: default avatarArd Biesheuvel <ardb@kernel.org>
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    ea6b8e7b
  • Jason A. Donenfeld's avatar
    compat: udp_tunnel: don't take reference to non-init namespace · 8e40dd62
    Jason A. Donenfeld 创作于
    
    The comment to sk_change_net is instructive:
    
      Kernel sockets, f.e. rtnl or icmp_socket, are a part of a namespace.
      They should not hold a reference to a namespace in order to allow
      to stop it.
      Sockets after sk_change_net should be released using sk_release_kernel
    
    We weren't following these rules before, and were instead using
    __sock_create, which means we kept a reference to the namespace, which
    in turn meant that interfaces were not cleaned up on namespace
    exit.
    
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    8e40dd62
  • Mathias Krause's avatar
    crypto: curve25519-x86_64: solve register constraints with reserved registers · 3c9f3b69
    Mathias Krause 创作于
    
    The register constraints for the inline assembly in fsqr() and fsqr2()
    are pretty tight on what the compiler may assign to the remaining three
    register variables. The clobber list only allows the following to be
    used: RDI, RSI, RBP and R12. With RAP reserving R12 and a kernel having
    CONFIG_FRAME_POINTER=y, claiming RBP, there are only two registers left
    so the compiler rightfully complains about impossible constraints.
    
    Provide alternatives that'll allow a memory reference for 'out' to solve
    the allocation constraint dilemma for this configuration.
    
    Also make 'out' an input-only operand as it is only used as such. This
    not only allows gcc to optimize its usage further, but also works around
    older gcc versions, apparently failing to handle multiple alternatives
    correctly, as in failing to initialize the 'out' operand with its input
    value.
    
    Signed-off-by: default avatarMathias Krause <minipli@grsecurity.net>
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    3c9f3b69
  • Jason A. Donenfeld's avatar
    version: bump · 743eef23
    Jason A. Donenfeld 创作于
    
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    743eef23
  • Jason A. Donenfeld's avatar
    compat: drop Ubuntu 14.04 · 4f4c0198
    Jason A. Donenfeld 创作于
    It's been over a year since we announced sunsetting this.
    
    Link: https://lore.kernel.org/wireguard/CAHmME9rckipsdZYW+LA=x6wCMybdFFA+VqoogFXnR=kHYiCteg@mail.gmail.com/T
    
    
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    4f4c0198
  • Jason A. Donenfeld's avatar
    crypto: curve25519-x86_64: use in/out register constraints more precisely · 273018b7
    Jason A. Donenfeld 创作于
    Rather than passing all variables as modified, pass ones that are only
    read into that parameter. This helps with old gcc versions when
    alternatives are additionally used, and lets gcc's codegen be a little
    bit more efficient. This also syncs up with the latest Vale/EverCrypt
    output.
    
    This also forward ports 3c9f3b69 ("crypto: curve25519-x86_64: solve
    register constraints with reserved registers").
    
    Cc: Aymeric Fromherz <aymeric.fromherz@inria.fr>
    Cc: Mathias Krause <minipli@grsecurity.net>
    Link: https://lore.kernel.org/wireguard/1554725710.1290070.1639240504281.JavaMail.zimbra@inria.fr/
    Link: https://github.com/project-everest/hacl-star/pull/501
    
    
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    273018b7
  • Jason A. Donenfeld's avatar
    queueing: use CFI-safe ptr_ring cleanup function · 4eff63d2
    Jason A. Donenfeld 创作于
    
    We make too nuanced use of ptr_ring to entirely move to the skb_array
    wrappers, but we at least should avoid the naughty function pointer cast
    when cleaning up skbs. Otherwise RAP/CFI will honk at us. This patch
    uses the __skb_array_destroy_skb wrapper for the cleanup, rather than
    directly providing kfree_skb, which is what other drivers in the same
    situation do too.
    
    Reported-by: default avatarPaX Team <pageexec@freemail.hu>
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    4eff63d2
  • Jason A. Donenfeld's avatar
    qemu: simplify RNG seeding · ffb8cd62
    Jason A. Donenfeld 创作于
    
    We don't actualy need to write anything in the pool. Instead, we just
    force the total over 128, and we should be good to go for all old
    kernels. We also only need this on getrandom() kernels, which simplifies
    things too.
    
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    ffb8cd62
  • Wang Hai's avatar
    socket: free skb in send6 when ipv6 is disabled · fa32671b
    Wang Hai 创作于
    
    I got a memory leak report:
    
    unreferenced object 0xffff8881191fc040 (size 232):
      comm "kworker/u17:0", pid 23193, jiffies 4295238848 (age 3464.870s)
      hex dump (first 32 bytes):
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      backtrace:
        [<ffffffff814c3ef4>] slab_post_alloc_hook+0x84/0x3b0
        [<ffffffff814c8977>] kmem_cache_alloc_node+0x167/0x340
        [<ffffffff832974fb>] __alloc_skb+0x1db/0x200
        [<ffffffff82612b5d>] wg_socket_send_buffer_to_peer+0x3d/0xc0
        [<ffffffff8260e94a>] wg_packet_send_handshake_initiation+0xfa/0x110
        [<ffffffff8260ec81>] wg_packet_handshake_send_worker+0x21/0x30
        [<ffffffff8119c558>] process_one_work+0x2e8/0x770
        [<ffffffff8119ca2a>] worker_thread+0x4a/0x4b0
        [<ffffffff811a88e0>] kthread+0x120/0x160
        [<ffffffff8100242f>] ret_from_fork+0x1f/0x30
    
    In function wg_socket_send_buffer_as_reply_to_skb() or wg_socket_send_
    buffer_to_peer(), the semantics of send6() is required to free skb. But
    when CONFIG_IPV6 is disable, kfree_skb() is missing. This patch adds it
    to fix this bug.
    
    Signed-off-by: default avatarWang Hai <wanghai38@huawei.com>
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    fa32671b
  • Jason A. Donenfeld's avatar
    socket: ignore v6 endpoints when ipv6 is disabled · ec89ca64
    Jason A. Donenfeld 创作于
    
    The previous commit fixed a memory leak on the send path in the event
    that IPv6 is disabled at compile time, but how did a packet even arrive
    there to begin with? It turns out we have previously allowed IPv6
    endpoints even when IPv6 support is disabled at compile time. This is
    awkward and inconsistent. Instead, let's just ignore all things IPv6,
    the same way we do other malformed endpoints, in the case where IPv6 is
    disabled.
    
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    ec89ca64
  • Jason A. Donenfeld's avatar
    qemu: enable ACPI for SMP · f909532a
    Jason A. Donenfeld 创作于
    
    It turns out that by having CONFIG_ACPI=n, we've been failing to boot
    additional CPUs, and so these systems were functionally UP. The code
    bloat is unfortunate for build times, but I don't see an alternative. So
    this commit sets CONFIG_ACPI=y for x86_64 and i686 configs.
    
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    f909532a
  • Nikolay Aleksandrov's avatar
    device: check for metadata_dst with skb_valid_dst() · f9d9b4db
    Nikolay Aleksandrov 创作于
    When we try to transmit an skb with md_dst attached through wireguard
    we hit a null pointer dereference in wg_xmit() due to the use of
    dst_mtu() which calls into dst_blackhole_mtu() which in turn tries to
    dereference dst->dev.
    
    Since wireguard doesn't use md_dsts we should use skb_valid_dst(), which
    checks for DST_METADATA flag, and if it's set, then falls back to
    wireguard's device mtu. That gives us the best chance of transmitting
    the packet; otherwise if the blackhole netdev is used we'd get
    ETH_MIN_MTU.
    
     [  263.693506] BUG: kernel NULL pointer dereference, address: 00000000000000e0
     [  263.693908] #PF: supervisor read access in kernel mode
     [  263.694174] #PF: error_code(0x0000) - not-present page
     [  263.694424] PGD 0 P4D 0
     [  263.694653] Oops: 0000 [#1] PREEMPT SMP NOPTI
     [  263.694876] CPU: 5 PID: 951 Comm: mausezahn Kdump: loaded Not tainted 5.18.0-rc1+ #522
     [  263.695190] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1.fc35 04/01/2014
     [  263.695529] RIP: 0010:dst_blackhole_mtu+0x17/0x20
     [  263.695770] Code: 00 00 00 0f 1f 44 00 00 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 8b 47 10 48 83 e0 fc 8b 40 04 85 c0 75 09 48 8b 07 <8b> 80 e0 00 00 00 c3 66 90 0f 1f 44 00 00 48 89 d7 be 01 00 00 00
     [  263.696339] RSP: 0018:ffffa4a4422fbb28 EFLAGS: 00010246
     [  263.696600] RAX: 0000000000000000 RBX: ffff8ac9c3553000 RCX: 0000000000000000
     [  263.696891] RDX: 0000000000000401 RSI: 00000000fffffe01 RDI: ffffc4a43fb48900
     [  263.697178] RBP: ffffa4a4422fbb90 R08: ffffffff9622635e R09: 0000000000000002
     [  263.697469] R10: ffffffff9b69a6c0 R11: ffffa4a4422fbd0c R12: ffff8ac9d18b1a00
     [  263.697766] R13: ffff8ac9d0ce1840 R14: ffff8ac9d18b1a00 R15: ffff8ac9c3553000
     [  263.698054] FS:  00007f3704c337c0(0000) GS:ffff8acaebf40000(0000) knlGS:0000000000000000
     [  263.698470] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     [  263.698826] CR2: 00000000000000e0 CR3: 0000000117a5c000 CR4: 00000000000006e0
     [  263.699214] Call Trace:
     [  263.699505]  <TASK>
     [  263.699759]  wg_xmit+0x411/0x450
     [  263.700059]  ? bpf_skb_set_tunnel_key+0x46/0x2d0
     [   263.700382]  ? dev_queue_xmit_nit+0x31/0x2b0
     [  263.700719]  dev_hard_start_xmit+0xd9/0x220
     [  263.701047]  __dev_queue_xmit+0x8b9/0xd30
     [  263.701344]  __bpf_redirect+0x1a4/0x380
     [  263.701664]  __dev_queue_xmit+0x83b/0xd30
     [  263.701961]  ? packet_parse_headers+0xb4/0xf0
     [  263.702275]  packet_sendmsg+0x9a8/0x16a0
     [  263.702596]  ? _raw_spin_unlock_irqrestore+0x23/0x40
     [  263.702933]  sock_sendmsg+0x5e/0x60
     [  263.703239]  __sys_sendto+0xf0/0x160
     [  263.703549]  __x64_sys_sendto+0x20/0x30
     [  263.703853]  do_syscall_64+0x3b/0x90
     [  263.704162]  entry_SYSCALL_64_after_hwframe+0x44/0xae
     [  263.704494] RIP: 0033:0x7f3704d50506
     [  263.704789] Code: 48 c7 c0 ff ff ff ff eb b7 66 2e 0f 1f 84 00 00 00 00 00 90 41 89 ca 64 8b 04 25 18 00 00 00 85 c0 75 11 b8 2c 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 72 c3 90 55 48 83 ec 30 44 89 4c 24 2c 4c 89
     [  263.705652] RSP: 002b:00007ffe954b0b88 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
     [  263.706141] RAX: ffffffffffffffda RBX: 0000558bb259b490 RCX: 00007f3704d50506
     [  263.706544] RDX: 000000000000004a RSI: 0000558bb259b7b2 RDI: 0000000000000003
     [  263.706952] RBP: 0000000000000000 R08: 00007ffe954b0b90 R09: 0000000000000014
     [  263.707339] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffe954b0b90
     [  263.707735] R13: 000000000000004a R14: 0000558bb259b7b2 R15: 0000000000000001
     [  263.708132]  </TASK>
     [  263.708398] Modules linked in: bridge netconsole bonding [last unloaded: bridge]
     [  263.708942] CR2: 00000000000000e0
    
    Link: https://github.com/cilium/cilium/issues/19428
    
    
    Reported-by: default avatarMartynas Pumputis <m@lambda.lt>
    Signed-off-by: default avatarNikolay Aleksandrov <razor@blackwall.org>
    Acked-by: default avatarDaniel Borkmann <daniel@iogearbox.net>
    [Jason: polyfilled for < 4.3]
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    f9d9b4db
  • Jason A. Donenfeld's avatar
    netns: make routing loop test non-fatal · f8886735
    Jason A. Donenfeld 创作于
    I hate to do this, but I still do not have a good solution to actually
    fix this bug across architectures. So just disable it for now, so that
    the CI can still deliver actionable results. This commit adds a large
    red warning, so that at least the failure isn't lost forever, and
    hopefully this can be revisited down the line.
    
    Link: https://lore.kernel.org/netdev/CAHmME9pv1x6C4TNdL6648HydD8r+txpV4hTUXOBVkrapBXH4QQ@mail.gmail.com/
    Link: https://lore.kernel.org/netdev/YmszSXueTxYOC41G@zx2c4.com/
    Link: https://lore.kernel.org/wireguard/CAHmME9rNnBiNvBstb7MPwK-7AmAN0sOfnhdR=eeLrowWcKxaaQ@mail.gmail.com/
    
    
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    f8886735
  • Jason A. Donenfeld's avatar
    netns: limit parallelism to $(nproc) tests at once · 894152a5
    Jason A. Donenfeld 创作于
    
    The parallel tests were added to catch queueing issues from multiple
    cores. But what happens in reality when testing tons of processes is
    that these separate threads wind up fighting with the scheduler, and we
    wind up with contention in places we don't care about that decrease the
    chances of hitting a bug. So just do a test with the number of CPU
    cores, rather than trying to scale up arbitrarily.
    
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    894152a5
  • Jason A. Donenfeld's avatar
    qemu: use vports on arm · 33c87a11
    Jason A. Donenfeld 创作于
    
    Rather than having to hack up QEMU, just use the virtio serial device.
    
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    33c87a11
  • Jason A. Donenfeld's avatar
    qemu: set panic_on_warn=1 from cmdline · c7560fd0
    Jason A. Donenfeld 创作于
    
    Rather than setting this once init is running, set panic_on_warn from
    the kernel command line, so that it catches splats from WireGuard
    initialization code and the various crypto selftests.
    
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    c7560fd0
  • Jason A. Donenfeld's avatar
    qemu: give up on RHEL8 in CI · ba45dd6f
    Jason A. Donenfeld 创作于
    
    They keep breaking their kernel and being difficult when I send patches
    to fix it, so just give up on trying to support this in the CI. It'll
    bitrot and people will complain and we'll see what happens at that
    point.
    
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    ba45dd6f
  • Jason A. Donenfeld's avatar
    3ec3e822
  • Jason A. Donenfeld's avatar
    version: bump · 18fbcd68
    Jason A. Donenfeld 创作于
    
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    18fbcd68
  • Jason A. Donenfeld's avatar
    compat: do not backport ktime_get_coarse_boottime_ns to c8s · 99935b07
    Jason A. Donenfeld 创作于
    
    Also bump the c8s version stamp.
    
    Reported-by: default avatarVladimír Beneš <vbenes@redhat.com>
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    99935b07
  • Jason A. Donenfeld's avatar
    compat: drop CentOS 8 Stream support · 3d3c92b4
    Jason A. Donenfeld 创作于
    Nobody uses this and it's impossible to maintain given the current CI
    situation.
    
    RHEL 7 and 8 release remain for now, though that might not always be the
    case. See the link for details.
    
    Link: https://lists.zx2c4.com/pipermail/wireguard/2022-June/007664.html
    
    
    Suggested-by: default avatarPhilip J. Perry <phil@elrepo.org>
    Signed-off-by: default avatarJason A. Donenfeld <Jason@zx2c4.com>
    3d3c92b4
显示
662 个添加382 个删除
...@@ -12,6 +12,10 @@ ifeq ($(wildcard $(srctree)/include/linux/ptr_ring.h),) ...@@ -12,6 +12,10 @@ ifeq ($(wildcard $(srctree)/include/linux/ptr_ring.h),)
ccflags-y += -I$(kbuild-dir)/compat/ptr_ring/include ccflags-y += -I$(kbuild-dir)/compat/ptr_ring/include
endif endif
ifeq ($(wildcard $(srctree)/include/linux/skb_array.h),)
ccflags-y += -I$(kbuild-dir)/compat/skb_array/include
endif
ifeq ($(wildcard $(srctree)/include/linux/siphash.h),) ifeq ($(wildcard $(srctree)/include/linux/siphash.h),)
ccflags-y += -I$(kbuild-dir)/compat/siphash/include ccflags-y += -I$(kbuild-dir)/compat/siphash/include
wireguard-y += compat/siphash/siphash.o wireguard-y += compat/siphash/siphash.o
...@@ -65,6 +69,10 @@ ifeq ($(wildcard $(srctree)/arch/arm64/include/asm/neon.h)$(CONFIG_ARM64),y) ...@@ -65,6 +69,10 @@ ifeq ($(wildcard $(srctree)/arch/arm64/include/asm/neon.h)$(CONFIG_ARM64),y)
ccflags-y += -I$(kbuild-dir)/compat/neon-arm/include ccflags-y += -I$(kbuild-dir)/compat/neon-arm/include
endif endif
ifeq ($(wildcard $(srctree)/include/net/dst_metadata.h),)
ccflags-y += -I$(kbuild-dir)/compat/dstmetadata/include
endif
ifeq ($(CONFIG_X86_64),y) ifeq ($(CONFIG_X86_64),y)
ifeq ($(ssse3_instr),) ifeq ($(ssse3_instr),)
ssse3_instr := $(call as-instr,pshufb %xmm0$(comma)%xmm0,-DCONFIG_AS_SSSE3=1) ssse3_instr := $(call as-instr,pshufb %xmm0$(comma)%xmm0,-DCONFIG_AS_SSSE3=1)
......
...@@ -15,9 +15,6 @@ ...@@ -15,9 +15,6 @@
#define ISRHEL7 #define ISRHEL7
#elif RHEL_MAJOR == 8 #elif RHEL_MAJOR == 8
#define ISRHEL8 #define ISRHEL8
#if RHEL_MINOR >= 6
#define ISCENTOS8S
#endif
#endif #endif
#endif #endif
......
...@@ -16,15 +16,10 @@ ...@@ -16,15 +16,10 @@
#define ISRHEL7 #define ISRHEL7
#elif RHEL_MAJOR == 8 #elif RHEL_MAJOR == 8
#define ISRHEL8 #define ISRHEL8
#if RHEL_MINOR >= 6
#define ISCENTOS8S
#endif
#endif #endif
#endif #endif
#ifdef UTS_UBUNTU_RELEASE_ABI #ifdef UTS_UBUNTU_RELEASE_ABI
#if LINUX_VERSION_CODE == KERNEL_VERSION(3, 13, 11) #if LINUX_VERSION_CODE < KERNEL_VERSION(4, 5, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 4, 0)
#define ISUBUNTU1404
#elif LINUX_VERSION_CODE < KERNEL_VERSION(4, 5, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 4, 0)
#define ISUBUNTU1604 #define ISUBUNTU1604
#elif LINUX_VERSION_CODE < KERNEL_VERSION(4, 16, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 15, 0) #elif LINUX_VERSION_CODE < KERNEL_VERSION(4, 16, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 15, 0)
#define ISUBUNTU1804 #define ISUBUNTU1804
...@@ -219,7 +214,7 @@ static inline void skb_scrub_packet(struct sk_buff *skb, bool xnet) ...@@ -219,7 +214,7 @@ static inline void skb_scrub_packet(struct sk_buff *skb, bool xnet)
#define skb_scrub_packet(a, b) skb_scrub_packet(a) #define skb_scrub_packet(a, b) skb_scrub_packet(a)
#endif #endif
#if ((LINUX_VERSION_CODE < KERNEL_VERSION(3, 14, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(3, 13, 0)) || LINUX_VERSION_CODE < KERNEL_VERSION(3, 12, 63) || defined(ISUBUNTU1404)) && !defined(ISRHEL7) #if ((LINUX_VERSION_CODE < KERNEL_VERSION(3, 14, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(3, 13, 0)) || LINUX_VERSION_CODE < KERNEL_VERSION(3, 12, 63)) && !defined(ISRHEL7)
#include <linux/random.h> #include <linux/random.h>
static inline u32 __compat_prandom_u32_max(u32 ep_ro) static inline u32 __compat_prandom_u32_max(u32 ep_ro)
{ {
...@@ -268,7 +263,7 @@ static inline u32 __compat_prandom_u32_max(u32 ep_ro) ...@@ -268,7 +263,7 @@ static inline u32 __compat_prandom_u32_max(u32 ep_ro)
#endif #endif
#endif #endif
#if (LINUX_VERSION_CODE < KERNEL_VERSION(3, 17, 3) && LINUX_VERSION_CODE >= KERNEL_VERSION(3, 17, 0)) || (LINUX_VERSION_CODE < KERNEL_VERSION(3, 16, 35) && LINUX_VERSION_CODE >= KERNEL_VERSION(3, 15, 0)) || (LINUX_VERSION_CODE < KERNEL_VERSION(3, 14, 24) && LINUX_VERSION_CODE >= KERNEL_VERSION(3, 13, 0) && !defined(ISUBUNTU1404)) || (LINUX_VERSION_CODE < KERNEL_VERSION(3, 12, 33) && LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0)) || (LINUX_VERSION_CODE < KERNEL_VERSION(3, 10, 60) && !defined(ISRHEL7)) #if (LINUX_VERSION_CODE < KERNEL_VERSION(3, 17, 3) && LINUX_VERSION_CODE >= KERNEL_VERSION(3, 17, 0)) || (LINUX_VERSION_CODE < KERNEL_VERSION(3, 16, 35) && LINUX_VERSION_CODE >= KERNEL_VERSION(3, 15, 0)) || (LINUX_VERSION_CODE < KERNEL_VERSION(3, 14, 24) && LINUX_VERSION_CODE >= KERNEL_VERSION(3, 13, 0)) || (LINUX_VERSION_CODE < KERNEL_VERSION(3, 12, 33) && LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0)) || (LINUX_VERSION_CODE < KERNEL_VERSION(3, 10, 60) && !defined(ISRHEL7))
static inline void memzero_explicit(void *s, size_t count) static inline void memzero_explicit(void *s, size_t count)
{ {
memset(s, 0, count); memset(s, 0, count);
...@@ -281,7 +276,7 @@ static const struct in6_addr __compat_in6addr_any = IN6ADDR_ANY_INIT; ...@@ -281,7 +276,7 @@ static const struct in6_addr __compat_in6addr_any = IN6ADDR_ANY_INIT;
#define in6addr_any __compat_in6addr_any #define in6addr_any __compat_in6addr_any
#endif #endif
#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 13, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 2, 0) #if LINUX_VERSION_CODE < KERNEL_VERSION(4, 13, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 2, 0) && (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0) || LINUX_VERSION_CODE < KERNEL_VERSION(4, 9, 320))
#include <linux/completion.h> #include <linux/completion.h>
#include <linux/random.h> #include <linux/random.h>
#include <linux/errno.h> #include <linux/errno.h>
...@@ -325,7 +320,7 @@ static inline int wait_for_random_bytes(void) ...@@ -325,7 +320,7 @@ static inline int wait_for_random_bytes(void)
} }
#endif #endif
#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 19, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 2, 0) && !defined(ISRHEL8) #if LINUX_VERSION_CODE < KERNEL_VERSION(4, 19, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 2, 0) && (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 15, 0) || LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 285)) && (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0) || LINUX_VERSION_CODE < KERNEL_VERSION(4, 9, 320)) && !defined(ISRHEL8)
#include <linux/random.h> #include <linux/random.h>
#include <linux/slab.h> #include <linux/slab.h>
struct rng_is_initialized_callback { struct rng_is_initialized_callback {
...@@ -377,7 +372,7 @@ static inline bool rng_is_initialized(void) ...@@ -377,7 +372,7 @@ static inline bool rng_is_initialized(void)
} }
#endif #endif
#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 13, 0) #if LINUX_VERSION_CODE < KERNEL_VERSION(4, 13, 0) && (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0) || LINUX_VERSION_CODE < KERNEL_VERSION(4, 9, 320))
static inline int get_random_bytes_wait(void *buf, int nbytes) static inline int get_random_bytes_wait(void *buf, int nbytes)
{ {
int ret = wait_for_random_bytes(); int ret = wait_for_random_bytes();
...@@ -502,7 +497,7 @@ static inline void *__compat_kvzalloc(size_t size, gfp_t flags) ...@@ -502,7 +497,7 @@ static inline void *__compat_kvzalloc(size_t size, gfp_t flags)
#define kvzalloc __compat_kvzalloc #define kvzalloc __compat_kvzalloc
#endif #endif
#if ((LINUX_VERSION_CODE < KERNEL_VERSION(3, 15, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(3, 13, 0)) || LINUX_VERSION_CODE < KERNEL_VERSION(3, 12, 41)) && !defined(ISUBUNTU1404) #if ((LINUX_VERSION_CODE < KERNEL_VERSION(3, 15, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(3, 13, 0)) || LINUX_VERSION_CODE < KERNEL_VERSION(3, 12, 41))
#include <linux/vmalloc.h> #include <linux/vmalloc.h>
#include <linux/mm.h> #include <linux/mm.h>
static inline void __compat_kvfree(const void *addr) static inline void __compat_kvfree(const void *addr)
...@@ -515,6 +510,28 @@ static inline void __compat_kvfree(const void *addr) ...@@ -515,6 +510,28 @@ static inline void __compat_kvfree(const void *addr)
#define kvfree __compat_kvfree #define kvfree __compat_kvfree
#endif #endif
#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 12, 0)
#include <linux/vmalloc.h>
#include <linux/mm.h>
static inline void *__compat_kvmalloc_array(size_t n, size_t size, gfp_t flags)
{
if (n != 0 && SIZE_MAX / n < size)
return NULL;
return kvmalloc(n * size, flags);
}
#define kvmalloc_array __compat_kvmalloc_array
#endif
#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 18, 0)
#include <linux/vmalloc.h>
#include <linux/mm.h>
static inline void *__compat_kvcalloc(size_t n, size_t size, gfp_t flags)
{
return kvmalloc_array(n, size, flags | __GFP_ZERO);
}
#define kvcalloc __compat_kvcalloc
#endif
#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 11, 9) #if LINUX_VERSION_CODE < KERNEL_VERSION(4, 11, 9)
#include <linux/netdevice.h> #include <linux/netdevice.h>
#define priv_destructor destructor #define priv_destructor destructor
...@@ -704,7 +721,7 @@ static inline void *skb_put_data(struct sk_buff *skb, const void *data, unsigned ...@@ -704,7 +721,7 @@ static inline void *skb_put_data(struct sk_buff *skb, const void *data, unsigned
#endif #endif
#endif #endif
#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 17, 0) #if LINUX_VERSION_CODE < KERNEL_VERSION(4, 17, 0) && (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 15, 0) || LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 285)) && (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0) || LINUX_VERSION_CODE < KERNEL_VERSION(4, 9, 320))
static inline void le32_to_cpu_array(u32 *buf, unsigned int words) static inline void le32_to_cpu_array(u32 *buf, unsigned int words)
{ {
while (words--) { while (words--) {
...@@ -875,11 +892,13 @@ static inline void skb_mark_not_on_list(struct sk_buff *skb) ...@@ -875,11 +892,13 @@ static inline void skb_mark_not_on_list(struct sk_buff *skb)
#endif #endif
#endif #endif
#if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 5, 0) #if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 4, 200) || (LINUX_VERSION_CODE < KERNEL_VERSION(4, 20, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 19, 249)) || (LINUX_VERSION_CODE < KERNEL_VERSION(4, 15, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 14, 285)) || (LINUX_VERSION_CODE < KERNEL_VERSION(4, 10, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 9, 320))
#define blake2s_init zinc_blake2s_init #define blake2s_init zinc_blake2s_init
#define blake2s_init_key zinc_blake2s_init_key #define blake2s_init_key zinc_blake2s_init_key
#define blake2s_update zinc_blake2s_update #define blake2s_update zinc_blake2s_update
#define blake2s_final zinc_blake2s_final #define blake2s_final zinc_blake2s_final
#endif
#if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 5, 0)
#define blake2s_hmac zinc_blake2s_hmac #define blake2s_hmac zinc_blake2s_hmac
#define chacha20 zinc_chacha20 #define chacha20 zinc_chacha20
#define hchacha20 zinc_hchacha20 #define hchacha20 zinc_hchacha20
......
#ifndef skb_valid_dst
#define skb_valid_dst(skb) (!!skb_dst(skb))
#endif
...@@ -22,9 +22,7 @@ typedef struct { ...@@ -22,9 +22,7 @@ typedef struct {
} siphash_key_t; } siphash_key_t;
u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key); u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key);
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key); u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key);
#endif
u64 siphash_1u64(const u64 a, const siphash_key_t *key); u64 siphash_1u64(const u64 a, const siphash_key_t *key);
u64 siphash_2u64(const u64 a, const u64 b, const siphash_key_t *key); u64 siphash_2u64(const u64 a, const u64 b, const siphash_key_t *key);
...@@ -77,10 +75,9 @@ static inline u64 ___siphash_aligned(const __le64 *data, size_t len, ...@@ -77,10 +75,9 @@ static inline u64 ___siphash_aligned(const __le64 *data, size_t len,
static inline u64 siphash(const void *data, size_t len, static inline u64 siphash(const void *data, size_t len,
const siphash_key_t *key) const siphash_key_t *key)
{ {
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
if (!IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT)) !IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT))
return __siphash_unaligned(data, len, key); return __siphash_unaligned(data, len, key);
#endif
return ___siphash_aligned(data, len, key); return ___siphash_aligned(data, len, key);
} }
...@@ -91,10 +88,8 @@ typedef struct { ...@@ -91,10 +88,8 @@ typedef struct {
u32 __hsiphash_aligned(const void *data, size_t len, u32 __hsiphash_aligned(const void *data, size_t len,
const hsiphash_key_t *key); const hsiphash_key_t *key);
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
u32 __hsiphash_unaligned(const void *data, size_t len, u32 __hsiphash_unaligned(const void *data, size_t len,
const hsiphash_key_t *key); const hsiphash_key_t *key);
#endif
u32 hsiphash_1u32(const u32 a, const hsiphash_key_t *key); u32 hsiphash_1u32(const u32 a, const hsiphash_key_t *key);
u32 hsiphash_2u32(const u32 a, const u32 b, const hsiphash_key_t *key); u32 hsiphash_2u32(const u32 a, const u32 b, const hsiphash_key_t *key);
...@@ -130,10 +125,9 @@ static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len, ...@@ -130,10 +125,9 @@ static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len,
static inline u32 hsiphash(const void *data, size_t len, static inline u32 hsiphash(const void *data, size_t len,
const hsiphash_key_t *key) const hsiphash_key_t *key)
{ {
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
if (!IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT)) !IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT))
return __hsiphash_unaligned(data, len, key); return __hsiphash_unaligned(data, len, key);
#endif
return ___hsiphash_aligned(data, len, key); return ___hsiphash_aligned(data, len, key);
} }
......
...@@ -57,6 +57,7 @@ ...@@ -57,6 +57,7 @@
SIPROUND; \ SIPROUND; \
return (v0 ^ v1) ^ (v2 ^ v3); return (v0 ^ v1) ^ (v2 ^ v3);
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key) u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key)
{ {
const u8 *end = data + len - (len % sizeof(u64)); const u8 *end = data + len - (len % sizeof(u64));
...@@ -76,19 +77,19 @@ u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key) ...@@ -76,19 +77,19 @@ u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key)
bytemask_from_count(left))); bytemask_from_count(left)));
#else #else
switch (left) { switch (left) {
case 7: b |= ((u64)end[6]) << 48; case 7: b |= ((u64)end[6]) << 48; fallthrough;
case 6: b |= ((u64)end[5]) << 40; case 6: b |= ((u64)end[5]) << 40; fallthrough;
case 5: b |= ((u64)end[4]) << 32; case 5: b |= ((u64)end[4]) << 32; fallthrough;
case 4: b |= le32_to_cpup(data); break; case 4: b |= le32_to_cpup(data); break;
case 3: b |= ((u64)end[2]) << 16; case 3: b |= ((u64)end[2]) << 16; fallthrough;
case 2: b |= le16_to_cpup(data); break; case 2: b |= le16_to_cpup(data); break;
case 1: b |= end[0]; case 1: b |= end[0];
} }
#endif #endif
POSTAMBLE POSTAMBLE
} }
#endif
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key) u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key)
{ {
const u8 *end = data + len - (len % sizeof(u64)); const u8 *end = data + len - (len % sizeof(u64));
...@@ -108,18 +109,17 @@ u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key) ...@@ -108,18 +109,17 @@ u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key)
bytemask_from_count(left))); bytemask_from_count(left)));
#else #else
switch (left) { switch (left) {
case 7: b |= ((u64)end[6]) << 48; case 7: b |= ((u64)end[6]) << 48; fallthrough;
case 6: b |= ((u64)end[5]) << 40; case 6: b |= ((u64)end[5]) << 40; fallthrough;
case 5: b |= ((u64)end[4]) << 32; case 5: b |= ((u64)end[4]) << 32; fallthrough;
case 4: b |= get_unaligned_le32(end); break; case 4: b |= get_unaligned_le32(end); break;
case 3: b |= ((u64)end[2]) << 16; case 3: b |= ((u64)end[2]) << 16; fallthrough;
case 2: b |= get_unaligned_le16(end); break; case 2: b |= get_unaligned_le16(end); break;
case 1: b |= end[0]; case 1: b |= end[0];
} }
#endif #endif
POSTAMBLE POSTAMBLE
} }
#endif
/** /**
* siphash_1u64 - compute 64-bit siphash PRF value of a u64 * siphash_1u64 - compute 64-bit siphash PRF value of a u64
...@@ -250,6 +250,7 @@ u64 siphash_3u32(const u32 first, const u32 second, const u32 third, ...@@ -250,6 +250,7 @@ u64 siphash_3u32(const u32 first, const u32 second, const u32 third,
HSIPROUND; \ HSIPROUND; \
return (v0 ^ v1) ^ (v2 ^ v3); return (v0 ^ v1) ^ (v2 ^ v3);
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key) u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)
{ {
const u8 *end = data + len - (len % sizeof(u64)); const u8 *end = data + len - (len % sizeof(u64));
...@@ -268,19 +269,19 @@ u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key) ...@@ -268,19 +269,19 @@ u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)
bytemask_from_count(left))); bytemask_from_count(left)));
#else #else
switch (left) { switch (left) {
case 7: b |= ((u64)end[6]) << 48; case 7: b |= ((u64)end[6]) << 48; fallthrough;
case 6: b |= ((u64)end[5]) << 40; case 6: b |= ((u64)end[5]) << 40; fallthrough;
case 5: b |= ((u64)end[4]) << 32; case 5: b |= ((u64)end[4]) << 32; fallthrough;
case 4: b |= le32_to_cpup(data); break; case 4: b |= le32_to_cpup(data); break;
case 3: b |= ((u64)end[2]) << 16; case 3: b |= ((u64)end[2]) << 16; fallthrough;
case 2: b |= le16_to_cpup(data); break; case 2: b |= le16_to_cpup(data); break;
case 1: b |= end[0]; case 1: b |= end[0];
} }
#endif #endif
HPOSTAMBLE HPOSTAMBLE
} }
#endif
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
u32 __hsiphash_unaligned(const void *data, size_t len, u32 __hsiphash_unaligned(const void *data, size_t len,
const hsiphash_key_t *key) const hsiphash_key_t *key)
{ {
...@@ -300,18 +301,17 @@ u32 __hsiphash_unaligned(const void *data, size_t len, ...@@ -300,18 +301,17 @@ u32 __hsiphash_unaligned(const void *data, size_t len,
bytemask_from_count(left))); bytemask_from_count(left)));
#else #else
switch (left) { switch (left) {
case 7: b |= ((u64)end[6]) << 48; case 7: b |= ((u64)end[6]) << 48; fallthrough;
case 6: b |= ((u64)end[5]) << 40; case 6: b |= ((u64)end[5]) << 40; fallthrough;
case 5: b |= ((u64)end[4]) << 32; case 5: b |= ((u64)end[4]) << 32; fallthrough;
case 4: b |= get_unaligned_le32(end); break; case 4: b |= get_unaligned_le32(end); break;
case 3: b |= ((u64)end[2]) << 16; case 3: b |= ((u64)end[2]) << 16; fallthrough;
case 2: b |= get_unaligned_le16(end); break; case 2: b |= get_unaligned_le16(end); break;
case 1: b |= end[0]; case 1: b |= end[0];
} }
#endif #endif
HPOSTAMBLE HPOSTAMBLE
} }
#endif
/** /**
* hsiphash_1u32 - compute 64-bit hsiphash PRF value of a u32 * hsiphash_1u32 - compute 64-bit hsiphash PRF value of a u32
...@@ -412,6 +412,7 @@ u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third, ...@@ -412,6 +412,7 @@ u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third,
HSIPROUND; \ HSIPROUND; \
return v1 ^ v3; return v1 ^ v3;
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key) u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)
{ {
const u8 *end = data + len - (len % sizeof(u32)); const u8 *end = data + len - (len % sizeof(u32));
...@@ -425,14 +426,14 @@ u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key) ...@@ -425,14 +426,14 @@ u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)
v0 ^= m; v0 ^= m;
} }
switch (left) { switch (left) {
case 3: b |= ((u32)end[2]) << 16; case 3: b |= ((u32)end[2]) << 16; fallthrough;
case 2: b |= le16_to_cpup(data); break; case 2: b |= le16_to_cpup(data); break;
case 1: b |= end[0]; case 1: b |= end[0];
} }
HPOSTAMBLE HPOSTAMBLE
} }
#endif
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
u32 __hsiphash_unaligned(const void *data, size_t len, u32 __hsiphash_unaligned(const void *data, size_t len,
const hsiphash_key_t *key) const hsiphash_key_t *key)
{ {
...@@ -447,13 +448,12 @@ u32 __hsiphash_unaligned(const void *data, size_t len, ...@@ -447,13 +448,12 @@ u32 __hsiphash_unaligned(const void *data, size_t len,
v0 ^= m; v0 ^= m;
} }
switch (left) { switch (left) {
case 3: b |= ((u32)end[2]) << 16; case 3: b |= ((u32)end[2]) << 16; fallthrough;
case 2: b |= get_unaligned_le16(end); break; case 2: b |= get_unaligned_le16(end); break;
case 1: b |= end[0]; case 1: b |= end[0];
} }
HPOSTAMBLE HPOSTAMBLE
} }
#endif
/** /**
* hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32 * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32
......
#ifndef _WG_SKB_ARRAY_H
#define _WG_SKB_ARRAY_H
#include <linux/skbuff.h>
static void __skb_array_destroy_skb(void *ptr)
{
kfree_skb(ptr);
}
#endif
...@@ -38,9 +38,10 @@ int udp_sock_create4(struct net *net, struct udp_port_cfg *cfg, ...@@ -38,9 +38,10 @@ int udp_sock_create4(struct net *net, struct udp_port_cfg *cfg,
struct socket *sock = NULL; struct socket *sock = NULL;
struct sockaddr_in udp_addr; struct sockaddr_in udp_addr;
err = __sock_create(net, AF_INET, SOCK_DGRAM, 0, &sock, 1); err = sock_create_kern(AF_INET, SOCK_DGRAM, 0, &sock);
if (err < 0) if (err < 0)
goto error; goto error;
sk_change_net(sock->sk, net);
udp_addr.sin_family = AF_INET; udp_addr.sin_family = AF_INET;
udp_addr.sin_addr = cfg->local_ip; udp_addr.sin_addr = cfg->local_ip;
...@@ -72,7 +73,7 @@ int udp_sock_create4(struct net *net, struct udp_port_cfg *cfg, ...@@ -72,7 +73,7 @@ int udp_sock_create4(struct net *net, struct udp_port_cfg *cfg,
error: error:
if (sock) { if (sock) {
kernel_sock_shutdown(sock, SHUT_RDWR); kernel_sock_shutdown(sock, SHUT_RDWR);
sock_release(sock); sk_release_kernel(sock->sk);
} }
*sockp = NULL; *sockp = NULL;
return err; return err;
...@@ -229,7 +230,7 @@ void udp_tunnel_sock_release(struct socket *sock) ...@@ -229,7 +230,7 @@ void udp_tunnel_sock_release(struct socket *sock)
{ {
rcu_assign_sk_user_data(sock->sk, NULL); rcu_assign_sk_user_data(sock->sk, NULL);
kernel_sock_shutdown(sock, SHUT_RDWR); kernel_sock_shutdown(sock, SHUT_RDWR);
sock_release(sock); sk_release_kernel(sock->sk);
} }
#if IS_ENABLED(CONFIG_IPV6) #if IS_ENABLED(CONFIG_IPV6)
...@@ -254,9 +255,10 @@ int udp_sock_create6(struct net *net, struct udp_port_cfg *cfg, ...@@ -254,9 +255,10 @@ int udp_sock_create6(struct net *net, struct udp_port_cfg *cfg,
int err; int err;
struct socket *sock = NULL; struct socket *sock = NULL;
err = __sock_create(net, AF_INET6, SOCK_DGRAM, 0, &sock, 1); err = sock_create_kern(AF_INET6, SOCK_DGRAM, 0, &sock);
if (err < 0) if (err < 0)
goto error; goto error;
sk_change_net(sock->sk, net);
if (cfg->ipv6_v6only) { if (cfg->ipv6_v6only) {
int val = 1; int val = 1;
...@@ -301,7 +303,7 @@ int udp_sock_create6(struct net *net, struct udp_port_cfg *cfg, ...@@ -301,7 +303,7 @@ int udp_sock_create6(struct net *net, struct udp_port_cfg *cfg,
error: error:
if (sock) { if (sock) {
kernel_sock_shutdown(sock, SHUT_RDWR); kernel_sock_shutdown(sock, SHUT_RDWR);
sock_release(sock); sk_release_kernel(sock->sk);
} }
*sockp = NULL; *sockp = NULL;
return err; return err;
......
...@@ -19,6 +19,7 @@ ...@@ -19,6 +19,7 @@
#include <linux/if_arp.h> #include <linux/if_arp.h>
#include <linux/icmp.h> #include <linux/icmp.h>
#include <linux/suspend.h> #include <linux/suspend.h>
#include <net/dst_metadata.h>
#include <net/icmp.h> #include <net/icmp.h>
#include <net/rtnetlink.h> #include <net/rtnetlink.h>
#include <net/ip_tunnels.h> #include <net/ip_tunnels.h>
...@@ -160,7 +161,7 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev) ...@@ -160,7 +161,7 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev)
goto err_peer; goto err_peer;
} }
mtu = skb_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu; mtu = skb_valid_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
__skb_queue_head_init(&packets); __skb_queue_head_init(&packets);
if (!skb_is_gso(skb)) { if (!skb_is_gso(skb)) {
......
PACKAGE_NAME="wireguard" PACKAGE_NAME="wireguard"
PACKAGE_VERSION="1.0.20210606" PACKAGE_VERSION="1.0.20220627"
AUTOINSTALL=yes AUTOINSTALL=yes
BUILT_MODULE_NAME="wireguard" BUILT_MODULE_NAME="wireguard"
......
...@@ -4,6 +4,7 @@ ...@@ -4,6 +4,7 @@
*/ */
#include "queueing.h" #include "queueing.h"
#include <linux/skb_array.h>
struct multicore_worker __percpu * struct multicore_worker __percpu *
wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr) wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr)
...@@ -42,7 +43,7 @@ void wg_packet_queue_free(struct crypt_queue *queue, bool purge) ...@@ -42,7 +43,7 @@ void wg_packet_queue_free(struct crypt_queue *queue, bool purge)
{ {
free_percpu(queue->worker); free_percpu(queue->worker);
WARN_ON(!purge && !__ptr_ring_empty(&queue->ring)); WARN_ON(!purge && !__ptr_ring_empty(&queue->ring));
ptr_ring_cleanup(&queue->ring, purge ? (void(*)(void*))kfree_skb : NULL); ptr_ring_cleanup(&queue->ring, purge ? __skb_array_destroy_skb : NULL);
} }
#define NEXT(skb) ((skb)->prev) #define NEXT(skb) ((skb)->prev)
......
...@@ -188,12 +188,12 @@ int wg_ratelimiter_init(void) ...@@ -188,12 +188,12 @@ int wg_ratelimiter_init(void)
(1U << 14) / sizeof(struct hlist_head))); (1U << 14) / sizeof(struct hlist_head)));
max_entries = table_size * 8; max_entries = table_size * 8;
table_v4 = kvzalloc(table_size * sizeof(*table_v4), GFP_KERNEL); table_v4 = kvcalloc(table_size, sizeof(*table_v4), GFP_KERNEL);
if (unlikely(!table_v4)) if (unlikely(!table_v4))
goto err_kmemcache; goto err_kmemcache;
#if IS_ENABLED(CONFIG_IPV6) #if IS_ENABLED(CONFIG_IPV6)
table_v6 = kvzalloc(table_size * sizeof(*table_v6), GFP_KERNEL); table_v6 = kvcalloc(table_size, sizeof(*table_v6), GFP_KERNEL);
if (unlikely(!table_v6)) { if (unlikely(!table_v6)) {
kvfree(table_v4); kvfree(table_v4);
goto err_kmemcache; goto err_kmemcache;
......
...@@ -563,9 +563,19 @@ void wg_packet_receive(struct wg_device *wg, struct sk_buff *skb) ...@@ -563,9 +563,19 @@ void wg_packet_receive(struct wg_device *wg, struct sk_buff *skb)
case cpu_to_le32(MESSAGE_HANDSHAKE_INITIATION): case cpu_to_le32(MESSAGE_HANDSHAKE_INITIATION):
case cpu_to_le32(MESSAGE_HANDSHAKE_RESPONSE): case cpu_to_le32(MESSAGE_HANDSHAKE_RESPONSE):
case cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE): { case cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE): {
int cpu; int cpu, ret = -EBUSY;
if (unlikely(!rng_is_initialized() ||
ptr_ring_produce_bh(&wg->handshake_queue.ring, skb))) { if (unlikely(!rng_is_initialized()))
goto drop;
if (atomic_read(&wg->handshake_queue_len) > MAX_QUEUED_INCOMING_HANDSHAKES / 2) {
if (spin_trylock_bh(&wg->handshake_queue.ring.producer_lock)) {
ret = __ptr_ring_produce(&wg->handshake_queue.ring, skb);
spin_unlock_bh(&wg->handshake_queue.ring.producer_lock);
}
} else
ret = ptr_ring_produce_bh(&wg->handshake_queue.ring, skb);
if (ret) {
drop:
net_dbg_skb_ratelimited("%s: Dropping handshake packet from %pISpfsc\n", net_dbg_skb_ratelimited("%s: Dropping handshake packet from %pISpfsc\n",
wg->dev->name, skb); wg->dev->name, skb);
goto err; goto err;
......
...@@ -160,6 +160,7 @@ out: ...@@ -160,6 +160,7 @@ out:
rcu_read_unlock_bh(); rcu_read_unlock_bh();
return ret; return ret;
#else #else
kfree_skb(skb);
return -EAFNOSUPPORT; return -EAFNOSUPPORT;
#endif #endif
} }
...@@ -241,7 +242,7 @@ int wg_socket_endpoint_from_skb(struct endpoint *endpoint, ...@@ -241,7 +242,7 @@ int wg_socket_endpoint_from_skb(struct endpoint *endpoint,
endpoint->addr4.sin_addr.s_addr = ip_hdr(skb)->saddr; endpoint->addr4.sin_addr.s_addr = ip_hdr(skb)->saddr;
endpoint->src4.s_addr = ip_hdr(skb)->daddr; endpoint->src4.s_addr = ip_hdr(skb)->daddr;
endpoint->src_if4 = skb->skb_iif; endpoint->src_if4 = skb->skb_iif;
} else if (skb->protocol == htons(ETH_P_IPV6)) { } else if (IS_ENABLED(CONFIG_IPV6) && skb->protocol == htons(ETH_P_IPV6)) {
endpoint->addr6.sin6_family = AF_INET6; endpoint->addr6.sin6_family = AF_INET6;
endpoint->addr6.sin6_port = udp_hdr(skb)->source; endpoint->addr6.sin6_port = udp_hdr(skb)->source;
endpoint->addr6.sin6_addr = ipv6_hdr(skb)->saddr; endpoint->addr6.sin6_addr = ipv6_hdr(skb)->saddr;
...@@ -284,7 +285,7 @@ void wg_socket_set_peer_endpoint(struct wg_peer *peer, ...@@ -284,7 +285,7 @@ void wg_socket_set_peer_endpoint(struct wg_peer *peer,
peer->endpoint.addr4 = endpoint->addr4; peer->endpoint.addr4 = endpoint->addr4;
peer->endpoint.src4 = endpoint->src4; peer->endpoint.src4 = endpoint->src4;
peer->endpoint.src_if4 = endpoint->src_if4; peer->endpoint.src_if4 = endpoint->src_if4;
} else if (endpoint->addr.sa_family == AF_INET6) { } else if (IS_ENABLED(CONFIG_IPV6) && endpoint->addr.sa_family == AF_INET6) {
peer->endpoint.addr6 = endpoint->addr6; peer->endpoint.addr6 = endpoint->addr6;
peer->endpoint.src6 = endpoint->src6; peer->endpoint.src6 = endpoint->src6;
} else { } else {
......
...@@ -22,10 +22,12 @@ ...@@ -22,10 +22,12 @@
# interfaces in $ns1 and $ns2. See https://www.wireguard.com/netns/ for further # interfaces in $ns1 and $ns2. See https://www.wireguard.com/netns/ for further
# details on how this is accomplished. # details on how this is accomplished.
set -e set -e
shopt -s extglob
exec 3>&1 exec 3>&1
export LANG=C export LANG=C
export WG_HIDE_KEYS=never export WG_HIDE_KEYS=never
NPROC=( /sys/devices/system/cpu/cpu+([0-9]) ); NPROC=${#NPROC[@]}
netns0="wg-test-$$-0" netns0="wg-test-$$-0"
netns1="wg-test-$$-1" netns1="wg-test-$$-1"
netns2="wg-test-$$-2" netns2="wg-test-$$-2"
...@@ -147,17 +149,15 @@ tests() { ...@@ -147,17 +149,15 @@ tests() {
[[ $(< /proc/version) =~ ^Linux\ version\ 5\.4[.\ ] ]] || return 0 [[ $(< /proc/version) =~ ^Linux\ version\ 5\.4[.\ ] ]] || return 0
# TCP over IPv4, in parallel # TCP over IPv4, in parallel
for max in 4 5 50; do local pids=( ) i
local pids=( ) for ((i=0; i < NPROC; ++i)) do
for ((i=0; i < max; ++i)) do n2 iperf3 -p $(( 5200 + i )) -s -1 -B 192.168.241.2 &
n2 iperf3 -p $(( 5200 + i )) -s -1 -B 192.168.241.2 & pids+=( $! ); waitiperf $netns2 $! $(( 5200 + i ))
pids+=( $! ); waitiperf $netns2 $! $(( 5200 + i ))
done
for ((i=0; i < max; ++i)) do
n1 iperf3 -Z -t 3 -p $(( 5200 + i )) -c 192.168.241.2 &
done
wait "${pids[@]}"
done done
for ((i=0; i < NPROC; ++i)) do
n1 iperf3 -Z -t 3 -p $(( 5200 + i )) -c 192.168.241.2 &
done
wait "${pids[@]}"
} }
[[ $(ip1 link show dev wg0) =~ mtu\ ([0-9]+) ]] && orig_mtu="${BASH_REMATCH[1]}" [[ $(ip1 link show dev wg0) =~ mtu\ ([0-9]+) ]] && orig_mtu="${BASH_REMATCH[1]}"
...@@ -284,7 +284,19 @@ read _ _ tx_bytes_before < <(n0 wg show wg1 transfer) ...@@ -284,7 +284,19 @@ read _ _ tx_bytes_before < <(n0 wg show wg1 transfer)
! n0 ping -W 1 -c 10 -f 192.168.241.2 || false ! n0 ping -W 1 -c 10 -f 192.168.241.2 || false
sleep 1 sleep 1
read _ _ tx_bytes_after < <(n0 wg show wg1 transfer) read _ _ tx_bytes_after < <(n0 wg show wg1 transfer)
(( tx_bytes_after - tx_bytes_before < 70000 )) if ! (( tx_bytes_after - tx_bytes_before < 70000 )); then
errstart=$'\x1b[37m\x1b[41m\x1b[1m'
errend=$'\x1b[0m'
echo "${errstart} ${errend}"
echo "${errstart} E R R O R ${errend}"
echo "${errstart} ${errend}"
echo "${errstart} This architecture does not do the right thing ${errend}"
echo "${errstart} with cross-namespace routing loops. This test ${errend}"
echo "${errstart} has thus technically failed but, as this issue ${errend}"
echo "${errstart} is as yet unsolved, these tests will continue ${errend}"
echo "${errstart} onward. :( ${errend}"
echo "${errstart} ${errend}"
fi
ip0 link del wg1 ip0 link del wg1
ip1 link del wg0 ip1 link del wg0
......
...@@ -15,7 +15,7 @@ endif ...@@ -15,7 +15,7 @@ endif
ARCH := $(firstword $(subst -, ,$(CBUILD))) ARCH := $(firstword $(subst -, ,$(CBUILD)))
# Set these from the environment to override # Set these from the environment to override
KERNEL_VERSION ?= 5.4.99 KERNEL_VERSION ?= 5.4.200
KERNEL_VERSION := $(KERNEL_VERSION)$(if $(DEBUG_KERNEL),$(if $(findstring -debug,$(KERNEL_VERSION)),,-debug),) KERNEL_VERSION := $(KERNEL_VERSION)$(if $(DEBUG_KERNEL),$(if $(findstring -debug,$(KERNEL_VERSION)),,-debug),)
BUILD_PATH ?= $(PWD)/../../../qemu-build/$(ARCH) BUILD_PATH ?= $(PWD)/../../../qemu-build/$(ARCH)
DISTFILES_PATH ?= $(PWD)/distfiles DISTFILES_PATH ?= $(PWD)/distfiles
...@@ -86,8 +86,10 @@ CROSS_COMPILE_FLAG := --build=$(CBUILD) --host=$(CHOST) ...@@ -86,8 +86,10 @@ CROSS_COMPILE_FLAG := --build=$(CBUILD) --host=$(CHOST)
export CROSS_COMPILE=$(CBUILD)- export CROSS_COMPILE=$(CBUILD)-
STRIP := $(CBUILD)-strip STRIP := $(CBUILD)-strip
endif endif
QEMU_VPORT_RESULT :=
ifeq ($(ARCH),aarch64) ifeq ($(ARCH),aarch64)
QEMU_ARCH := aarch64 QEMU_ARCH := aarch64
QEMU_VPORT_RESULT := virtio-serial-device
KERNEL_ARCH := arm64 KERNEL_ARCH := arm64
KERNEL_BZIMAGE := $(KERNEL_PATH)/arch/arm64/boot/Image KERNEL_BZIMAGE := $(KERNEL_PATH)/arch/arm64/boot/Image
ifeq ($(HOST_ARCH),$(ARCH)) ifeq ($(HOST_ARCH),$(ARCH))
...@@ -98,6 +100,7 @@ CFLAGS += -march=armv8-a -mtune=cortex-a53 ...@@ -98,6 +100,7 @@ CFLAGS += -march=armv8-a -mtune=cortex-a53
endif endif
else ifeq ($(ARCH),aarch64_be) else ifeq ($(ARCH),aarch64_be)
QEMU_ARCH := aarch64 QEMU_ARCH := aarch64
QEMU_VPORT_RESULT := virtio-serial-device
KERNEL_ARCH := arm64 KERNEL_ARCH := arm64
KERNEL_BZIMAGE := $(KERNEL_PATH)/arch/arm64/boot/Image KERNEL_BZIMAGE := $(KERNEL_PATH)/arch/arm64/boot/Image
ifeq ($(HOST_ARCH),$(ARCH)) ifeq ($(HOST_ARCH),$(ARCH))
...@@ -108,6 +111,7 @@ CFLAGS += -march=armv8-a -mtune=cortex-a53 ...@@ -108,6 +111,7 @@ CFLAGS += -march=armv8-a -mtune=cortex-a53
endif endif
else ifeq ($(ARCH),arm) else ifeq ($(ARCH),arm)
QEMU_ARCH := arm QEMU_ARCH := arm
QEMU_VPORT_RESULT := virtio-serial-device
KERNEL_ARCH := arm KERNEL_ARCH := arm
KERNEL_BZIMAGE := $(KERNEL_PATH)/arch/arm/boot/zImage KERNEL_BZIMAGE := $(KERNEL_PATH)/arch/arm/boot/zImage
ifeq ($(HOST_ARCH),$(ARCH)) ifeq ($(HOST_ARCH),$(ARCH))
...@@ -118,6 +122,7 @@ CFLAGS += -march=armv7-a -mtune=cortex-a15 -mabi=aapcs-linux ...@@ -118,6 +122,7 @@ CFLAGS += -march=armv7-a -mtune=cortex-a15 -mabi=aapcs-linux
endif endif
else ifeq ($(ARCH),armeb) else ifeq ($(ARCH),armeb)
QEMU_ARCH := arm QEMU_ARCH := arm
QEMU_VPORT_RESULT := virtio-serial-device
KERNEL_ARCH := arm KERNEL_ARCH := arm
KERNEL_BZIMAGE := $(KERNEL_PATH)/arch/arm/boot/zImage KERNEL_BZIMAGE := $(KERNEL_PATH)/arch/arm/boot/zImage
ifeq ($(HOST_ARCH),$(ARCH)) ifeq ($(HOST_ARCH),$(ARCH))
...@@ -217,7 +222,7 @@ KERNEL_ARCH := m68k ...@@ -217,7 +222,7 @@ KERNEL_ARCH := m68k
KERNEL_BZIMAGE := $(KERNEL_PATH)/vmlinux KERNEL_BZIMAGE := $(KERNEL_PATH)/vmlinux
KERNEL_CMDLINE := $(shell sed -n 's/CONFIG_CMDLINE=\(.*\)/\1/p' arch/m68k.config) KERNEL_CMDLINE := $(shell sed -n 's/CONFIG_CMDLINE=\(.*\)/\1/p' arch/m68k.config)
ifeq ($(HOST_ARCH),$(ARCH)) ifeq ($(HOST_ARCH),$(ARCH))
QEMU_MACHINE := -cpu host,accel=kvm -machine q800 -smp 1 -append $(KERNEL_CMDLINE) QEMU_MACHINE := -cpu host,accel=kvm -machine q800 -append $(KERNEL_CMDLINE)
else else
QEMU_MACHINE := -machine q800 -smp 1 -append $(KERNEL_CMDLINE) QEMU_MACHINE := -machine q800 -smp 1 -append $(KERNEL_CMDLINE)
endif endif
...@@ -230,6 +235,7 @@ MUSL_CC := $(BUILD_PATH)/musl-gcc ...@@ -230,6 +235,7 @@ MUSL_CC := $(BUILD_PATH)/musl-gcc
export CC := $(MUSL_CC) export CC := $(MUSL_CC)
USERSPACE_DEPS := $(MUSL_CC) $(BUILD_PATH)/include/.installed $(BUILD_PATH)/include/linux/.installed USERSPACE_DEPS := $(MUSL_CC) $(BUILD_PATH)/include/.installed $(BUILD_PATH)/include/linux/.installed
comma := ,
build: $(KERNEL_BZIMAGE) build: $(KERNEL_BZIMAGE)
qemu: $(KERNEL_BZIMAGE) qemu: $(KERNEL_BZIMAGE)
rm -f $(BUILD_PATH)/result rm -f $(BUILD_PATH)/result
...@@ -240,7 +246,8 @@ qemu: $(KERNEL_BZIMAGE) ...@@ -240,7 +246,8 @@ qemu: $(KERNEL_BZIMAGE)
$(QEMU_MACHINE) \ $(QEMU_MACHINE) \
-m $$(grep -q CONFIG_DEBUG_KMEMLEAK=y $(KERNEL_PATH)/.config && echo 1G || echo 256M) \ -m $$(grep -q CONFIG_DEBUG_KMEMLEAK=y $(KERNEL_PATH)/.config && echo 1G || echo 256M) \
-serial stdio \ -serial stdio \
-serial file:$(BUILD_PATH)/result \ -chardev file,path=$(BUILD_PATH)/result,id=result \
$(if $(QEMU_VPORT_RESULT),-device $(QEMU_VPORT_RESULT) -device virtserialport$(comma)chardev=result,-serial chardev:result) \
-no-reboot \ -no-reboot \
-monitor none \ -monitor none \
-kernel $< -kernel $<
...@@ -277,12 +284,6 @@ $(KERNEL_PATH)/.installed: $(KERNEL_TAR) ...@@ -277,12 +284,6 @@ $(KERNEL_PATH)/.installed: $(KERNEL_TAR)
printf 'ifdef CONFIG_X86_64\nLDFLAGS += $$(call ld-option, -z max-page-size=0x200000)\nendif\n' >> $(KERNEL_PATH)/arch/x86/Makefile printf 'ifdef CONFIG_X86_64\nLDFLAGS += $$(call ld-option, -z max-page-size=0x200000)\nendif\n' >> $(KERNEL_PATH)/arch/x86/Makefile
sed -i 's/^Elf_Addr per_cpu_load_addr;$$/static \0/' $(KERNEL_PATH)/arch/x86/tools/relocs.c || true sed -i 's/^Elf_Addr per_cpu_load_addr;$$/static \0/' $(KERNEL_PATH)/arch/x86/tools/relocs.c || true
if grep -sqr UTS_UBUNTU_RELEASE_ABI $(KERNEL_PATH)/debian/rules.d; then echo 'KBUILD_CFLAGS += -DUTS_UBUNTU_RELEASE_ABI=0' >> $(KERNEL_PATH)/Makefile; fi if grep -sqr UTS_UBUNTU_RELEASE_ABI $(KERNEL_PATH)/debian/rules.d; then echo 'KBUILD_CFLAGS += -DUTS_UBUNTU_RELEASE_ABI=0' >> $(KERNEL_PATH)/Makefile; fi
if grep -sq 'RHEL_MAJOR = 8' $(KERNEL_PATH)/Makefile.rhelver; then \
sed -i '/#include <asm\//a #include <asm\/acpi.h>/' $(KERNEL_PATH)/arch/x86/kernel/{apic/apic.c,irqinit.c,kvm.c,mpparse.c} && \
sed -i '/#include <asm\//a #include <asm\/numa.h>/' $(KERNEL_PATH)/arch/x86/kernel/setup.c && \
sed -i '/irq_hv_callback_count/d' $(KERNEL_PATH)/arch/x86/kernel/kvm.c && \
sed -i '/do_vmm_communication/d' $(KERNEL_PATH)/arch/x86/entry/entry_64.S; \
fi
sed -i "/^if INET\$$/a source \"net/wireguard/Kconfig\"" $(KERNEL_PATH)/net/Kconfig sed -i "/^if INET\$$/a source \"net/wireguard/Kconfig\"" $(KERNEL_PATH)/net/Kconfig
sed -i "/^obj-\$$(CONFIG_NETFILTER).*+=/a obj-\$$(CONFIG_WIREGUARD) += wireguard/" $(KERNEL_PATH)/net/Makefile sed -i "/^obj-\$$(CONFIG_NETFILTER).*+=/a obj-\$$(CONFIG_WIREGUARD) += wireguard/" $(KERNEL_PATH)/net/Makefile
ln -sfT $(shell readlink -f ../..) $(KERNEL_PATH)/net/wireguard ln -sfT $(shell readlink -f ../..) $(KERNEL_PATH)/net/wireguard
......
CONFIG_SERIAL_AMBA_PL011=y CONFIG_SERIAL_AMBA_PL011=y
CONFIG_SERIAL_AMBA_PL011_CONSOLE=y CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_CMDLINE_BOOL=y CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="console=ttyAMA0 wg.success=ttyAMA1" CONFIG_CMDLINE="console=ttyAMA0 wg.success=vport0p1 panic_on_warn=1"
CONFIG_FRAME_WARN=1280 CONFIG_FRAME_WARN=1280
CONFIG_CPU_BIG_ENDIAN=y CONFIG_CPU_BIG_ENDIAN=y
CONFIG_SERIAL_AMBA_PL011=y CONFIG_SERIAL_AMBA_PL011=y
CONFIG_SERIAL_AMBA_PL011_CONSOLE=y CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_CMDLINE_BOOL=y CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="console=ttyAMA0 wg.success=ttyAMA1" CONFIG_CMDLINE="console=ttyAMA0 wg.success=vport0p1 panic_on_warn=1"
CONFIG_FRAME_WARN=1280 CONFIG_FRAME_WARN=1280
...@@ -4,6 +4,9 @@ CONFIG_ARCH_VIRT=y ...@@ -4,6 +4,9 @@ CONFIG_ARCH_VIRT=y
CONFIG_THUMB2_KERNEL=n CONFIG_THUMB2_KERNEL=n
CONFIG_SERIAL_AMBA_PL011=y CONFIG_SERIAL_AMBA_PL011=y
CONFIG_SERIAL_AMBA_PL011_CONSOLE=y CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_CMDLINE_BOOL=y CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="console=ttyAMA0 wg.success=ttyAMA1" CONFIG_CMDLINE="console=ttyAMA0 wg.success=vport0p1 panic_on_warn=1"
CONFIG_FRAME_WARN=1024 CONFIG_FRAME_WARN=1024