History log of /netbsd/sys/netinet6/ip6_flow.c (Results 1 – 25 of 42)
Revision Date Author Comments
# 3f7a1da5 19-Feb-2021 christos <christos@NetBSD.org>

- Make ALIGNED_POINTER use __alignof(t) instead of sizeof(t). This is more
correct because it works with non-primitive types and provides the ABI
alignment for the type the compiler will use.
- Remove all the *_HDR_ALIGNMENT macros and asserts
- Replace POINTER_ALIGNED_P with ACCESSIBLE_POINTER, which is identical to
ALIGNED_POINTER but reports the pointer as always aligned if the CPU
supports unaligned accesses.
[ as proposed in tech-kern ]
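
As a rough illustration of the semantics described above, a minimal sketch with placeholder MY_ names (the real definitions live in <sys/param.h> and may differ in detail; the __NO_STRICT_ALIGNMENT guard is an assumption about how the MD opt-in is spelled):

    #include <sys/types.h>

    /* Aligned for type t according to the ABI alignment the compiler uses. */
    #define MY_ALIGNED_POINTER(p, t) \
            ((((uintptr_t)(p)) & (__alignof(t) - 1)) == 0)

    /* "Accessible": on CPUs that tolerate unaligned accesses, always true. */
    #if defined(__NO_STRICT_ALIGNMENT)
    #define MY_ACCESSIBLE_POINTER(p, t)     1
    #else
    #define MY_ACCESSIBLE_POINTER(p, t)     MY_ALIGNED_POINTER(p, t)
    #endif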


# 1a5b90cc 14-Feb-2021 christos <christos@NetBSD.org>

- centralize header align and pullup into a single inline function
- use a single macro to align pointers and expose the alignment, instead
of hard-coding 3 in half of the macros.
- fix an issue in the ipv6 l2tp where it was aligning for ipv4 and pulling
for ipv6.
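
A hypothetical condensation of such a centralized helper (the name, signature, and policy below are assumptions, not the actual inline added by this commit; m_copyup() realigns the header by copying, m_pullup() only makes it contiguous):

    #include <sys/param.h>
    #include <sys/mbuf.h>

    static inline struct mbuf *
    my_hdr_align_pullup(struct mbuf *m, int len)
    {
            /* Misaligned header: copy it into a fresh, aligned mbuf. */
            if (!ALIGNED_POINTER(mtod(m, void *), uint32_t))
                    return m_copyup(m, len, 0);
            /* Aligned but split across mbufs: make it contiguous. */
            if (m->m_len < len)
                    return m_pullup(m, len);
            return m;               /* already aligned and contiguous */
    }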


# cb7533fb 06-Feb-2018 ozaki-r <ozaki-r@NetBSD.org>

Shorten the name of a workqueue instance to fit within the limit (15)


# f2cc6199 29-Jan-2018 maxv <maxv@NetBSD.org>

Style, and use __cacheline_aligned.

By the way, it would be nice to revisit the use of 'ip6flow_lock' in
ip6flow_fastforward(): it is taken right away because of 'ip6flow_inuse',
but then we perform several checks that do not require it.


# 34fc0a2a 08-Jan-2018 knakahara <knakahara@NetBSD.org>

Committed debugging logs by mistake, sorry. Revert crypto.c:r1.103 and ip6_flow.c:r1.37.


# d5e5b3e7 08-Jan-2018 knakahara <knakahara@NetBSD.org>

Fix PR kern/52910. Reported by Sevan Janiyan, who also implemented a patch; thanks.


# eba515c1 10-Dec-2017 maxv <maxv@NetBSD.org>

Fix use-after-free: if m_pullup fails the (freed) mbuf is pushed on the
ip6_pktq queue and re-processed later. Return 1 to say "processed and
freed".


# a5d1c1f4 17-Nov-2017 ozaki-r <ozaki-r@NetBSD.org>

Provide macros for softnet_lock and KERNEL_LOCK hiding NET_MPSAFE switch

It reduces C&P codes such as "#ifndef NET_MPSAFE KERNEL_LOCK(1, NULL); ..."
scattered all over the source code and makes it easy to identify remaining
KERNEL_LOCK and/or softnet_lock that are held even if NET_MPSAFE.

No functional change
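
Such a macro boils down to something like the sketch below (illustrative MY_ names, not the real ones added by this commit; the principle is simply that the lock calls compile away under NET_MPSAFE):

    #ifdef NET_MPSAFE
    #define MY_KERNEL_LOCK_UNLESS_NET_MPSAFE()      do { } while (0)
    #define MY_KERNEL_UNLOCK_UNLESS_NET_MPSAFE()    do { } while (0)
    #else
    #define MY_KERNEL_LOCK_UNLESS_NET_MPSAFE()      KERNEL_LOCK(1, NULL)
    #define MY_KERNEL_UNLOCK_UNLESS_NET_MPSAFE()    KERNEL_UNLOCK_ONE(NULL)
    #endif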


# a41b4f38 11-Jan-2017 ozaki-r <ozaki-r@NetBSD.org>

Get rid of unnecessary header inclusions


# 9de8a52b 08-Dec-2016 ozaki-r <ozaki-r@NetBSD.org>

Add rtcache_unref to release points of rtentry stemming from rtcache

In the MP-safe world, an rtentry stemming from a rtcache can be freed at any
point, so we need to protect rtentries somehow, say by reference counting or
passive references. Regardless of the method, we need to call some release
function on an rtentry after using it.

The change adds a new function, rtcache_unref, to release an rtentry. At this
point the function does nothing, because for now we don't add a reference
to an rtentry when we get one from a rtcache. We will add something useful
in a later commit.

This change is part of the work toward an MP-safe routing table. It is kept
separate to avoid one big change that would be difficult to debug by
bisecting.
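
The resulting calling convention looks roughly like this (a sketch; consult net/route.h for the authoritative prototypes, and note that ro/dst are assumed to be the caller's route cache and destination):

    struct rtentry *rt;

    rt = rtcache_lookup(ro, dst);           /* borrow an rtentry from the cache */
    if (rt != NULL) {
            /* ... use rt (next hop, ifp, flags, ...) ... */
            rtcache_unref(rt, ro);          /* release point (a no-op for now) */
    }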


# 35c13761 18-Oct-2016 ozaki-r <ozaki-r@NetBSD.org>

Don't hold global locks if NET_MPSAFE is enabled

If NET_MPSAFE is enabled, don't hold KERNEL_LOCK and softnet_lock in
part of the network stack such as IP forwarding paths. The aim of the
change is to make it easy to test the network stack without the locks
and reduce our local diffs.

By default (i.e., if NET_MPSAFE isn't enabled), the locks are held
as they used to be.

Reviewed by knakahara@


# 898be115 23-Aug-2016 knakahara <knakahara@NetBSD.org>

improve fast-forward performance when the number of flows exceeds ip6_maxflows.

This is a port of ip_flow.c:r1.76.

In the ip6flow case, the degraded performance is about 45% before the change
and about 55% after.


# 473203ee 02-Aug-2016 knakahara <knakahara@NetBSD.org>

ip6flow refactor like ipflow.

- move ip6flow sysctls into ip6_flow.c like ip_flow.c:r1.64
- build ip6_flow.c only if GATEWAY kernel option is enabled


# 75c04a41 26-Jul-2016 ozaki-r <ozaki-r@NetBSD.org>

Simplify by using atomic_swap instead of mutex

Suggested by kefren@
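
The idea can be sketched as an atomic test-and-set on the enqueue flag (placeholder names; only atomic_swap_uint from <sys/atomic.h> is assumed to be the primitive in play):

    #include <sys/types.h>
    #include <sys/atomic.h>

    static volatile unsigned int my_work_enqueued;

    /* Returns true exactly once per "idle -> enqueued" transition. */
    static bool
    my_try_enqueue(void)
    {
            return atomic_swap_uint(&my_work_enqueued, 1) == 0;
    }

    static void
    my_work_finished(void)
    {
            atomic_swap_uint(&my_work_enqueued, 0);
    }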


# 8901801d 11-Jul-2016 ozaki-r <ozaki-r@NetBSD.org>

Run timers in workqueue

Timers (such as nd6_timer) typically free/destroy some data in a callout
(softint). If we apply psz/psref to such data, we cannot do the free/destroy
there, because psz/psref synchronization cannot be used in softint context.
So run timer callbacks as workqueue works (normal LWP context) instead.

Enqueueing a work twice (i.e., calling workqueue_enqueue before a previously
enqueued work has been scheduled) isn't allowed. For nd6_timer and
rt_timer_timer this doesn't happen, because callout_reset is called only
from the workqueue work itself. OTOH, ip{,6}flow_slowtimo's callout can fire
before its work starts and completes, because the callout runs periodically
regardless of whether the work has finished. To avoid such a situation, add
a flag for each protocol: the flag is set to true when a work is enqueued
and to false after the work finishes; workqueue_enqueue is called only if
the flag is false.

Proposed on tech-net and tech-kern.
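
A condensed sketch of that scheme (placeholder names; the real code also protects the flag with the appropriate lock, which is omitted here for brevity):

    #include <sys/callout.h>
    #include <sys/workqueue.h>

    static struct workqueue *my_wq;
    static struct work       my_wk;
    static bool              my_work_pending;

    static void
    my_slowtimo(void *arg)                          /* callout, softint */
    {
            if (!my_work_pending) {
                    my_work_pending = true;
                    workqueue_enqueue(my_wq, &my_wk, NULL);
            }
            /* the callout itself stays periodic via callout_schedule() */
    }

    static void
    my_slowtimo_work(struct work *wk, void *arg)    /* workqueue, LWP context */
    {
            /* ... real timeout processing; psz/psref sync is allowed here ... */
            my_work_pending = false;
    }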


# c3596435 20-Jun-2016 knakahara <knakahara@NetBSD.org>

apply if_output_lock() to L3 callers which call ifp->if_output() of L2 (or L3 tunneling).


# 3c0c483c 13-Jun-2016 knakahara <knakahara@NetBSD.org>

eliminate unnecessary splnet


# ad26f29f 13-Jun-2016 knakahara <knakahara@NetBSD.org>

MP-ify fastforward to support GATEWAY kernel option.

I add "ipflow_lock" mutex in ip_flow.c and "ip6flow_lock" mutex in ip6_flow.c
to protect all data in each file. Of course, this is not MP-scalable. However,
it is sufficient as tentative workaround. We should make it scalable somehow
in the future.

ok by ozaki-r@n.o.
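
The coarse-grained lock amounts to something like this (a sketch; the function names, IPL, and initialization point are assumptions, not the file's actual code):

    #include <sys/param.h>
    #include <sys/mutex.h>

    static kmutex_t ip6flow_lock __cacheline_aligned;

    static void
    my_ip6flow_lock_init(void)              /* placeholder init hook */
    {
            mutex_init(&ip6flow_lock, MUTEX_DEFAULT, IPL_NONE);
    }

    static void
    my_ip6flow_do_something(void)
    {
            mutex_enter(&ip6flow_lock);
            /* ... look up / insert / reap ip6flow entries ... */
            mutex_exit(&ip6flow_lock);
    }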


# 260c0658 23-Mar-2015 roy <roy@NetBSD.org>

Add RTF_BROADCAST to mark routes used for the broadcast address when
they are created on the fly. This makes it clear what the route is for
and allows an optimisation in ip_output() by avoiding a call to
in_broadcast() because most of the time we do talk to a host.
It also avoids a needless allocation for the storage of llinfo_arp, and the
route thus vanishes from arp(8) output - it showed as incomplete anyway, so
this is a nice side effect.

Guard against this and routes marked with RTF_BLACKHOLE in
ip_fastforward().
While here, guard against routes marked with RTF_BLACKHOLE in
ip6_fastforward().
RTF_BROADCAST is IPv4 only, so don't bother checking that here.
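
The guards themselves are simple flag tests on the cached route (a sketch of the pattern, not the literal diff; rt is the route the fast path resolved):

    /* ip_fastforward(): reject both kinds of routes. */
    if (rt->rt_flags & (RTF_BLACKHOLE | RTF_BROADCAST))
            goto out;                       /* hand back to the slow path */

    /* ip6_fastforward(): RTF_BROADCAST is IPv4-only, so only check this. */
    if (rt->rt_flags & RTF_BLACKHOLE)
            goto out;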


# d5f67b2c 20-May-2014 bouyer <bouyer@NetBSD.org>

Sync with the ipv4 code and call ifp->if_output() with KERNEL_LOCK
held.
Problem reported and fix tested by njoly@ on current-users@
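
In outline (a sketch only; dst and rt stand for whatever the surrounding code already has in hand), the output call is bracketed the same way as on the IPv4 side:

    KERNEL_LOCK(1, NULL);
    error = (*ifp->if_output)(ifp, m, dst, rt);
    KERNEL_UNLOCK_ONE(NULL);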


# 99d3a214 01-Apr-2014 pooka <pooka@NetBSD.org>

Wrap ipflow_create() & ip6flow_create() in the kernel lock. Prevents the
interrupt side on another core from seeing the ipflow in an inconsistent
state while it is being modified.


# e9e43eb8 23-May-2013 msaitoh <msaitoh@NetBSD.org>

Clear mbuf's csum_flags in ip6flow_fastforward(). Fixes PR#47849.
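
The fix is essentially one assignment on the forwarded packet (sketch):

    /* Clear any stale checksum state before fast-forwarding the mbuf. */
    m->m_pkthdr.csum_flags = 0;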


# be17e898 11-Oct-2012 christos <christos@NetBSD.org>

PR/47058: Antti Kantee: If the ipv6 flow code modifies the mbuf, pass the
change up to the caller.
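
In practice this means the fast-forward hook takes the mbuf by reference, roughly as sketched below (the exact prototype is in the header):

    int ip6flow_fastforward(struct mbuf **mp);  /* caller passes &m so that a
                                                 * pullup/realignment inside is
                                                 * visible after the call */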


# f32dc415 19-Jan-2012 liamjfoy <liamjfoy@NetBSD.org>

Remove ip6f_start from ip6f struct


# 29f89491 23-Mar-2009 liamjfoy <liamjfoy@NetBSD.org>

Init ip6flow pool dynamically instead of using a linkset.
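
Dynamic initialization of the pool looks roughly like this (a sketch; the function name, wchan string, and IPL are illustrative assumptions):

    #include <sys/pool.h>

    static struct pool ip6flow_pool;

    static void
    my_ip6flow_poolinit(void)               /* e.g. called from ip6flow_init() */
    {
            pool_init(&ip6flow_pool, sizeof(struct ip6flow), 0, 0, 0,
                "ip6flowpl", NULL, IPL_NET);
    }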

