Revision tags: v6.2.1, v6.2.0, v6.3.0, v6.0.1, v6.0.0, v6.0.0rc1, v6.1.0, v5.8.3, v5.8.2, v5.8.1, v5.8.0, v5.9.0, v5.8.0rc1, v5.6.3, v5.6.2, v5.6.1, v5.6.0, v5.6.0rc1, v5.7.0, v5.4.3, v5.4.2 |
|
#
fcf6efef |
| 02-Mar-2019 |
Sascha Wildner <saw@online.de> |
kernel: Remove numerous #include <sys/thread2.h>.
Most of them were added when we converted spl*() calls to crit_enter()/crit_exit(), almost 14 years ago. We can now remove a good chunk of them again for where crit_*() are no longer used.
I had to adjust some files that were relying on thread2.h or headers that it includes coming in via other headers that it was removed from.
|
#
afbe4b80 |
| 03-Jan-2019 |
Sascha Wildner <saw@online.de> |
i386 removal, part 69/x: Clean up sys/dev/netif.
According to comments from sephe.
|
Revision tags: v5.4.1, v5.4.0, v5.5.0, v5.4.0rc1, v5.2.2, v5.2.1, v5.2.0, v5.3.0, v5.2.0rc, v5.0.2, v5.0.1, v5.0.0, v5.0.0rc2, v5.1.0, v5.0.0rc1, v4.8.1, v4.8.0, v4.6.2, v4.9.0, v4.8.0rc, v4.6.1 |
|
#
afd2da4d |
| 03-Aug-2016 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Remove PG_ZERO and zeroidle (page-zeroing) entirely
* Remove the PG_ZERO flag and remove all page-zeroing optimizations, entirely. After doing a substantial amount of testing, these optimizations, which existed all the way back to CSRG BSD, no longer provide any benefit on a modern system.
- Pre-zeroing a page only takes 80ns on a modern cpu. vm_fault overhead in general is at least ~1 microsecond.
- Pre-zeroing a page leads to a cold-cache case on-use, forcing the fault source (e.g. a userland program) to actually get the data from main memory in its likely immediate use of the faulted page, reducing performance.
- Zeroing the page at fault-time is actually more optimal because it does not require any reading of dynamic ram and leaves the cache hot.
- Multiple synth and build tests show that active idle-time zeroing of pages actually reduces performance somewhat, and incidental allocations of already-zeroed pages (from page-table tear-downs) do not affect performance in any meaningful way.
* Remove bcopyi() and ovbcopy() -> collapse into bcopy(). These other versions existed because bcopy() used to be specially-optimized and could not be used in all situations. That is no longer true.
* Remove bcopy function pointer argument to m_devget(). It is no longer used. This function existed to help support ancient drivers which might have needed a special memory copy to read and write mapped data. It has long been supplanted by BUSDMA.
|
Revision tags: v4.6.0, v4.6.0rc2, v4.6.0rc, v4.7.0, v4.4.3, v4.4.2, v4.4.1, v4.4.0, v4.5.0, v4.4.0rc, v4.2.4, v4.3.1, v4.2.3, v4.2.1, v4.2.0, v4.0.6, v4.3.0, v4.2.0rc, v4.0.5, v4.0.4 |
|
#
b5523eac |
| 19-Feb-2015 |
Sascha Wildner <saw@online.de> |
kernel: Move us to using M_NOWAIT and M_WAITOK for mbuf functions.
The main reason is that having to use the MB_WAIT and MB_DONTWAIT flags was a recurring issue when porting drivers from FreeBSD: it tended to be forgotten, and the code would compile anyway with the wrong constants. And since MB_WAIT and MB_DONTWAIT ended up as ocflags for an objcache_get() or objcache_reclaimlist() call (which use M_WAITOK and M_NOWAIT), it was just one big conversion back and forth with some sanitization in between.
This commit allows M_* again for the mbuf functions and keeps the sanitizing as it was before: when M_WAITOK is among the passed flags, objcache functions will be called with M_WAITOK and when it is absent, they will be called with M_NOWAIT. All other flags are scrubbed by the MB_OCFLAG() macro which does the same as the former MBTOM().
Approved-by: dillon
|
Revision tags: v4.0.3, v4.0.2, v4.0.1, v4.0.0, v4.0.0rc3, v4.0.0rc2, v4.0.0rc, v4.1.0, v3.8.2 |
|
#
73029d08 |
| 29-Jun-2014 |
Franco Fichtner <franco@lastsummer.de> |
kernel: make pktinfo and cpuid native to ip_input()
In order to remove ether_input_pkt(), switch the prototype of if_input() and adjust all callers. While there, consolidate the style of the invoke.
Suggested and reviewed by: sephe
|
Revision tags: v3.8.1, v3.6.3, v3.8.0, v3.8.0rc2, v3.9.0, v3.8.0rc, v3.6.2, v3.6.1 |
|
#
dcb4b80d |
| 27-Nov-2013 |
Sascha Wildner <saw@online.de> |
kernel: Generate miidevs.h, pccarddevs.h and pcidevs.h on the fly.
It removes the need to regenerate those header files after editing the associated list of IDs (miidevs, pccarddevs or pcidevs). After this commit, editing the list alone is enough to add IDs.
We already did it like that for usb4bsd's usbdevs.h before. This commit adjusts things for the remaining ID lists.
|
Revision tags: v3.6.0, v3.7.1, v3.6.0rc, v3.7.0, v3.4.3 |
|
#
ac9843a1 |
| 04-Jun-2013 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
ifq: Remove the unused parameter 'mpolled' from ifq dequeue interface
The ifq_poll() -> ifq_dequeue() model is not MPSAFE, and mpolled has not been used, i.e. set to NULL, for years; time to let it go.
|
Revision tags: v3.4.2, v3.4.0, v3.4.1, v3.4.0rc, v3.5.0 |
|
#
4c77af2d |
| 11-Mar-2013 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
netif: Setup TX ring CPUID before hooking up interrupt vectors
|
#
d3c9c58e |
| 20-Feb-2013 |
Sascha Wildner <saw@online.de> |
kernel: Use DEVMETHOD_END in the drivers.
|
#
d40991ef |
| 13-Feb-2013 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
if: Per-cpu ifnet/ifaddr statistics, step 1/3
Wrap ifnet/ifaddr stats updating, setting and extraction into macros; ease upcoming changes.
|
#
f0a26983 |
| 11-Jan-2013 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
if: Multiple TX queue support step 1 of many; introduce ifaltq subqueue
Put the plain queue information, e.g. queue header and tail, serializer, packet staging scoreboard and ifnet.if_start schedule netmsg etc., into its own structure (subqueue). The ifaltq structure can contain multiple subqueues, based on the count that drivers specify.
A subqueue's enqueue, dequeue, purging and state updating are protected by the subqueue's serializer, so for hardware supporting multiple TX queues, contention on queuing operations can be greatly reduced.
The subqueue is passed to if_start to let the driver know which hardware TX queue to work on. Only the related driver's TX queue serializer will be held, so for hardware supporting multiple TX queues, contention on the driver's TX queue serializer can be greatly reduced.
A bunch of ifsq_-prefixed functions are added, which are used to perform various operations on subqueues. The commonly used ifq_-prefixed functions are still kept, mainly for the drivers which do not support multiple TX queues (well, these functions also ease the netif/ conversion in this step :).
All of the pseudo network devices under sys/net are converted to use the new subqueue operation. netproto/802_11 is converted too. igb(4) is converted to use the new subqueue operation, the rest of the network drivers are only changed for the if_start interface modification.
For ALTQs which have packet scheduler enabled, only the first subqueue is used (*).
(*) Whether we should utilize multiple TX queues if ALTQ's packet scheduler is enabled is quite questionable. Mainly because hardware's multiple TX queue packet dequeue mechanism could have negative impact on ALTQ's packet scheduler's decision.
|
#
dfd3b18b |
| 05-Jan-2013 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
if: Move if_cpuid into ifaltq; prepare multiple TX queues support
if_cpuid and if_npoll_cpuid are merged and moved into ifaltq as altq_cpuid, which indicates the owner CPU of the TX queue. Since we already have code in if_start_dispatch() to catch TX queue owner CPU changes, this merging is quite safe.
|
#
9ed293e0 |
| 28-Dec-2012 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
if: Move IFF_OACTIVE bit into ifaltq; prepare multiple TX queues support
ifaltq.altq_hw_oactive is now used to record that the NIC's TX queue is full. IFF_OACTIVE is removed from the kernel. The user space IFF_OACTIVE is kept for compatibility.
ifaltq.altq_hw_oactive should not be accessed directly. The following set of functions is provided and should be used:
- ifq_is_oactive(ifnet.if_snd) - whether the NIC's TX queue is full or not
- ifq_set_oactive(ifnet.if_snd) - the NIC's TX queue is full
- ifq_clr_oactive(ifnet.if_snd) - the NIC's TX queue is no longer full
|
Revision tags: v3.2.2 |
|
#
bf58d756 |
| 11-Nov-2012 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
nge: Switch from device_polling to ifpoll
|
Revision tags: v3.2.1, v3.2.0, v3.3.0, v3.0.3 |
|
#
ed20d0e3 |
| 21-Apr-2012 |
Sascha Wildner <saw@online.de> |
kernel: Remove newlines from the panic messages that have one.
panic() itself will add a newline.
|
Revision tags: v3.0.2, v3.0.1, v3.1.0, v3.0.0 |
|
#
28e81a28 |
| 29-Dec-2011 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
intr: Remove no longer correct ithread_cpuid; use rman_get_cpuid instead
|
#
86d7f5d3 |
| 26-Nov-2011 |
John Marino <draco@marino.st> |
Initial import of binutils 2.22 on the new vendor branch
Future versions of binutils will also reside on this branch rather than continuing to create new binutils branches for each new version.
|
Revision tags: v2.12.0, v2.13.0 |
|
#
aa2b9d05 |
| 24-Jun-2011 |
Sascha Wildner <saw@online.de> |
kernel: Use NULL for DRIVER_MODULE()'s evh & arg (which are pointers).
This is just cosmetics for easier reading.
|
Revision tags: v2.10.1, v2.11.0, v2.10.0, v2.9.1, v2.8.2, v2.8.1, v2.8.0, v2.9.0, v2.6.3, v2.7.3, v2.6.2, v2.7.2, v2.7.1, v2.6.1, v2.7.0, v2.6.0, v2.5.1, v2.4.1, v2.5.0, v2.4.0, v2.3.2, v2.3.1, v2.2.1, v2.2.0, v2.3.0, v2.1.1, v2.0.1 |
|
#
95893fe4 |
| 17-Aug-2008 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
Nuke INTR_NETSAFE
|
#
e6b5847c |
| 16-May-2008 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
Unify vlan_input() and vlan_input_tag():
- For device drivers that support hardware vlan tag extraction, the mbuf's M_VLANTAG flag is turned on and the vlan tag is saved in mbuf.m_pkthdr.ether_vlantag.
- At the very beginning of ether_input_chain(), if the packet's ether type is vlan and the hardware did not extract the vlan tag, vlan_ether_decap() is called to do software vlan tag extraction.
- Instead of BPF_MTAP(), ETHER_BPF_MTAP() is used in ether_input_chain() to deliver possible vlan tagging information to the bpf listeners.
- The ether header is restored before calling vlan_input(), so in most cases an extra ether header copy is avoided. vlan_input() does nothing more than find the vlan interface and loop the packet back to ether_input_chain() with the vlan interface as input interface.
Ideas-from: FreeBSD
|
#
9db4b353 |
| 14-May-2008 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
Reduce ifnet.if_serializer contention on the output path:
- Push ifnet.if_serializer holding down into each ifnet.if_output implementation.
- Add a serializer into ifaltq, which is used to protect the send queue instead of its parent's if_serializer. This change has the following implications:
  o On the output path, enqueueing packets and calling ifnet.if_start are decoupled.
  o In device drivers, the poll->dev_encap_ok->dequeue operation sequence is no longer safe; dequeue->dev_encap_fail->prepend should be used instead.
  This serializer will be held by using lwkt_serialize_adaptive_enter().
- Add an altq_started field into ifaltq, which is used to interlock the calling of its parent's if_start, to reduce ifnet.if_serializer contention. if_devstart(), a helper function which utilizes ifaltq.altq_started, is added to reduce code duplication in ethernet device drivers.
- Add if_cpuid into ifnet. This field indicates on which CPU the device driver's interrupt will happen.
- Add ifq_dispatch(). This function will try to hold ifnet.if_serializer in order to call ifnet.if_start. If this attempt fails, it will schedule ifnet.if_start to be called on the CPU located by ifnet.if_start_cpuid. if_start_nmsg, a per-CPU netmsg, is added to ifnet to facilitate ifnet.if_start scheduling. ifq_dispatch() is currently called by ether_output_frame().
- Use the ifq_classic_ functions if altq is not enabled.
- Fix various device driver bugs in their if_start implementations.
- Add ktr for ifq classic enqueue and dequeue.
- Add ktr for ifnet.if_start.
|
#
b637f170 |
| 10-Mar-2008 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
Add ETHER_BPF_MTAP() which will call vlan_ether_ptap() for packets whose vlan tagging is offloaded to NIC.
Obtained-from: FreeBSD
|
#
83790f85 |
| 10-Mar-2008 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
- Embed the ether vlan tag in the mbuf packet header. Add an mbuf flag to mark that this field is valid.
- Hide ifvlan after the above change; drivers supporting hardware vlan tagging only need to check ether_vlantag in the mbuf packet header.
- Convert all drivers that support hardware vlan tagging to use the vlan tag field in the mbuf packet header.
Obtained-from: FreeBSD
Change the vlan/parent serializer releasing/holding sequences into mbuf dispatching. There are several reasons to do so:
- Avoid excessive vlan interface serializer releasing/holding.
- Touching the parent interface's if_snd without holding the parent's serializer is unsafe.
- The vlan's parent may disappear or be changed after the vlan's serializer is released.
# This dispatching could be further optimized by packing all mbufs into one
# netmsg using m_nextpkt to:
# - Amortize netmsg sending cost
# - Reduce the time that the parent interface spends on serializer releasing/holding
|
#
e7b4468c |
| 05-Jan-2008 |
Sascha Wildner <swildner@dragonflybsd.org> |
For kmalloc(), MALLOC() and contigmalloc(), use M_ZERO instead of explicitly bzero()ing.
Reviewed-by: sephe
|
#
fbb35ef0 |
| 14-Aug-2007 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
Add a new csum flag to tell the IP defragmenter that csum_data does _not_ contain a valid IP fragment payload checksum. This flag is only intended to be used by the IP defragmenter.
Currently only bce(4), bge(4) and ti(4) provide valid IP fragment payload checksum. Turn on the new csum flag for the rest of the drivers, which support hardware TCP/UDP checksum offload but hard-wire csum_data to 0xffff, to avoid bypassing verification of defragmented payload's checksum.
Discussed-with: dillon@, hsu@
Approved-by: dillon@
|