1ecb146f | 15-Mar-2024 | David Howells <dhowells@redhat.com>
netfs, afs: Use writeback retry to deal with alternate keys
Use a hook in the new writeback code's retry algorithm to rotate the keys once all the outstanding subreqs have failed rather than doing it separately on each subreq.
Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: Jeff Layton <jlayton@kernel.org> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org
2df86547 | 08-Mar-2024 | David Howells <dhowells@redhat.com>
netfs: Cut over to using new writeback code
Cut over to using the new writeback code. The old code is #ifdef'd out or otherwise removed from compilation to avoid conflicts and will be removed in a future patch.
Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: Jeff Layton <jlayton@kernel.org> cc: Eric Van Hensbergen <ericvh@kernel.org> cc: Latchesar Ionkov <lucho@ionkov.net> cc: Dominique Martinet <asmadeus@codewreck.org> cc: Christian Schoenebeck <linux_oss@crudebyte.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: v9fs@lists.linux.dev cc: linux-afs@lists.infradead.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org
ed22e1db | 18-Mar-2024 | David Howells <dhowells@redhat.com>
netfs, afs: Implement helpers for new write code
Implement the helpers for the new write code in afs. There's now an optional ->prepare_write() that allows the filesystem to set the parameters for the next write, such as maximum size and maximum segment count, and an ->issue_write() that is called to initiate an (asynchronous) write operation.
Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: Jeff Layton <jlayton@kernel.org> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org
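As a rough illustration of the ->prepare_write()/->issue_write() split described above, here is a minimal user-space sketch; the struct and function names are hypothetical stand-ins, not the kernel's netfs types:

/* User-space model of the ->prepare_write()/->issue_write() split; the
 * names and types here are illustrative only, not the kernel's netfs API. */
#include <stdio.h>
#include <stddef.h>

struct subreq {
	size_t len;		/* amount the caller wants to write */
	size_t max_len;		/* prepare_write() may lower this */
	unsigned int max_segs;	/* ...and cap the segment count */
};

struct write_ops {
	void (*prepare_write)(struct subreq *s);	/* optional */
	void (*issue_write)(struct subreq *s);		/* starts the (async) write */
};

static void fs_prepare_write(struct subreq *s)
{
	if (s->max_len > 256 * 1024)
		s->max_len = 256 * 1024;	/* e.g. cap each write RPC at 256KiB */
	s->max_segs = 8;
}

static void fs_issue_write(struct subreq *s)
{
	if (s->len > s->max_len)
		s->len = s->max_len;
	printf("issuing write of %zu bytes\n", s->len);	/* would start an RPC */
}

static const struct write_ops ops = { fs_prepare_write, fs_issue_write };

int main(void)
{
	struct subreq s = { .len = 1 << 20, .max_len = (size_t)-1, .max_segs = ~0U };

	if (ops.prepare_write)
		ops.prepare_write(&s);	/* set the parameters for the next write */
	ops.issue_write(&s);		/* initiate the write operation */
	return 0;
}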
d73065e6 | 27-Mar-2024 | David Howells <dhowells@redhat.com>
afs: Use alternative invalidation to using launder_folio
Use writepages-based flushing invalidation instead of invalidate_inode_pages2() and ->launder_folio(). This will allow ->launder_folio() to be removed eventually.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Jeff Layton <jlayton@kernel.org> cc: linux-afs@lists.infradead.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org
bfacaf71 | 19-Feb-2024 | Marc Dionne <marc.dionne@auristor.com>
afs: Fix ignored callbacks over ipv4
When searching for a matching peer, all addresses need to be searched, not just the ipv6 ones in the fs_addresses6 list.
Given that the lists no longer contain addresses, there is little reason to split things between separate lists, so unify them into a single list.
When processing an incoming callback from an ipv4 address, this would lead to a failure to set call->server, resulting in the callback being ignored and the client seeing stale contents.
Fixes: 72904d7b9bfb ("rxrpc, afs: Allow afs to pin rxrpc_peer objects") Reported-by: Markus Suvanto <markus.suvanto@gmail.com> Link: https://lists.infradead.org/pipermail/linux-afs/2024-February/008035.html Signed-off-by: Marc Dionne <marc.dionne@auristor.com> Signed-off-by: David Howells <dhowells@redhat.com> Link: https://lists.infradead.org/pipermail/linux-afs/2024-February/008037.html # v1 Link: https://lists.infradead.org/pipermail/linux-afs/2024-February/008066.html # v2 Link: https://lore.kernel.org/r/20240219143906.138346-2-dhowells@redhat.com Signed-off-by: Christian Brauner <brauner@kernel.org>
abcbd3bf | 17-Nov-2023 | David Howells <dhowells@redhat.com>
afs: trace: Log afs_make_call(), including server address
Add a tracepoint to log calls to afs_make_call(), including the destination server address.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
28f4c580 | 09-Nov-2023 | David Howells <dhowells@redhat.com>
afs: Fix offline and busy message emission
The current code assumes that offline and busy volume states apply to all instances of a volume, not just the one on the server that returned VOFFLINE or VBUSY, and will emit a notice to dmesg suggesting that the entire volume is unavailable.
Fix that by moving the flags recording this to the afs_server_entry struct that is used to represent a particular instance of a volume on a specific server. The notice is altered to include the server UUID also.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
495f2ae9 | 18-Oct-2023 | David Howells <dhowells@redhat.com>
afs: Fix fileserver rotation
Fix the fileserver rotation so that it doesn't use RTT as the basis for deciding which server and address to use as this doesn't necessarily give a good indication of the best path. Instead, use the configurable preference list in conjunction with whatever probes have succeeded at the time of looking.
To this end, make the following changes:
(1) Keep an array of "server states" to track what addresses we've tried on each server and move the waitqueue entries there that we'll need for probing.
(2) Each afs_server_state struct is made to pin the corresponding server's endpoint state rather than the afs_operation struct carrying a pin on the server we're currently looking at.
(3) Drop the server list preference; we now always rescan the server list.
(4) afs_wait_for_probes() now uses the server state list to guide it in what it waits for (and to provide the waitqueue entries) and returns an indication of whether we'd got a response, run out of responsive addresses or the endpoint state had been superseded and we need to restart the iteration.
(5) Call afs_get_address_preferences*() occasionally to refresh the preference values.
(6) When picking a server, scan the addresses of the servers for which we have as-yet untested communications, looking for the highest priority one and use that instead of trying all the addresses for a particular server in ascending-RTT order.
(7) When a Busy or Offline state is seen across all available servers, do a short sleep.
(8) If we detect that we accessed a future RO volume version whilst it is undergoing replication, reissue the op against the older version until at least half of the servers are replicated.
(9) Whilst RO replication is ongoing, increase the frequency of Volume Location server checks for that volume to every ten minutes instead of hourly.
Also add a tracepoint to track progress through the rotation algorithm.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
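As a rough, hedged illustration of the address selection in step (6) above, the user-space sketch below picks the highest-priority address that has not yet been tried across all servers; the types are invented stand-ins, not the kernel's server or address-list records:

/* Hypothetical model of "pick the highest-priority untried address across
 * all servers" from step (6); these are not the kernel's data structures. */
#include <stdio.h>

struct addr {
	const char *str;
	unsigned int prio;	/* from the address preference table */
	int tried;		/* already probed/used this rotation? */
};

struct server {
	const char *name;
	struct addr addrs[3];
	int nr_addrs;
};

static const struct addr *pick_address(const struct server *servers, int nr_servers)
{
	const struct addr *best = NULL;

	for (int i = 0; i < nr_servers; i++)
		for (int j = 0; j < servers[i].nr_addrs; j++) {
			const struct addr *a = &servers[i].addrs[j];

			if (!a->tried && (!best || a->prio > best->prio))
				best = a;
		}
	return best;
}

int main(void)
{
	const struct server servers[2] = {
		{ "fs0", { { "192.168.0.1", 3000, 1 }, { "10.0.0.1", 1000, 0 } }, 2 },
		{ "fs1", { { "192.168.0.2", 3000, 0 } }, 1 },
	};
	const struct addr *a = pick_address(servers, 2);

	if (a)
		printf("try %s (prio %u)\n", a->str, a->prio);
	return 0;
}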
453924de | 08-Nov-2023 | David Howells <dhowells@redhat.com>
afs: Overhaul invalidation handling to better support RO volumes
Overhaul the third party-induced invalidation handling, making use of the previously added volume-level event counters (cb_scrub and cb_ro_snapshot) that are now being parsed out of the VolSync record returned by the fileserver in many of its replies.
This allows better handling of RO (and Backup) volumes. Since these are snapshots of a RW volume that are updated atomically and simultaneously across all servers that host them, they only require a single callback promise for the entire volume. The current upstream code assumes that RO volumes operate in the same manner as RW volumes, and that each file has its own individual callback - which means that it does a status fetch for *every* file in a RO volume, whether or not the volume got "released" (volume callback breaks can occur for other reasons too, such as the volume server taking ownership of a volume from a fileserver).
To this end, make the following changes:
(1) Change the meaning of the volume's cb_v_break counter so that it is now a hint that we need to issue a status fetch to work out the state of a volume. cb_v_break is incremented by volume break callbacks and by server initialisation callbacks.
(2) Add a second counter, cb_v_check, to the afs_volume struct such that if this differs from cb_v_break, we need to do a check. When the check is complete, cb_v_check is advanced to what cb_v_break was at the start of the status fetch.
(3) Move the list of mmap'd vnodes to the volume and trigger removal of PTEs that map to files on a volume break rather than on a server break.
(4) When a server reinitialisation callback comes in, use the server-to-volume reverse mapping added in a preceding patch to iterate over all the volumes using that server and clear the volume callback promises for that server and the general volume promise as a whole to trigger reanalysis.
(5) Replace the AFS_VNODE_CB_PROMISED flag with an AFS_NO_CB_PROMISE (TIME64_MIN) value in the cb_expires_at field, reducing the number of checks we need to make.
(6) Change afs_check_validity() to quickly see if various event counters have been incremented or if the vnode or volume callback promise is due to expire/has expired without making any changes to the state. That is now left to afs_validate() as this may get more complicated in future as we may have to examine server records too.
(7) Overhaul afs_validate() so that it does a single status fetch if we need to check the state of either the vnode or the volume - and do so under appropriate locking. The function does the following steps:
(A) If the vnode/volume is no longer seen as valid, then we take the vnode validation lock and, if the volume promise has expired, the volume check lock also. The latter prevents redundant checks being made to find out if a new version of the volume got released.
(B) If a previous RPC call found that the volsync changed unexpectedly or that a RO volume was updated, then we unmap all PTEs pointing to the file to stop mmap being used for access.
(C) If the vnode is still seen to be of uncertain validity, then we perform an FS.FetchStatus RPC op to jointly update the volume status and the vnode status. This assessment is done as part of parsing the reply:
If the RO volume creation timestamp advances, cb_ro_snapshot is incremented; if either the creation or update timestamp changes in an unexpected way, the cb_scrub counter is incremented.
If the Data Version returned doesn't match the copy we have locally, then we ask for the pagecache to be zapped. This takes care of handling a RO update.
(D) If cb_scrub differs between volume and vnode, the vnode's pagecache is zapped and the vnode's cb_scrub is updated unless the file is marked as having been deleted.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
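A minimal user-space model of the cb_v_break/cb_v_check pairing from items (1) and (2) above, using invented types rather than the kernel's afs_volume:

/* Model of the cb_v_break/cb_v_check comparison from items (1) and (2);
 * hypothetical user-space types, not the kernel's afs_volume. */
#include <stdio.h>

struct volume {
	unsigned int cb_v_break;	/* bumped by volume and server-init callbacks */
	unsigned int cb_v_check;	/* value we last completed a check against */
};

static int volume_needs_check(const struct volume *v)
{
	return v->cb_v_check != v->cb_v_break;
}

static void volume_check_complete(struct volume *v, unsigned int break_at_start)
{
	/* Advance to what cb_v_break was when the status fetch began so that
	 * breaks arriving during the fetch still force another check. */
	v->cb_v_check = break_at_start;
}

int main(void)
{
	struct volume v = { 0, 0 };

	v.cb_v_break++;					/* a volume break arrives */
	printf("needs check: %d\n", volume_needs_check(&v));

	unsigned int snap = v.cb_v_break;		/* status fetch starts */
	v.cb_v_break++;					/* another break mid-fetch */
	volume_check_complete(&v, snap);
	printf("needs check: %d\n", volume_needs_check(&v));
	return 0;
}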
16069e13 | 05-Nov-2023 | David Howells <dhowells@redhat.com>
afs: Parse the VolSync record in the reply of a number of RPC ops
A number of fileserver RPC operations return a VolSync record as part of their reply that gives some information about the state of the volume being accessed, including:
(1) A volume Creation timestamp. For an RW volume, this is the time at which the volume was created; if it changes, the RW volume was presumably restored from a backup and all cached data should be scrubbed as Data Version numbers could regress on the files in the volume.
For an RO volume, this is the time it was last snapshotted from the RW volume. It is expected to advance each time this happens; if it regresses, cached data should be scrubbed.
(2) A volume Update timestamp (Auristor only). For an RW volume, this is updated any time any change is made to a volume or its contents. If it regresses, all cached data must be scrubbed.
For an RO volume, this is a copy of the RW volume's Update timestamp at the point of snapshotting. It can be used as a version number when checking to see if a callback on a RO volume was due to a snapshot. If it regresses, all cached data must be scrubbed.
but this is currently not made use of by the in-kernel afs filesystem.
Make the afs filesystem use this by:
(1) Add an update time field to the afs_volsync struct and use a value of TIME64_MIN in both that and the creation time to indicate that they are unset.
(2) Add creation and update time fields to the afs_volume struct and use this to track the two timestamps.
(3) Add a volsync_lock mutex to the afs_volume struct to control modification access for when we detect a change in these values.
(4) Add a 'pre-op volsync' struct to the afs_operation struct to record the state of the volume tracking before the op.
(5) Add a new counter, cb_scrub, to the afs_volume struct to count events that require all data to be scrubbed. A copy is placed in the afs_vnode struct (inode) and if they no longer match, a scrub takes place.
(6) When the result of an operation is being parsed, parse the VolSync data too, if it is provided. Note that the two timestamps are handled separately, since they don't work in quite the same way.
- If the afs_volume tracking is unset, just set it and do nothing else.
- If the result timestamps are the same as the ones in afs_volume, do nothing.
- If the timestamps regress, increment cb_scrub if not already done so.
- If the creation timestamp on a RW volume changes, increment cb_scrub if not already done so.
- If the creation timestamp on a RO volume advances, update the server list and see if the current server has been excluded; if so, reissue the op. Once over half of the replication sites have been updated, increment cb_ro_snapshot to indicate updates may be required and switch over to excluding unupdated replication sites.
- If the creation timestamp on a Backup volume advances, just increment cb_ro_snapshot to trigger updates.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
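The creation-timestamp rules above can be summarised as a small decision function. The sketch below is a simplified, hypothetical user-space model: it collapses the RO "over half of the replication sites" handling into a plain increment and ignores the Update timestamp entirely.

/* Simplified, hypothetical model of the creation-timestamp handling; the
 * replication-progress handling for RO volumes is omitted. */
#include <stdio.h>
#include <limits.h>

#define TS_UNSET LLONG_MIN	/* stands in for TIME64_MIN */

enum vol_type { VOL_RW, VOL_RO, VOL_BACKUP };

struct vol_track {
	enum vol_type type;
	long long creation;		/* tracked creation timestamp */
	unsigned int cb_scrub;		/* "scrub all cached data" events */
	unsigned int cb_ro_snapshot;	/* "RO snapshot advanced" events */
};

static void note_creation_time(struct vol_track *v, long long creation)
{
	if (v->creation != TS_UNSET) {
		if (creation < v->creation) {
			v->cb_scrub++;		/* regression: scrub cached data */
		} else if (creation > v->creation) {
			if (v->type == VOL_RW)
				v->cb_scrub++;		/* RW changed: restored from backup? */
			else
				v->cb_ro_snapshot++;	/* RO/Backup advanced: new snapshot */
		}
	}
	v->creation = creation;		/* unset or changed: record the new value */
}

int main(void)
{
	struct vol_track ro = { VOL_RO, TS_UNSET, 0, 0 };

	note_creation_time(&ro, 1000);	/* first value: recorded, no events */
	note_creation_time(&ro, 2000);	/* snapshot advanced */
	note_creation_time(&ro, 1500);	/* regression */
	printf("cb_scrub=%u cb_ro_snapshot=%u\n", ro.cb_scrub, ro.cb_ro_snapshot);
	return 0;
}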
d3acd81e | 14-Nov-2023 | David Howells <dhowells@redhat.com>
afs: Don't leave DONTUSE/NEWREPSITE servers out of server list
Don't leave servers that are marked VLSF_DONTUSE or VLSF_NEWREPSITE out of the server list for a volume; rather, mark the DONTUSE ones excluded, and mark the NEWREPSITE ones excluded if the number of updated servers is <50% of the usable servers, or mark the !NEWREPSITE ones excluded otherwise.
Mark the server list as a whole with a 3-state flag to indicate whether we think the RW volume is being replicated to the RO volume, and, if so, whether we should switch to using updated replication sites (VLSF_NEWREPSITE) or stick with the old for now.
This processing is pushed up from the VLDB RPC reply parser to the code that generates the server list from that information.
Doing this allows the old list to be kept with just the exclusion flags replaced and to keep the server records pinned and maintained.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
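A tiny user-space model of the 50% exclusion rule described above (the flag fields and types are invented for illustration; the exact threshold handling in the kernel may differ):

/* Model of the DONTUSE/NEWREPSITE exclusion rule; fields and types are
 * invented for illustration. */
#include <stdio.h>

struct entry {
	int dontuse;	/* VLSF_DONTUSE seen for this site */
	int newrep;	/* VLSF_NEWREPSITE seen for this site */
	int excluded;	/* computed below */
};

static void mark_exclusions(struct entry *e, int n)
{
	int usable = 0, updated = 0;

	for (int i = 0; i < n; i++) {
		if (e[i].dontuse)
			continue;
		usable++;
		if (e[i].newrep)
			updated++;
	}

	/* Switch to the updated (NEWREPSITE) sites only once at least half of
	 * the usable sites have been updated; otherwise keep using the old ones. */
	int use_new = usable && updated * 2 >= usable;

	for (int i = 0; i < n; i++)
		e[i].excluded = e[i].dontuse ||
				(use_new ? !e[i].newrep : e[i].newrep);
}

int main(void)
{
	struct entry e[4] = { { 0, 1 }, { 0, 0 }, { 0, 0 }, { 1, 1 } };

	mark_exclusions(e, 4);
	for (int i = 0; i < 4; i++)
		printf("site %d excluded=%d\n", i, e[i].excluded);
	return 0;
}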
32222f09 | 07-Nov-2023 | David Howells <dhowells@redhat.com>
afs: Apply server breaks to mmap'd files in the call processor
Apply server breaks to mmap'd files that are being used from that server from the call processor work function rather than punting it off to a workqueue, where the work item, afs_server_init_callback(), would then bump each individual inode off to its own work item, introducing a potentially lengthy delay. This reduces that delay at the cost of extending the amount of time we delay replying to the CB.InitCallBack3 notification RPC from the server.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
dfa0a449 | 07-Nov-2023 | David Howells <dhowells@redhat.com>
afs: Move the vnode/volume validity checking code into its own file
Move the code that does validity checking of vnodes and volumes with respect to third-party changes into its own file.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
445f9b69 | 08-Nov-2023 | David Howells <dhowells@redhat.com>
afs: Defer volume record destruction to a workqueue
Defer volume record destruction to a workqueue so that afs_put_volume() isn't going to run the destruction process in the callback workqueue whilst the server is holding up other clients while it waits for us to reply to a CB.CallBack notification RPC.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
ca0e79a4 | 02-Nov-2023 | David Howells <dhowells@redhat.com>
afs: Make it possible to find the volumes that are using a server
Make it possible to find the afs_volume structs that are using an afs_server struct to aid in breaking volume callbacks.
The way this is done is that each afs_volume already has an array of afs_server_entry records that point to the servers where that volume might be found. An afs_volume backpointer and a list node is added to each entry and each entry is then added to an RCU-traversable list on the afs_server to which it points.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
21c1f410 | 31-Oct-2023 | David Howells <dhowells@redhat.com>
afs: Combine the endpoint state bools into a bitmask
Combine the endpoint state bool-type members into a bitmask so that some of them can be waited upon more easily.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
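A compact user-space illustration of the bool-to-bitmask change; the flag names are invented, and the kernel additionally uses atomic bitops so that individual bits can be waited upon:

/* Before: separate bool members; after: one flags word, so related bits can
 * be tested together (and, in the kernel, waited upon with bit waits).
 * The flag names here are invented for illustration. */
#include <stdio.h>

struct probe_state_old {
	int responded;
	int local_failure;
	/* ... more bools ... */
};

#define PROBE_RESPONDED		(1UL << 0)
#define PROBE_LOCAL_FAILURE	(1UL << 1)
#define PROBE_SUPERSEDED	(1UL << 2)

struct probe_state_new {
	unsigned long flags;	/* bits defined above */
};

int main(void)
{
	struct probe_state_new st = { 0 };

	st.flags |= PROBE_RESPONDED;
	if (st.flags & (PROBE_RESPONDED | PROBE_LOCAL_FAILURE))
		printf("probe finished (flags=%#lx)\n", st.flags);
	return 0;
}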
f49b594d | 31-Oct-2023 | David Howells <dhowells@redhat.com>
afs: Keep a record of the current fileserver endpoint state
Keep a record of the current fileserver endpoint state, including the probe state, and replace it when a new probe is started rather than just squelching the old state and overwriting it. Clearance of the old state can cause a race if there's another thread also currently trying to communicate with that server.
It appears that this race might be the culprit for some occasions where kafs complains about invalid data in the RPC reply because the rotation algorithm fell all the way through without actually issuing an RPC call and the error return got filled in from the probe state (which has a zero error recorded). Whatever happens to be in the caller's reply buffer is then taken as the response.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
e6a7d7f7 | 30-Oct-2023 | David Howells <dhowells@redhat.com>
afs: Dispatch vlserver probes in priority order
When probing all the addresses for a volume location server, dispatch them in order of descending priority to try and get back highest priority one first.
Also add a tracepoint to show the transmission and completion of the probes.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
d14cf8ed | 30-Oct-2023 | David Howells <dhowells@redhat.com>
afs: Mark address lists with configured priorities
Add a field to each address in an address list (afs_addr_list struct) that records the current priority for that address according to the address preference table. We don't want to do this every time we use an address list, so the version number of the address preference table is recorded in the address list too and we only re-mark the list when we see the version change.
These numbers are then displayed through /proc/net/afs/servers.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
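The version check that avoids re-marking a list on every use can be modelled as below; the types are hypothetical, not the kernel's afs_addr_list:

/* Model of "only re-mark the list when the preference table version has
 * changed"; the types are invented, not the kernel's afs_addr_list. */
#include <stdio.h>

struct addr { unsigned int prio; };

struct addr_list {
	unsigned int prefs_version;	/* table version the priorities came from */
	struct addr addrs[2];
	int nr;
};

static unsigned int prefs_table_version = 1;	/* bumped when rules change */

static unsigned int lookup_pref(int i)
{
	return 1000 * (i + 1);		/* stand-in for the real table lookup */
}

static void maybe_remark(struct addr_list *al)
{
	if (al->prefs_version == prefs_table_version)
		return;			/* nothing changed: keep cached priorities */
	for (int i = 0; i < al->nr; i++)
		al->addrs[i].prio = lookup_pref(i);
	al->prefs_version = prefs_table_version;
}

int main(void)
{
	struct addr_list al = { 0, { { 0 }, { 0 } }, 2 };

	maybe_remark(&al);	/* version differs: priorities are (re)marked */
	maybe_remark(&al);	/* version matches: no work done */
	printf("prio[0]=%u prio[1]=%u\n", al.addrs[0].prio, al.addrs[1].prio);
	return 0;
}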
f94f70d3 | 27-Oct-2023 | David Howells <dhowells@redhat.com>
afs: Provide a way to configure address priorities
AFS servers may have multiple addresses, but the client can't easily judge between them as to which one is best. For instance, an address that has a larger RTT might actually have a better bandwidth because it goes through a switch rather than being directly connected - but we can't work this out dynamically unless we push through sufficient data that we can measure it.
To allow the administrator to configure this, add a list of preference weightings for server addresses by IPv4/IPv6 address or subnet and allow this to be viewed through a procfile and altered by writing text commands to that same file. Preference rules can be added/updated by:
echo "add <proto> <addr>[/<subnet>] <prior>" >/proc/fs/afs/addr_prefs echo "add udp 1.2.3.4 1000" >/proc/fs/afs/addr_prefs echo "add udp 192.168.0.0/16 3000" >/proc/fs/afs/addr_prefs echo "add udp 1001:2002:0:6::/64 4000" >/proc/fs/afs/addr_prefs
and removed by:
echo "del <proto> <addr>[/<subnet>]" >/proc/fs/afs/addr_prefs echo "del udp 1.2.3.4" >/proc/fs/afs/addr_prefs
where the priority is a number between 0 and 65535.
The list is split between IPv4 and IPv6 addresses and each sublist is kept in numerical order, with rules that would otherwise match but have different subnet masking being ordered with the most specific submatch first.
A subsequent patch will apply these rules.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
3560358a | 15-Feb-2022 | David Howells <dhowells@redhat.com>
afs: Use the netfs write helpers
Make afs use the netfs write helpers.
Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: Jeff Layton <jlayton@kernel.org> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org cc: linux-cachefs@redhat.com cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org
98f9fda2 | 20-Oct-2023 | David Howells <dhowells@redhat.com>
afs: Fold the afs_addr_cursor struct in
Fold the afs_addr_cursor struct into the afs_operation struct and the afs_vl_cursor struct and fold its operations into their callers also.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
e38f299e | 26-Oct-2023 | David Howells <dhowells@redhat.com>
afs: Use peer + service_id as call address
Use the rxrpc_peer plus the service ID as the call address instead of passing in a sockaddr_srx down to rxrpc. The peer record is obtained by using rxrpc_kernel_get_peer(). This avoids the need to repeatedly look up the peer and allows rxrpc to hold on to resources for it.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
905b8615 | 26-Oct-2023 | David Howells <dhowells@redhat.com>
afs: Rename some fields
Rename the ->index and ->untried fields of the afs_vl_cursor and afs_operation struct to ->server_index and ->untried_servers to avoid confusion with address iteration fields when those get folded in.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org
1e5d8493 | 19-Oct-2023 | David Howells <dhowells@redhat.com>
afs: Add a tracepoint for struct afs_addr_list
Add a tracepoint to track the lifetime of the afs_addr_list struct.
Signed-off-by: David Howells <dhowells@redhat.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org