Message ID: 20241029151922.459139-1-kbusch@meta.com
Series: write hints with nvme fdp, scsi streams
On Tue, Oct 29, 2024 at 08:19:22AM -0700, Keith Busch wrote: > From: Keith Busch <kbusch@kernel.org> > > The block limits exports the number of write hints, so set this limit if > the device reports support for the lifetime hints. Not only does this > inform the user of which hints are possible, it also allows scsi devices > supporting the feature to utilize the full range through raw block > device direct-io. > > Reviewed-by: Bart Van Assche <bvanassche@acm.org> > Reviewed-by: Hannes Reinecke <hare@suse.de> > Signed-off-by: Keith Busch <kbusch@kernel.org> Despite the reviews this is still incorrect. The permanent streams have a relative data temperature associated with them as pointed out last round and are not arbitrary write stream contexts despite (ab)using the SBC streams facilities. Bart, btw: I think the current sd implementation is buggy as well, as it assumes the permanent streams are ordered by their data temperature in the IO Advise hints mode page, but I can't find anything in the spec that requires a particular ordering.
On Tue, Oct 29, 2024 at 09:34:07AM -0600, Keith Busch wrote: > So then don't use it that way? I still don't know what change you're > expecting to happen with this feedback. What do you want the kernel to > do differently here? Same as before: don't expose them as write streams, because they aren't. A big mess in this series going back to the versions before your involvement is that they somehow want to tie up the temperature hints with the stream separation, which just ends up very messy.
On Tue, Oct 29, 2024 at 09:38:44AM -0600, Keith Busch wrote:
> They're not exposed as write streams. Patch 7/9 sets the feature if it is a placement id or not, and only nvme sets it, so scsi's attributes are not claiming to be a write stream.

So it shows up in sysfs, but:

 - queue_max_write_hints (which really should be queue_max_write_streams) still picks it up, and from there the statx interface

 - per-inode fcntl hints that encode a temperature still magically get dumped into the write streams if they are set.

In other words it's a really leaky, half-baked abstraction.

Let's brainstorm how it could be done better:

 - the max_write_streams values are only set by block devices that actually do support write streams, and not the fire and forget temperature hints. The way this is queried is by having a non-zero value there, no need for an extra flag.

 - but the struct file (or maybe inode) gets a supported flag, as stream separation needs to be supported by the file system

 - a separate fcntl is used to set per-inode streams (if you care about that; it seems like the bdev use case focuses on per-I/O). In that case we'd probably also need a separate inode field for them, or a somewhat complicated scheme to decide what is stored in the inode field if there is only one.

 - for block devices, bdev/fops.c maps the temperature hints into write streams if write streams are supported (a rough sketch follows below); any user that mixes and matches write streams and temperature hints gets what they deserve

 - this could also be a helper for file systems that want to do the same.

Just a quick writeup while I'm on the run, there's probably a hole or two that could be poked into it.
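A minimal sketch of the bdev-level mapping described in the bullet above, assuming a hypothetical bdev_max_write_streams() helper and the existing enum rw_hint lifetime values; none of this is existing kernel code:

#include <linux/blkdev.h>

/*
 * Sketch only: translate a temperature hint into a write stream index
 * in the block device write path when the device advertises streams.
 * bdev_max_write_streams() is a hypothetical helper; returning 0 means
 * "no stream assigned".
 */
static unsigned short bdev_hint_to_write_stream(struct block_device *bdev,
						enum rw_hint hint)
{
	unsigned short max_streams = bdev_max_write_streams(bdev);

	if (!max_streams || hint <= WRITE_LIFE_NONE)
		return 0;

	/* Clamp the lifetime classes onto however many streams exist. */
	return min_t(unsigned short, hint - WRITE_LIFE_NONE, max_streams);
}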
On 10/29/24 08:19, Keith Busch wrote: > From: Kanchan Joshi <joshi.k@samsung.com> > > Flexible Data Placement (FDP), as ratified in TP 4146a, allows the host > to control the placement of logical blocks so as to reduce the SSD WAF. > Userspace can send the write hint information using io_uring or fcntl. > > Fetch the placement-identifiers if the device supports FDP. The incoming > write-hint is mapped to a placement-identifier, which in turn is set in > the DSPEC field of the write command. > > Signed-off-by: Kanchan Joshi <joshi.k@samsung.com> > Signed-off-by: Hui Qi <hui81.qi@samsung.com> > Signed-off-by: Nitesh Shetty <nj.shetty@samsung.com> > Reviewed-by: Hannes Reinecke <hare@suse.de> > Signed-off-by: Keith Busch <kbusch@kernel.org> > --- > drivers/nvme/host/core.c | 84 ++++++++++++++++++++++++++++++++++++++++ > drivers/nvme/host/nvme.h | 5 +++ > include/linux/nvme.h | 19 +++++++++ > 3 files changed, 108 insertions(+) > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c > index 3de7555a7de74..bd7b89912ddb9 100644 > --- a/drivers/nvme/host/core.c > +++ b/drivers/nvme/host/core.c > @@ -44,6 +44,20 @@ struct nvme_ns_info { > bool is_removed; > }; > > +struct nvme_fdp_ruh_status_desc { > + u16 pid; > + u16 ruhid; > + u32 earutr; > + u64 ruamw; > + u8 rsvd16[16]; > +}; > + > +struct nvme_fdp_ruh_status { > + u8 rsvd0[14]; > + __le16 nruhsd; > + struct nvme_fdp_ruh_status_desc ruhsd[]; > +}; > + > unsigned int admin_timeout = 60; > module_param(admin_timeout, uint, 0644); > MODULE_PARM_DESC(admin_timeout, "timeout in seconds for admin commands"); > @@ -657,6 +671,7 @@ static void nvme_free_ns_head(struct kref *ref) > ida_free(&head->subsys->ns_ida, head->instance); > cleanup_srcu_struct(&head->srcu); > nvme_put_subsystem(head->subsys); > + kfree(head->plids); > kfree(head); > } > > @@ -974,6 +989,13 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns, > if (req->cmd_flags & REQ_RAHEAD) > dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH; > > + if (req->write_hint && ns->head->nr_plids) { > + u16 hint = max(req->write_hint, ns->head->nr_plids); > + > + dsmgmt |= ns->head->plids[hint - 1] << 16; > + control |= NVME_RW_DTYPE_DPLCMT; > + } > + > if (req->cmd_flags & REQ_ATOMIC && !nvme_valid_atomic_write(req)) > return BLK_STS_INVAL; > > @@ -2105,6 +2127,52 @@ static int nvme_update_ns_info_generic(struct nvme_ns *ns, > return ret; > } > > +static int nvme_fetch_fdp_plids(struct nvme_ns *ns, u32 nsid) > +{ > + struct nvme_fdp_ruh_status_desc *ruhsd; > + struct nvme_ns_head *head = ns->head; > + struct nvme_fdp_ruh_status *ruhs; > + struct nvme_command c = {}; > + int size, ret, i; > + > + if (head->plids) > + return 0; > + > + size = struct_size(ruhs, ruhsd, NVME_MAX_PLIDS); > + ruhs = kzalloc(size, GFP_KERNEL); > + if (!ruhs) > + return -ENOMEM; > + > + c.imr.opcode = nvme_cmd_io_mgmt_recv; > + c.imr.nsid = cpu_to_le32(nsid); > + c.imr.mo = 0x1; can we please add some comment where values are hardcoded ? > + c.imr.numd = cpu_to_le32((size >> 2) - 1); > + > + ret = nvme_submit_sync_cmd(ns->queue, &c, ruhs, size); > + if (ret) > + goto out; > + > + i = le16_to_cpu(ruhs->nruhsd); instead of i why can't we use local variable nr_plids ? 
> + if (!i) > + goto out; > + > + ns->head->nr_plids = min_t(u16, i, NVME_MAX_PLIDS); > + head->plids = kcalloc(ns->head->nr_plids, sizeof(head->plids), > + GFP_KERNEL); > + if (!head->plids) { > + ret = -ENOMEM; > + goto out; > + } > + > + for (i = 0; i < ns->head->nr_plids; i++) { > + ruhsd = &ruhs->ruhsd[i]; > + head->plids[i] = le16_to_cpu(ruhsd->pid); > + } > +out: > + kfree(ruhs); > + return ret; > +} > + > static int nvme_update_ns_info_block(struct nvme_ns *ns, > struct nvme_ns_info *info) > { > @@ -2141,6 +2209,19 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns, > goto out; > } > > + if (ns->ctrl->ctratt & NVME_CTRL_ATTR_FDPS) { > + ret = nvme_fetch_fdp_plids(ns, info->nsid); > + if (ret) > + dev_warn(ns->ctrl->device, > + "FDP failure status:0x%x\n", ret); > + if (ret < 0) > + goto out; > + } else { > + ns->head->nr_plids = 0; > + kfree(ns->head->plids); > + ns->head->plids = NULL; > + } > + > blk_mq_freeze_queue(ns->disk->queue); > ns->head->lba_shift = id->lbaf[lbaf].ds; > ns->head->nuse = le64_to_cpu(id->nuse); > @@ -2171,6 +2252,9 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns, > if (!nvme_init_integrity(ns->head, &lim, info)) > capacity = 0; > > + lim.max_write_hints = ns->head->nr_plids; > + if (lim.max_write_hints) > + lim.features |= BLK_FEAT_PLACEMENT_HINTS; > ret = queue_limits_commit_update(ns->disk->queue, &lim); > if (ret) { > blk_mq_unfreeze_queue(ns->disk->queue); > diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h > index 093cb423f536b..cec8e5d96377b 100644 > --- a/drivers/nvme/host/nvme.h > +++ b/drivers/nvme/host/nvme.h > @@ -454,6 +454,8 @@ struct nvme_ns_ids { > u8 csi; > }; > > +#define NVME_MAX_PLIDS (NVME_CTRL_PAGE_SIZE / sizeof(16)) this calculates how many plids can fit into the ctrl page size ? sorry but I didn't understand sizeof(16) here, since plids are u16 nvme_ns_head -> u16 *plidsshould this be sizeof(u16) ? -ck
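For reference, a sketch of the sizing the reviewer seems to be asking about; this only illustrates the comment and is not a tested fix:

/*
 * plids is an array of u16 placement identifiers, so the per-page
 * capacity should be based on sizeof(u16) rather than sizeof(16),
 * which evaluates to sizeof(int). The adjacent kcalloc() looks like a
 * similar pattern: sizeof(head->plids) is the size of the pointer,
 * where sizeof(*head->plids) would match the element type.
 */
#define NVME_MAX_PLIDS	(NVME_CTRL_PAGE_SIZE / sizeof(u16))

	head->plids = kcalloc(ns->head->nr_plids, sizeof(*head->plids),
			      GFP_KERNEL);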
On Tue, Oct 29, 2024 at 10:22:56AM -0600, Keith Busch wrote:
> On Tue, Oct 29, 2024 at 04:53:30PM +0100, Christoph Hellwig wrote:
> > On Tue, Oct 29, 2024 at 09:38:44AM -0600, Keith Busch wrote:
> > > They're not exposed as write streams. Patch 7/9 sets the feature if it is a placement id or not, and only nvme sets it, so scsi's attributes are not claiming to be a write stream.
> >
> > So it shows up in sysfs, but:
> >
> > - queue_max_write_hints (which really should be queue_max_write_streams) still picks it up, and from there the statx interface
> >
> > - per-inode fcntl hints that encode a temperature still magically get dumped into the write streams if they are set.
> >
> > In other words it's a really leaky, half-baked abstraction.
>
> Exactly why I asked last time: "who uses it and how do you want them to use it" :)

For the temperature hints the only public user I know of is rocksdb, and that only started working when Hans fixed a brown paperbag bug in the rocksdb code a while ago. Given that f2fs interprets the hints I suspect something in the Android world does as well, maybe Bart knows more.

For the separate write streams the usage I want for them is poor man's zones - e.g. write N LBAs sequentially into a separate write stream and then eventually discard them together. This will fit nicely into f2fs and the pending xfs work as well as quite a few userspace storage systems. For that the file system or application needs to query the number of available write streams (and in the bitmap world their numbers, if they are discontiguous) and the size you can fit into the "reclaim unit" in FDP terms. I've not been bothering you much with the latter as it is an easy retrofit once the I/O path bits land.

> > Let's brainstorm how it could be done better:
> >
> > - the max_write_streams values are only set by block devices that actually do support write streams, and not the fire and forget temperature hints. The way this is queried is by having a non-zero value there, no need for an extra flag.
>
> So we need a completely different attribute for SCSI's permanent write streams? You'd mentioned earlier you were okay with having SCSI be able to utilize per-io raw block write hints. Having multiple things to check for what are all just write classifiers seems unnecessarily complicated.

I don't think the multiple write streams interface applies to SCSI's write streams, as they enforce a relative temperature, and they don't have the concept of how much you can write into a "reclaim unit". OTOH there isn't much you need to query for them anyway, as the temperature hints have always been defined as pure hints with all the up and downsides of that.

> No need to create a new fcntl. The people already testing this are successfully using FDP with the existing fcntl hints. Their applications leverage FDP as a way to separate files based on expected lifetime. It is how they want to use it and it is working above expectations.

FYI, I think it's always fine and easy to map the temperature hints to write streams if that's all the driver offers. It loses a lot of the capabilities, but as long as it doesn't enforce a lower level interface that never exposes more that's fine.
On Tue, Oct 29, 2024 at 10:18:31AM -0700, Bart Van Assche wrote:
> On 10/29/24 8:26 AM, Christoph Hellwig wrote:
>> Bart, btw: I think the current sd implementation is buggy as well, as it assumes the permanent streams are ordered by their data temperature in the IO Advise hints mode page, but I can't find anything in the spec that requires a particular ordering.
>
> How about modifying sd_read_io_hints() such that permanent stream information is ignored if the order of the RELATIVE LIFETIME information reported by the GET STREAM STATUS command does not match the permanent stream order?

That seems odd as there is nothing implying that they should be ordered. The logical thing to do would be to add a little array mapping the linux temperature levels to the stream ids.
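Illustrative only: the kind of lookup table this reply suggests, filled while parsing the GET STREAM STATUS response; none of these names exist in sd today.

/* Sketch: map the Linux lifetime hints to the stream id whose reported
 * RELATIVE LIFETIME matches, instead of assuming the mode page lists
 * permanent streams in temperature order. */
struct sd_stream_map {
	/* indexed by enum rw_hint, WRITE_LIFE_NOT_SET..WRITE_LIFE_EXTREME */
	u16 stream_id[WRITE_LIFE_EXTREME + 1];
};

static u16 sd_hint_to_stream_id(const struct sd_stream_map *map,
				enum rw_hint hint)
{
	if (hint > WRITE_LIFE_EXTREME)
		hint = WRITE_LIFE_NOT_SET;
	return map->stream_id[hint];	/* 0 means "no stream assigned" */
}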
On Wed, Oct 30, 2024 at 05:55:26AM +0100, Christoph Hellwig wrote:
> On Tue, Oct 29, 2024 at 10:22:56AM -0600, Keith Busch wrote:
> > No need to create a new fcntl. The people already testing this are successfully using FDP with the existing fcntl hints. Their applications leverage FDP as a way to separate files based on expected lifetime. It is how they want to use it and it is working above expectations.
>
> FYI, I think it's always fine and easy to map the temperature hints to write streams if that's all the driver offers. It loses a lot of the capabilities, but as long as it doesn't enforce a lower level interface that never exposes more that's fine.

But that's just the v2 from this sequence:

https://lore.kernel.org/linux-nvme/20240528150233.55562-1-joshi.k@samsung.com/

If you're okay with it now, then let's just go with that and I'm happy to continue iterating on the rest separately.
On Wed, Oct 30, 2024 at 09:41:39AM -0600, Keith Busch wrote: > On Wed, Oct 30, 2024 at 05:55:26AM +0100, Christoph Hellwig wrote: > > On Tue, Oct 29, 2024 at 10:22:56AM -0600, Keith Busch wrote: > > > > > No need to create a new fcntl. The people already testing this are > > > successfully using FDP with the existing fcntl hints. Their applications > > > leverage FDP as way to separate files based on expected lifetime. It is > > > how they want to use it and it is working above expectations. > > > > FYI, I think it's always fine and easy to map the temperature hits to > > write streams if that's all the driver offers. It loses a lot of the > > capapilities, but as long as it doesn't enforce a lower level interface > > that never exposes more that's fine. > > But that's just the v2 from this sequence: > > https://lore.kernel.org/linux-nvme/20240528150233.55562-1-joshi.k@samsung.com/ > > If you're okay with it now, then let's just go with that and I'm happy > continue iterating on the rest separately. That's exactly what I do not want - it takes the temperature hints and force them into the write streams down in the driver with no way to make actually useful use of the stream separation.
On Wed, Oct 30, 2024 at 04:45:56PM +0100, Christoph Hellwig wrote: > On Wed, Oct 30, 2024 at 09:41:39AM -0600, Keith Busch wrote: > > On Wed, Oct 30, 2024 at 05:55:26AM +0100, Christoph Hellwig wrote: > > > On Tue, Oct 29, 2024 at 10:22:56AM -0600, Keith Busch wrote: > > > > > > > No need to create a new fcntl. The people already testing this are > > > > successfully using FDP with the existing fcntl hints. Their applications > > > > leverage FDP as way to separate files based on expected lifetime. It is > > > > how they want to use it and it is working above expectations. > > > > > > FYI, I think it's always fine and easy to map the temperature hits to > > > write streams if that's all the driver offers. It loses a lot of the > > > capapilities, but as long as it doesn't enforce a lower level interface > > > that never exposes more that's fine. > > > > But that's just the v2 from this sequence: > > > > https://lore.kernel.org/linux-nvme/20240528150233.55562-1-joshi.k@samsung.com/ > > > > If you're okay with it now, then let's just go with that and I'm happy > > continue iterating on the rest separately. > > That's exactly what I do not want - it takes the temperature hints > and force them into the write streams down in the driver What??? You said to map the temperature hints to a write stream. The driver offers that here. But you specifically don't want that? I'm so confused. > with no way to make actually useful use of the stream separation. Have you tried it? The people who actually do easily demonstrate it is in fact very useful.
On Wed, Oct 30, 2024 at 04:50:52PM +0100, Christoph Hellwig wrote:
> On Wed, Oct 30, 2024 at 09:48:39AM -0600, Keith Busch wrote:
> > What??? You said to map the temperature hints to a write stream. The driver offers that here. But you specifically don't want that? I'm so confused.
>
> In bdev/fops.c (or file systems if they want to do that), not down in the driver, forced down everyone's throat. Which was the whole point of the discussion that we're running in circles here.

That makes no sense. A change completely isolated to a driver isn't forcing anything on anyone. It's the upper layers that are forcing this down, whether the driver uses it or not: the hints are already getting to the driver, but the driver currently doesn't use them. Finding a way to use them is not some force to be demonized...

> > > with no way to make actually useful use of the stream separation.
> >
> > Have you tried it? The people who actually do easily demonstrate it is in fact very useful.
>
> While I've read the claim multiple times, I've not actually seen any numbers.

Here's something recent from rocksdb developers running the YCSB workloada benchmark. The filesystem used is XFS. It sets temperature hints for different SST levels, which already happens today. The last data point made some minor changes with level-to-hint mapping.

Without FDP:

  WAF: 2.72
  IOPS: 1465
  READ LAT: 2681us
  UPDATE LAT: 3115us

With FDP (rocksdb unmodified):

  WAF: 2.26
  IOPS: 1473
  READ LAT: 2415us
  UPDATE LAT: 2807us

With FDP (with some minor rocksdb changes):

  WAF: 1.67
  IOPS: 1547
  READ LAT: 1978us
  UPDATE LAT: 2267us
On 10/29/24 9:55 PM, Christoph Hellwig wrote: > For the temperature hints the only public user I known is rocksdb, and > that only started working when Hans fixed a brown paperbag bug in the > rocksdb code a while ago. Given that f2fs interprets the hints I suspect > something in the Android world does as well, maybe Bart knows more. UFS devices typically have less internal memory (SRAM) than the size of a single zone. Hence, it helps UFS devices if it can be decided at the time a write command is received where to send the data (SRAM, SLC NAND or TLC NAND). This is why UFS vendors asked to provide data lifetime information to zoned logical units. More information about UFS device internals is available in this paper: Hwang, Joo-Young, Seokhwan Kim, Daejun Park, Yong-Gil Song, Junyoung Han, Seunghyun Choi, Sangyeun Cho, and Youjip Won. "{ZMS}: Zone Abstraction for Mobile Flash Storage." In 2024 USENIX Annual Technical Conference (USENIX ATC 24), pp. 173-189. 2024 (https://www.usenix.org/system/files/atc24-hwang.pdf). Bart.
On Wed, Oct 30, 2024 at 09:59:24AM -0700, Bart Van Assche wrote:
> On 10/29/24 9:55 PM, Christoph Hellwig wrote:
>> For the temperature hints the only public user I know of is rocksdb, and that only started working when Hans fixed a brown paperbag bug in the rocksdb code a while ago. Given that f2fs interprets the hints I suspect something in the Android world does as well, maybe Bart knows more.
>
> UFS devices typically have less internal memory (SRAM) than the size of a single zone.

That wasn't quite the question. Do you know what application in android sets the fcntl temperature hints?
On Wed, Oct 30, 2024 at 05:57:08PM +0100, Christoph Hellwig wrote:
> On Wed, Oct 30, 2024 at 04:50:52PM +0100, Christoph Hellwig wrote:
> > It sets temperature hints for different SST levels, which already happens today. The last data point made some minor changes with level-to-hint mapping.
>
> Do you have a pointer to the changes?

The change moves levels 2 and 3 to "MEDIUM" (along with 0 and 1 already there), 4 to "LONG", and >= 5 remain "EXTREME". WAL continues to be "SHORT", as before.

> > Without FDP:
> >
> > WAF: 2.72
> > IOPS: 1465
> > READ LAT: 2681us
> > UPDATE LAT: 3115us
> >
> > With FDP (rocksdb unmodified):
> >
> > WAF: 2.26
> > IOPS: 1473
> > READ LAT: 2415us
> > UPDATE LAT: 2807us
> >
> > With FDP (with some minor rocksdb changes):
> >
> > WAF: 1.67
> > IOPS: 1547
> > READ LAT: 1978us
> > UPDATE LAT: 2267us
>
> Compared to the numbers Hans presented at Plumbers for the Zoned XFS code, which should work just fine with FDP IFF we exposed real write streams, which roughly double read and write IOPS and reduce the WAF to almost 1, this doesn't look too spectacular to be honest, but it sure is something.
>
> I just wish we could get the real infrastructure instead of some band aid, which makes it really hard to expose the real thing because now it's been taken up and directly wired to a UAPI.

This doesn't have to be the end of placement streams development. I fundamentally disagree that this locks anyone in to anything.
On 10/30/24 10:14 AM, Christoph Hellwig wrote: > On Wed, Oct 30, 2024 at 09:59:24AM -0700, Bart Van Assche wrote: >> >> On 10/29/24 9:55 PM, Christoph Hellwig wrote: >>> For the temperature hints the only public user I known is rocksdb, and >>> that only started working when Hans fixed a brown paperbag bug in the >>> rocksdb code a while ago. Given that f2fs interprets the hints I suspect >>> something in the Android world does as well, maybe Bart knows more. >> >> UFS devices typically have less internal memory (SRAM) than the size of a >> single zone. > > That wasn't quite the question. Do you know what application in android > set the fcntl temperature hints? I do not know whether there are any Android apps that use the F_SET_(FILE_|)RW_HINT fcntls. The only use case in Android platform code I know of is this one: Daejun Park, "f2fs-tools: add write hint support", f2fs-dev mailing list, September 2024 (https://lore.kernel.org/all/20240904011217epcms2p5a1b15db8e0ae50884429da7be4af4de4@epcms2p5/T/). As you probably know f2fs-tools is a software package that includes fsck.f2fs. Jaegeuk, please correct me if necessary. Bart.
On Wed, Oct 30, 2024 at 05:57:08PM +0100, Christoph Hellwig wrote:
> On Wed, Oct 30, 2024 at 10:42:59AM -0600, Keith Busch wrote:
> > With FDP (with some minor rocksdb changes):
> >
> > WAF: 1.67
> > IOPS: 1547
> > READ LAT: 1978us
> > UPDATE LAT: 2267us
>
> Compared to the numbers Hans presented at Plumbers for the Zoned XFS code, which should work just fine with FDP IFF we exposed real write streams, which roughly double read and write IOPS and reduce the WAF to almost 1, this doesn't look too spectacular to be honest, but it sure is something.

Hold up... I absolutely appreciate the work Hans is doing and has done. But are you talking about this talk?

https://lpc.events/event/18/contributions/1822/attachments/1464/3105/Zoned%20XFS%20LPC%20Zoned%20MC%202024%20V1.pdf

That is very much apples-to-oranges. The B+ tree isn't on the same device being evaluated for WAF, where this has all that mixed in. I think the results are pretty good, all things considered.

> I just wish we could get the real infrastructure instead of some band aid, which makes it really hard to expose the real thing because now it's been taken up and directly wired to a UAPI.

I don't know what to make of this. I think we're talking past each other.
On Wed, Oct 30, 2024 at 11:33 PM Keith Busch <kbusch@kernel.org> wrote: > > On Wed, Oct 30, 2024 at 05:57:08PM +0100, Christoph Hellwig wrote: > > On Wed, Oct 30, 2024 at 10:42:59AM -0600, Keith Busch wrote: > > > With FDP (with some minor rocksdb changes): > > > > > > WAF: 1.67 > > > IOPS: 1547 > > > READ LAT: 1978us > > > UPDATE LAT: 2267us > > > > Compared to the Numbers Hans presented at Plumbers for the Zoned XFS code, > > which should work just fine with FDP IFF we exposed real write streams, > > which roughly double read nad wirte IOPS and reduce the WAF to almost > > 1 this doesn't look too spectacular to be honest, but it sure it something. > > Hold up... I absolutely appreciate the work Hans is and has done. But > are you talking about this talk? > > https://lpc.events/event/18/contributions/1822/attachments/1464/3105/Zoned%20XFS%20LPC%20Zoned%20MC%202024%20V1.pdf > > That is very much apples-to-oranges. The B+ isn't on the same device > being evaluated for WAF, where this has all that mixed in. I think the > results are pretty good, all things considered. No. The meta data IO is just 0.1% of all writes, so that we use a separate device for that in the benchmark really does not matter. Since we can achieve a WAF of ~1 for RocksDB on flash, why should we be content with another 67% of unwanted device side writes on top of that? It's of course impossible to compare your benchmark figures and mine directly since we are using different devices, but hey, we definitely have an opportunity here to make significant gains for FDP if we just provide the right kernel interfaces. Why shouldn't we expose the hardware in a way that enables the users to make the most out of it?
On Thu, Oct 31, 2024 at 09:19:51AM +0100, Hans Holmberg wrote:
> No. The meta data IO is just 0.1% of all writes, so that we use a separate device for that in the benchmark really does not matter.
>
> Since we can achieve a WAF of ~1 for RocksDB on flash, why should we be content with another 67% of unwanted device side writes on top of that?
>
> It's of course impossible to compare your benchmark figures and mine directly since we are using different devices, but hey, we definitely have an opportunity here to make significant gains for FDP if we just provide the right kernel interfaces.

I'll write code to do a 1:1 single-device comparison over the weekend and Hans will test it once he is back.
On Thu, Oct 31, 2024 at 09:19:51AM +0100, Hans Holmberg wrote: > On Wed, Oct 30, 2024 at 11:33 PM Keith Busch <kbusch@kernel.org> wrote: > > That is very much apples-to-oranges. The B+ isn't on the same device > > being evaluated for WAF, where this has all that mixed in. I think the > > results are pretty good, all things considered. > > No. The meta data IO is just 0.1% of all writes, so that we use a > separate device for that in the benchmark really does not matter. It's very little spatially, but they overwrite differently than other data, creating many small holes in large erase blocks. > Since we can achieve a WAF of ~1 for RocksDB on flash, why should we > be content with another 67% of unwanted device side writes on top of > that? > > It's of course impossible to compare your benchmark figures and mine > directly since we are using different devices, but hey, we definitely > have an opportunity here to make significant gains for FDP if we just > provide the right kernel interfaces. > > Why shouldn't we expose the hardware in a way that enables the users > to make the most out of it? Because the people using this want this interface. Stalling for the last 6 months hasn't produced anything better, appealing to non-existent vaporware to block something ready-to-go that satisfies a need right now is just wasting everyone's time. Again, I absolutely disagree that this locks anyone in to anything. That's an overly dramatic excuse.
On 10/30, Bart Van Assche wrote: > On 10/30/24 10:14 AM, Christoph Hellwig wrote: > > On Wed, Oct 30, 2024 at 09:59:24AM -0700, Bart Van Assche wrote: > > > > > > On 10/29/24 9:55 PM, Christoph Hellwig wrote: > > > > For the temperature hints the only public user I known is rocksdb, and > > > > that only started working when Hans fixed a brown paperbag bug in the > > > > rocksdb code a while ago. Given that f2fs interprets the hints I suspect > > > > something in the Android world does as well, maybe Bart knows more. > > > > > > UFS devices typically have less internal memory (SRAM) than the size of a > > > single zone. > > > > That wasn't quite the question. Do you know what application in android > > set the fcntl temperature hints? > > I do not know whether there are any Android apps that use the > F_SET_(FILE_|)RW_HINT fcntls. > > The only use case in Android platform code I know of is this one: Daejun > Park, "f2fs-tools: add write hint support", f2fs-dev mailing list, > September 2024 (https://lore.kernel.org/all/20240904011217epcms2p5a1b15db8e0ae50884429da7be4af4de4@epcms2p5/T/). > As you probably know f2fs-tools is a software package that includes > fsck.f2fs. > > Jaegeuk, please correct me if necessary. Yes, f2fs-tools in Android calls fcntl(fd, F_SET_RW_HINT, &hint); > > Bart. > >
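For reference, the existing interface being discussed here boils down to a single fcntl; a minimal user-space sketch (the fallback defines are only needed where libc headers lack them):

/* Minimal example of the existing temperature-hint fcntl that
 * f2fs-tools uses; _GNU_SOURCE exposes F_SET_RW_HINT and the
 * RWH_WRITE_LIFE_* values in newer glibc headers. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>

#ifndef F_SET_RW_HINT			/* fallback for older headers */
#define F_SET_RW_HINT		1036	/* F_LINUX_SPECIFIC_BASE + 12 */
#define RWH_WRITE_LIFE_SHORT	2
#endif

int main(int argc, char **argv)
{
	uint64_t hint = RWH_WRITE_LIFE_SHORT;
	int fd;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_WRONLY);
	if (fd < 0 || fcntl(fd, F_SET_RW_HINT, &hint) < 0) {
		perror("F_SET_RW_HINT");
		return 1;
	}
	printf("set SHORT lifetime hint on %s\n", argv[1]);
	return 0;
}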
On Thu, Oct 31, 2024 at 3:06 PM Keith Busch <kbusch@kernel.org> wrote: > > On Thu, Oct 31, 2024 at 09:19:51AM +0100, Hans Holmberg wrote: > > On Wed, Oct 30, 2024 at 11:33 PM Keith Busch <kbusch@kernel.org> wrote: > > > That is very much apples-to-oranges. The B+ isn't on the same device > > > being evaluated for WAF, where this has all that mixed in. I think the > > > results are pretty good, all things considered. > > > > No. The meta data IO is just 0.1% of all writes, so that we use a > > separate device for that in the benchmark really does not matter. > > It's very little spatially, but they overwrite differently than other > data, creating many small holes in large erase blocks. I don't really get how this could influence anything significantly.(If at all). > > > Since we can achieve a WAF of ~1 for RocksDB on flash, why should we > > be content with another 67% of unwanted device side writes on top of > > that? > > > > It's of course impossible to compare your benchmark figures and mine > > directly since we are using different devices, but hey, we definitely > > have an opportunity here to make significant gains for FDP if we just > > provide the right kernel interfaces. > > > > Why shouldn't we expose the hardware in a way that enables the users > > to make the most out of it? > > Because the people using this want this interface. Stalling for the last > 6 months hasn't produced anything better, appealing to non-existent > vaporware to block something ready-to-go that satisfies a need right > now is just wasting everyone's time. > > Again, I absolutely disagree that this locks anyone in to anything. > That's an overly dramatic excuse. Locking in or not, to constructively move things forward (if we are now stuck on how to wire up fs support) I believe it would be worthwhile to prototype active fdp data placement in xfs and evaluate it. Happy to help out with that. Fdp and zns are different beasts, so I don't expect the results in the presentation to be directly translatable but we can see what we can do. Is RocksDB the only file system user at the moment? Is the benchmark setup/config something that could be shared?
On 01.11.2024 08:16, Hans Holmberg wrote:
> Locking in or not, to constructively move things forward (if we are now stuck on how to wire up fs support) I believe it would be worthwhile to prototype active fdp data placement in xfs and evaluate it. Happy to help out with that.

I appreciate your willingness to move things forward. I really mean it. I have talked several times in this thread about collaborating on the API that you have in mind. I would _very_ much like to have a common abstraction for ZNS, ZUFS, FDP, and whatever people build on other protocols. But without tangible patches showing this, we simply cannot block this anymore.

> Fdp and zns are different beasts, so I don't expect the results in the presentation to be directly translatable but we can see what we can do.
>
> Is RocksDB the only file system user at the moment?
> Is the benchmark setup/config something that could be shared?

It is a YCSB workload. You have the scripts here:

https://github.com/brianfrankcooper/YCSB/blob/master/workloads/workloada

If you have another standard workload you want us to run, let me know and we will post the results on the list too.

We will post the changes to the L3 placement in RocksDB. I think we can make them available somewhere for you to test before that. Let me come back to you on this.
On Fri, Nov 01, 2024 at 08:16:30AM +0100, Hans Holmberg wrote:
> On Thu, Oct 31, 2024 at 3:06 PM Keith Busch <kbusch@kernel.org> wrote:
> > On Thu, Oct 31, 2024 at 09:19:51AM +0100, Hans Holmberg wrote:
> > > No. The meta data IO is just 0.1% of all writes, so that we use a separate device for that in the benchmark really does not matter.
> >
> > It's very little spatially, but they overwrite differently than other data, creating many small holes in large erase blocks.
>
> I don't really get how this could influence anything significantly. (If at all.)

Fill your filesystem to near capacity, then continue using it for a few months. While the filesystem will report some available space, there may not be many good blocks available to erase. Maybe.

> > Again, I absolutely disagree that this locks anyone in to anything. That's an overly dramatic excuse.
>
> Locking in or not, to constructively move things forward (if we are now stuck on how to wire up fs support)

But we're not stuck on how to wire up the fs. That part was settled and in the kernel 10 years ago. We're stuck on wiring it down to the driver, which should have been the easiest part.

> I believe it would be worthwhile to prototype active fdp data placement in xfs and evaluate it. Happy to help out with that.

When are we allowed to conclude evaluation? We have benefits my customers want on well tested kernels, and wish to proceed now. I'm not discouraging anyone from continuing further prototypes, innovations, and improvements. I'd like to spend more time doing that too, and merging something incrementally better doesn't prevent anyone from doing that.

> Fdp and zns are different beasts, so I don't expect the results in the presentation to be directly translatable but we can see what we can do.
>
> Is RocksDB the only file system user at the moment?

Rocks is the only open source one I know about. There are proprietary users, too.
I've pushed my branch that tries to make this work with the XFS data separation here:

http://git.infradead.org/?p=users/hch/xfs.git;a=shortlog;h=refs/heads/xfs-zoned-streams

This is basically my current WIP xfs zoned (aka always write out of place) work optimistically destined for 6.14 + the patch set in this thread + a little fix to make it work for nvme-multipath plus the tiny patch to wire it up.

The good news is that the API from Keith mostly works. I don't really know how to cope with the streams-per-partition bitmap, and I suspect this will need to be dealt with a bit better. One option might be to always have a bitmap, which would also support discontiguous write stream numbers as actually supported by the underlying NVMe implementation; another option would be to always map to consecutive numbers.

The bad news is that for file systems or applications to make full use of the API we also really need an API to expose how much space is left in a write stream, as otherwise they can easily get out of sync on a power fail. I've left that code in as a TODO; it should not affect basic testing.

We get the same kind of performance numbers as the ZNS support on comparable hardware platforms, which is expected. Testing on actual state-of-the-art non-prototype hardware will take more time, as the capacities are big enough that getting serious numbers will take a lot longer.
On Fri, Nov 1, 2024 at 3:49 PM Keith Busch <kbusch@kernel.org> wrote: > > On Fri, Nov 01, 2024 at 08:16:30AM +0100, Hans Holmberg wrote: > > On Thu, Oct 31, 2024 at 3:06 PM Keith Busch <kbusch@kernel.org> wrote: > > > On Thu, Oct 31, 2024 at 09:19:51AM +0100, Hans Holmberg wrote: > > > > No. The meta data IO is just 0.1% of all writes, so that we use a > > > > separate device for that in the benchmark really does not matter. > > > > > > It's very little spatially, but they overwrite differently than other > > > data, creating many small holes in large erase blocks. > > > > I don't really get how this could influence anything significantly.(If at all). > > Fill your filesystem to near capacity, then continue using it for a few > months. While the filesystem will report some available space, there > may not be many good blocks available to erase. Maybe. For *this* benchmark workload, the metadata io is such a tiny fraction so I doubt the impact on wa could be measured. I completely agree it's a good idea to separate metadata from data blocks in general. It is actually a good reason for letting the file system control write stream allocation for all blocks :) > > I believe it would be worthwhile to prototype active fdp data > > placement in xfs and evaluate it. Happy to help out with that. > > When are we allowed to conclude evaluation? We have benefits my > customers want on well tested kernels, and wish to proceed now. Christoph has now wired up prototype support for FDP on top of the xfs-rt-zoned work + this patch set, and I have had time to look over it and started doing some testing on HW. In addition to the FDP support, metadata can also be stored on the same block device as the data. Now that all placement handles are available, we can use the full data separation capabilities of the underlying storage, so that's good. We can map out the placement handles to different write streams much like we assign open zones for zoned storage and this opens up for supporting data placement heuristics for a wider range use cases (not just the RocksDB use case discussed here). The big pieces that are missing from the FDP plumbing as I see it is the ability to read reclaim unit size and syncing up the remaining capacity of the placement units with the file system allocation groups, but I guess that can be added later. I've started benchmarking on the hardware I have at hand, iterating on a good workload configuration. It will take some time to get to some robust write amp measurements since the drives are very big and require a painfully long warmup time.
On Tue, Nov 05, 2024 at 04:50:14PM +0100, Christoph Hellwig wrote: > I've pushed my branch that tries to make this work with the XFS > data separation here: > > http://git.infradead.org/?p=users/hch/xfs.git;a=shortlog;h=refs/heads/xfs-zoned-streams > > This is basically my current WIP xfs zoned (aka always write out place) > work optimistically destined for 6.14 + the patch set in this thread + > a little fix to make it work for nvme-multipath plus the tiny patch to > wire it up. > > The good news is that the API from Keith mostly works. I don't really > know how to cope with the streams per partition bitmap, and I suspect > this will need to be dealt with a bit better. One option might be > to always have a bitmap, which would also support discontiguous > write stream numbers as actually supported by the underlying NVMe > implementation, another option would be to always map to consecutive > numbers. Thanks for sharing that. Seeing the code makes it much easier to understand where you're trying to steer this. I'll take a look and probably have some feedback after a couple days going through it.
On 10/29/24 9:19 AM, Keith Busch wrote:
> diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
> index 0247452837830..6e1985d3b306c 100644
> --- a/include/uapi/linux/io_uring.h
> +++ b/include/uapi/linux/io_uring.h
> @@ -92,6 +92,10 @@ struct io_uring_sqe {
> 			__u16	addr_len;
> 			__u16	__pad3[1];
> 		};
> +		struct {
> +			__u16	write_hint;
> +			__u16	__pad4[1];
> +		};

Might make more sense to have this overlap further down, with the passthrough command. That'd put it solidly out of anything that isn't passthrough or needs addr3.

> diff --git a/io_uring/rw.c b/io_uring/rw.c
> index 7ce1cbc048faf..b5dea58356d93 100644
> --- a/io_uring/rw.c
> +++ b/io_uring/rw.c
> @@ -279,7 +279,8 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
> 		rw->kiocb.ki_ioprio = get_current_ioprio();
> 	}
> 	rw->kiocb.dio_complete = NULL;
> -
> +	if (ddir == ITER_SOURCE)
> +		rw->kiocb.ki_write_hint = READ_ONCE(sqe->write_hint);
> 	rw->addr = READ_ONCE(sqe->addr);
> 	rw->len = READ_ONCE(sqe->len);
> 	rw->flags = READ_ONCE(sqe->rw_flags);

Can't we just read it unconditionally? I know it's a write hint, hence why checking for ITER_SOURCE, but if we can just set it regardless, then we don't need to branch around that.
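A hedged usage sketch of the field under review: sqe->write_hint only exists with this series applied, so a stock liburing/kernel combination will not honor it; everything else below is plain liburing.

#include <liburing.h>

/* Queue a write with a per-I/O lifetime hint using the sqe field
 * proposed in this series; returns the number of submitted sqes or a
 * negative error. */
static int queue_hinted_write(struct io_uring *ring, int fd,
			      const void *buf, unsigned int len,
			      __u64 offset, __u16 hint)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -EBUSY;
	io_uring_prep_write(sqe, fd, buf, len, offset);
	sqe->write_hint = hint;		/* field added by this series */
	return io_uring_submit(ring);
}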
On Tue, Nov 05, 2024 at 04:50:14PM +0100, Christoph Hellwig wrote: > I've pushed my branch that tries to make this work with the XFS > data separation here: > > http://git.infradead.org/?p=users/hch/xfs.git;a=shortlog;h=refs/heads/xfs-zoned-streams The zone block support all looks pretty neat, but I think you're making this harder than necessary to support streams. You don't need to treat these like a sequential write device. The controller side does its own garbage collection, so no need to duplicate the effort on the host. And it looks like the host side gc potentially merges multiple streams into a single gc stream, so that's probably not desirable.
On Thu, Nov 07, 2024 at 01:36:35PM -0700, Keith Busch wrote: > The zone block support all looks pretty neat, but I think you're making > this harder than necessary to support streams. You don't need to treat > these like a sequential write device. The controller side does its own > garbage collection, so no need to duplicate the effort on the host. And > it looks like the host side gc potentially merges multiple streams into > a single gc stream, so that's probably not desirable. We're not really duplicating much. Writing sequential is pretty easy, and tracking reclaim units separately means you need another tracking data structure, and either that or the LBA one is always going to be badly fragmented if they aren't the same.
On Fri, Nov 08, 2024 at 03:18:52PM +0100, Christoph Hellwig wrote: > On Thu, Nov 07, 2024 at 01:36:35PM -0700, Keith Busch wrote: > > The zone block support all looks pretty neat, but I think you're making > > this harder than necessary to support streams. You don't need to treat > > these like a sequential write device. The controller side does its own > > garbage collection, so no need to duplicate the effort on the host. And > > it looks like the host side gc potentially merges multiple streams into > > a single gc stream, so that's probably not desirable. > > We're not really duplicating much. Writing sequential is pretty easy, > and tracking reclaim units separately means you need another tracking > data structure, and either that or the LBA one is always going to be > badly fragmented if they aren't the same. You're getting fragmentation anyway, which is why you had to implement gc. You're just shifting who gets to deal with it from the controller to the host. The host is further from the media, so you're starting from a disadvantage. The host gc implementation would have to be quite a bit better to justify the link and memory usage necessary for the copies (...queue a copy-offload discussion? oom?). This xfs implementation also has logic to recover from a power fail. The device already does that if you use the LBA abstraction instead of tracking sequential write pointers and free blocks. I think you are underestimating the duplication of efforts going on here.
On Fri, Nov 08, 2024 at 08:51:31AM -0700, Keith Busch wrote: > On Fri, Nov 08, 2024 at 03:18:52PM +0100, Christoph Hellwig wrote: > > We're not really duplicating much. Writing sequential is pretty easy, > > and tracking reclaim units separately means you need another tracking > > data structure, and either that or the LBA one is always going to be > > badly fragmented if they aren't the same. > > You're getting fragmentation anyway, which is why you had to implement > gc. You're just shifting who gets to deal with it from the controller to > the host. The host is further from the media, so you're starting from a > disadvantage. The host gc implementation would have to be quite a bit > better to justify the link and memory usage necessary for the copies > (...queue a copy-offload discussion? oom?). But the filesystem knows which blocks are actually in use. Sending TRIM/DISCARD information to the drive at block-level granularity hasn't worked out so well in the past. So the drive is the one at a disadvantage because it has to copy blocks which aren't actually in use. I like the idea of using copy-offload though.
> -----Original Message-----
> From: Matthew Wilcox <willy@infradead.org>
> Sent: Friday, November 8, 2024 5:55 PM
> To: Keith Busch <kbusch@kernel.org>
> Cc: Christoph Hellwig <hch@lst.de>; Keith Busch <kbusch@meta.com>; linux-block@vger.kernel.org; linux-nvme@lists.infradead.org; linux-scsi@vger.kernel.org; io-uring@vger.kernel.org; linux-fsdevel@vger.kernel.org; joshi.k@samsung.com; Javier Gonzalez <javier.gonz@samsung.com>; bvanassche@acm.org
> Subject: Re: [PATCHv10 0/9] write hints with nvme fdp, scsi streams
>
> On Fri, Nov 08, 2024 at 08:51:31AM -0700, Keith Busch wrote:
> > On Fri, Nov 08, 2024 at 03:18:52PM +0100, Christoph Hellwig wrote:
> > > We're not really duplicating much. Writing sequential is pretty easy, and tracking reclaim units separately means you need another tracking data structure, and either that or the LBA one is always going to be badly fragmented if they aren't the same.
> >
> > You're getting fragmentation anyway, which is why you had to implement gc. You're just shifting who gets to deal with it from the controller to the host. The host is further from the media, so you're starting from a disadvantage. The host gc implementation would have to be quite a bit better to justify the link and memory usage necessary for the copies (...queue a copy-offload discussion? oom?).
>
> But the filesystem knows which blocks are actually in use. Sending TRIM/DISCARD information to the drive at block-level granularity hasn't worked out so well in the past. So the drive is the one at a disadvantage because it has to copy blocks which aren't actually in use.

It is true that trim has not been great. I would say that at least enterprise SSDs have fixed this in general. For FDP, DSM Deallocate is respected, which provides a good "erase" interface to the host. It is true though that this is not properly described in the spec and we should fix it.

> I like the idea of using copy-offload though.

We have been iterating on the patches for years, but it is unfortunately one of these series that go in circles forever. I don't think it is due to any specific problem, but mostly due to unaligned requests from different folks reviewing. Last time I talked to Damien he asked me to send the patches again; we have not followed through due to bandwidth.

If there is an interest, we can re-spin this again...
On 11/8/24 9:43 AM, Javier Gonzalez wrote:
> If there is an interest, we can re-spin this again...
I'm interested. Work is ongoing in JEDEC on support for copy offloading
for UFS devices. This work involves standardizing which SCSI copy
offloading features should be supported and which features are not
required. Implementations are expected to be available soon.
Thanks,
Bart.
On Fri, Nov 08, 2024 at 08:51:31AM -0700, Keith Busch wrote:
> You're getting fragmentation anyway, which is why you had to implement gc.

A general purpose file system always has fragmentation of some kind, even if it manages to avoid it for certain workloads with cooperative applications. If there was magic pixie dust to ensure freespace never fragments, file system development would be a solved problem :)

> You're just shifting who gets to deal with it from the controller to the host. The host is further from the media, so you're starting from a disadvantage.

And the controller is further from the application and misses a lot of information like, say, the file structure, so it inherently is at a disadvantage.

> The host gc implementation would have to be quite a bit better to justify the link and memory usage necessary for the copies

That assumes you still have to do device GC. If you do align to the zone/erase (super)block/reclaim unit boundaries you don't.

> This xfs implementation also has logic to recover from a power fail. The device already does that if you use the LBA abstraction instead of tracking sequential write pointers and free blocks.

Every file system has logic to recover from a power fail. I'm not sure what kind of discussion you're trying to kick off here.

> I think you are underestimating the duplication of efforts going on here.

I'm still not sure what discussion you're trying to start here. There is very little work in here, and it is work required to support SMR drives. It turns out that for a fair amount of workloads it actually works really well on SSDs as well, beating everything else we've tried.
On Fri, Nov 08, 2024 at 04:54:34PM +0000, Matthew Wilcox wrote:
> I like the idea of using copy-offload though.
FYI, the XFS GC code is written so that copy offload can be easily
plugged into it. We'll have to see how beneficial it actually is,
but at least it should give us a good test platform.
On Fri, Nov 08, 2024 at 05:43:44PM +0000, Javier Gonzalez wrote: > We have been iterating in the patches for years, but it is unfortunately > one of these series that go in circles forever. I don't think it is due > to any specific problem, but mostly due to unaligned requests form > different folks reviewing. Last time I talked to Damien he asked me to > send the patches again; we have not followed through due to bandwidth. A big problem is that it actually lacks a killer use case. If you'd actually manage to plug it into an in-kernel user and show a real speedup people might actually be interested in it and help optimizing for it.
On 11.11.2024 07:51, Christoph Hellwig wrote:
> On Fri, Nov 08, 2024 at 05:43:44PM +0000, Javier Gonzalez wrote:
>> We have been iterating on the patches for years, but it is unfortunately one of these series that go in circles forever. I don't think it is due to any specific problem, but mostly due to unaligned requests from different folks reviewing. Last time I talked to Damien he asked me to send the patches again; we have not followed through due to bandwidth.
>
> A big problem is that it actually lacks a killer use case. If you'd actually manage to plug it into an in-kernel user and show a real speedup people might actually be interested in it and help optimizing for it.

Agree. Initially it was all about ZNS. Seems ZUFS can use it.

Then we saw good results in offload to the target on NVMe-oF, similar to copy_file_range, but that does not seem to be enough. You seem to indicate too that XFS can use it for GC.

We can try putting a new series out to see where we are...
On 08.11.2024 10:51, Bart Van Assche wrote: >On 11/8/24 9:43 AM, Javier Gonzalez wrote: >>If there is an interest, we can re-spin this again... > >I'm interested. Work is ongoing in JEDEC on support for copy offloading >for UFS devices. This work involves standardizing which SCSI copy >offloading features should be supported and which features are not >required. Implementations are expected to be available soon. > Do you have any specific blockers on the last series? I know you have left comments in many of the patches already, but I think we are all a bit confused on where we are ATM.
On 11.11.24 10:31, Javier Gonzalez wrote: > On 11.11.2024 07:51, Christoph Hellwig wrote: >> On Fri, Nov 08, 2024 at 05:43:44PM +0000, Javier Gonzalez wrote: >>> We have been iterating in the patches for years, but it is unfortunately >>> one of these series that go in circles forever. I don't think it is due >>> to any specific problem, but mostly due to unaligned requests form >>> different folks reviewing. Last time I talked to Damien he asked me to >>> send the patches again; we have not followed through due to bandwidth. >> >> A big problem is that it actually lacks a killer use case. If you'd >> actually manage to plug it into an in-kernel user and show a real >> speedup people might actually be interested in it and help optimizing >> for it. >> > > Agree. Initially it was all about ZNS. Seems ZUFS can use it. > > Then we saw good results in offload to target on NVMe-OF, similar to > copy_file_range, but that does not seem to be enough. You seem to > indicacte too that XFS can use it for GC. > > We can try putting a new series out to see where we are... I don't want to sound like a broken record, but I've said more than once, that btrfs (regardless of zoned or non-zoned) would be very interested in that as well and I'd be willing to help with the code or even do it myself once the block bits are in. But apparently my voice doesn't count here
On 11.11.2024 09:37, Johannes Thumshirn wrote: >On 11.11.24 10:31, Javier Gonzalez wrote: >> On 11.11.2024 07:51, Christoph Hellwig wrote: >>> On Fri, Nov 08, 2024 at 05:43:44PM +0000, Javier Gonzalez wrote: >>>> We have been iterating in the patches for years, but it is unfortunately >>>> one of these series that go in circles forever. I don't think it is due >>>> to any specific problem, but mostly due to unaligned requests form >>>> different folks reviewing. Last time I talked to Damien he asked me to >>>> send the patches again; we have not followed through due to bandwidth. >>> >>> A big problem is that it actually lacks a killer use case. If you'd >>> actually manage to plug it into an in-kernel user and show a real >>> speedup people might actually be interested in it and help optimizing >>> for it. >>> >> >> Agree. Initially it was all about ZNS. Seems ZUFS can use it. >> >> Then we saw good results in offload to target on NVMe-OF, similar to >> copy_file_range, but that does not seem to be enough. You seem to >> indicacte too that XFS can use it for GC. >> >> We can try putting a new series out to see where we are... > >I don't want to sound like a broken record, but I've said more than >once, that btrfs (regardless of zoned or non-zoned) would be very >interested in that as well and I'd be willing to help with the code or >even do it myself once the block bits are in. > >But apparently my voice doesn't count here You are right. Sorry I forgot. Would this be through copy_file_range or something different?
On Mon, Nov 11, 2024 at 10:41:33AM +0100, Javier Gonzalez wrote: > You are right. Sorry I forgot. > > Would this be through copy_file_range or something different? Just like for f2fs, nilfs2, or the upcoming zoned xfs the prime user would be the file system GC code.
On 11.11.24 10:41, Javier Gonzalez wrote: > On 11.11.2024 09:37, Johannes Thumshirn wrote: >> On 11.11.24 10:31, Javier Gonzalez wrote: >>> On 11.11.2024 07:51, Christoph Hellwig wrote: >>>> On Fri, Nov 08, 2024 at 05:43:44PM +0000, Javier Gonzalez wrote: >>>>> We have been iterating in the patches for years, but it is unfortunately >>>>> one of these series that go in circles forever. I don't think it is due >>>>> to any specific problem, but mostly due to unaligned requests form >>>>> different folks reviewing. Last time I talked to Damien he asked me to >>>>> send the patches again; we have not followed through due to bandwidth. >>>> >>>> A big problem is that it actually lacks a killer use case. If you'd >>>> actually manage to plug it into an in-kernel user and show a real >>>> speedup people might actually be interested in it and help optimizing >>>> for it. >>>> >>> >>> Agree. Initially it was all about ZNS. Seems ZUFS can use it. >>> >>> Then we saw good results in offload to target on NVMe-OF, similar to >>> copy_file_range, but that does not seem to be enough. You seem to >>> indicacte too that XFS can use it for GC. >>> >>> We can try putting a new series out to see where we are... >> >> I don't want to sound like a broken record, but I've said more than >> once, that btrfs (regardless of zoned or non-zoned) would be very >> interested in that as well and I'd be willing to help with the code or >> even do it myself once the block bits are in. >> >> But apparently my voice doesn't count here > > You are right. Sorry I forgot. > > Would this be through copy_file_range or something different? > Unfortunately not, brtfs' reclaim/balance path is a wrapper on top of buffered read and write (plus some extra things). _BUT_ this makes it possible to switch the read/write part and do copy offload (where possible).
On 11.11.2024 09:43, Johannes Thumshirn wrote: >On 11.11.24 10:41, Javier Gonzalez wrote: >> On 11.11.2024 09:37, Johannes Thumshirn wrote: >>> On 11.11.24 10:31, Javier Gonzalez wrote: >>>> On 11.11.2024 07:51, Christoph Hellwig wrote: >>>>> On Fri, Nov 08, 2024 at 05:43:44PM +0000, Javier Gonzalez wrote: >>>>>> We have been iterating in the patches for years, but it is unfortunately >>>>>> one of these series that go in circles forever. I don't think it is due >>>>>> to any specific problem, but mostly due to unaligned requests form >>>>>> different folks reviewing. Last time I talked to Damien he asked me to >>>>>> send the patches again; we have not followed through due to bandwidth. >>>>> >>>>> A big problem is that it actually lacks a killer use case. If you'd >>>>> actually manage to plug it into an in-kernel user and show a real >>>>> speedup people might actually be interested in it and help optimizing >>>>> for it. >>>>> >>>> >>>> Agree. Initially it was all about ZNS. Seems ZUFS can use it. >>>> >>>> Then we saw good results in offload to target on NVMe-OF, similar to >>>> copy_file_range, but that does not seem to be enough. You seem to >>>> indicacte too that XFS can use it for GC. >>>> >>>> We can try putting a new series out to see where we are... >>> >>> I don't want to sound like a broken record, but I've said more than >>> once, that btrfs (regardless of zoned or non-zoned) would be very >>> interested in that as well and I'd be willing to help with the code or >>> even do it myself once the block bits are in. >>> >>> But apparently my voice doesn't count here >> >> You are right. Sorry I forgot. >> >> Would this be through copy_file_range or something different? >> > >Unfortunately not, brtfs' reclaim/balance path is a wrapper on top of >buffered read and write (plus some extra things). _BUT_ this makes it >possible to switch the read/write part and do copy offload (where possible). On 11.11.2024 10:42, hch wrote: >On Mon, Nov 11, 2024 at 10:41:33AM +0100, Javier Gonzalez wrote: >> You are right. Sorry I forgot. >> >> Would this be through copy_file_range or something different? > >Just like for f2fs, nilfs2, or the upcoming zoned xfs the prime user >would be the file system GC code. Replying to both. Thanks. Makes sense. Now that we can talke a look at your branch, we can think how this would look like.
On 11/11/24 1:31 AM, Javier Gonzalez wrote: > On 08.11.2024 10:51, Bart Van Assche wrote: >> On 11/8/24 9:43 AM, Javier Gonzalez wrote: >>> If there is an interest, we can re-spin this again... >> >> I'm interested. Work is ongoing in JEDEC on support for copy offloading >> for UFS devices. This work involves standardizing which SCSI copy >> offloading features should be supported and which features are not >> required. Implementations are expected to be available soon. > > Do you have any specific blockers on the last series? I know you have > left comments in many of the patches already, but I think we are all a > bit confused on where we are ATM. Nobody replied to this question that was raised 4 months ago: https://lore.kernel.org/linux-block/4c7f30af-9fbc-4f19-8f48-ad741aa557c4@acm.org/ I think we need to agree about the answer to that question before we can continue with implementing copy offloading. Thanks, Bart.
On 11/11/24 09:45AM, Bart Van Assche wrote: >On 11/11/24 1:31 AM, Javier Gonzalez wrote: >>On 08.11.2024 10:51, Bart Van Assche wrote: >>>On 11/8/24 9:43 AM, Javier Gonzalez wrote: >>>>If there is an interest, we can re-spin this again... >>> >>>I'm interested. Work is ongoing in JEDEC on support for copy offloading >>>for UFS devices. This work involves standardizing which SCSI copy >>>offloading features should be supported and which features are not >>>required. Implementations are expected to be available soon. >> >>Do you have any specific blockers on the last series? I know you have >>left comments in many of the patches already, but I think we are all a >>bit confused on where we are ATM. > >Nobody replied to this question that was raised 4 months ago: >https://lore.kernel.org/linux-block/4c7f30af-9fbc-4f19-8f48-ad741aa557c4@acm.org/ > >I think we need to agree about the answer to that question before we can >continue with implementing copy offloading. > Yes, I feel the same. The blocker with copy has been how we should plumb things in the block layer. Here are a couple of approaches we tried in the past[1], restated for reference: 1. Payload based approach: a. Based on Mikulas' patch, a common payload is used for both the source and destination bio. b. Initially we send the source bio; upon reaching the driver we update the payload and complete the bio. c. Send the destination bio; in the driver layer we recover the source info from the payload and send the copy command to the device. Drawback: the request payload contains IO information rather than data. Based on past experience, Christoph and Bart suggested this is not a good way forward. The alternate suggestion from Christoph was to use separate BIOs for source and destination and match them using a token/id. As Bart pointed out, I find it hard to match them when an IO split happens. 2. Plug based approach: a. Take a plug, send the destination bio, form a request and wait for the source bio. b. Send the source bio and merge it with the destination bio. c. Upon release of the plug, send the request down to the driver. Drawback: doesn't work for stacked devices which have async submission. Bart suggested this is not a good solution overall. The alternate suggestion was to use a list based approach, but we observed lifetime management problems, especially in failure handling. 3. Single bio approach: a. Use a single bio to represent both source and destination info. b. Use abnormal IO handling similar to discard. Drawback: Christoph pointed out that this has the issue of the payload containing information for both the IO stack and the wire. I am really torn on how to proceed further. -- Nitesh Shetty [1] https://lore.kernel.org/linux-block/20240624103212.2donuac5apwwqaor@nj.shetty@samsung.com/
On 11/18/24 6:03 PM, Martin K. Petersen wrote: > In my experience the payload-based approach was what made things work. I > tried many things before settling on that. Also note that to support > token-based SCSI devices, you inevitably need to separate the > read/copy_in operation from the write/copy_out ditto and carry the token > in the payload. > > For "single copy command" devices, you can just synthesize the token in > the driver. Although I don't really know what the point of the token is > in that case because as far as I'm concerned, the only interesting > information is that the read/copy_in operation made it down the stack > without being split. Hi Martin, There are some strong arguments in this thread from May 2024 in favor of representing the entire copy operation as a single REQ_OP_ operation: https://lore.kernel.org/linux-block/20240520102033.9361-1-nj.shetty@samsung.com/ Token-based copy offloading (called ODX by Microsoft) could be implemented by maintaining a state machine in the SCSI sd driver and using a single block layer request to submit the following SCSI commands: * POPULATE TOKEN * RECEIVE ROD TOKEN INFORMATION * WRITE USING TOKEN I'm assuming that the IMMED bit will be set to zero in the WRITE USING TOKEN command. Otherwise one or more additional RECEIVE ROD TOKEN INFORMATION commands would be required to poll for the WRITE USING TOKEN completion status. I guess that the block layer maintainer wouldn't be happy if all block drivers had to deal with three or four phases for copy offloading just because ODX is this complicated. Thanks, Bart.
Bart, > There are some strong arguments in this thread from May 2024 in favor of > representing the entire copy operation as a single REQ_OP_ operation: > https://lore.kernel.org/linux-block/20240520102033.9361-1-nj.shetty@samsung.com/ As has been discussed many times, a copy operation is semantically a read operation followed by a write operation. And, based on my experience implementing support for both types of copy offload in Linux, what made things elegant was treating the operation as a read followed by a write throughout the stack. Exactly like the token-based offload specification describes. > Token-based copy offloading (called ODX by Microsoft) could be > implemented by maintaining a state machine in the SCSI sd driver I suspect the SCSI maintainer would object strongly to the idea of maintaining cross-device copy offload state and associated object lifetime issues in the sd driver. > I'm assuming that the IMMED bit will be set to zero in the WRITE USING > TOKEN command. Otherwise one or more additional RECEIVE ROD TOKEN > INFORMATION commands would be required to poll for the WRITE USING TOKEN > completion status. What would the benefit of making WRITE USING TOKEN be a background operation? That seems like a completely unnecessary complication. > I guess that the block layer maintainer wouldn't be happy if all block > drivers would have to deal with three or four phases for copy > offloading just because ODX is this complicated. Last I looked, EXTENDED COPY consumed something like 70 pages in the spec. Token-based copy is trivially simple and elegant by comparison.
On 11/26/24 6:54 PM, Martin K. Petersen wrote: > Bart wrote: >> There are some strong arguments in this thread from May 2024 in favor of >> representing the entire copy operation as a single REQ_OP_ operation: >> https://lore.kernel.org/linux-block/20240520102033.9361-1-nj.shetty@samsung.com/ > > As has been discussed many times, a copy operation is semantically a > read operation followed by a write operation. And, based on my > experience implementing support for both types of copy offload in Linux, > what made things elegant was treating the operation as a read followed > by a write throughout the stack. Exactly like the token-based offload > specification describes. Submitting a copy operation as two bios or two requests means that there is a risk that one of the two operations never reaches the block driver at the bottom of the storage stack and hence that a deadlock occurs. I prefer not to introduce any mechanisms that can cause a deadlock. As one can see here, Damien Le Moal and Keith Busch both prefer to submit copy operations as a single operation: Keith Busch, Re: [PATCH v20 02/12] Add infrastructure for copy offload in block and request layer, linux-block mailing list, 2024-06-24 (https://lore.kernel.org/all/Znn6C-C73Tps3WJk@kbusch-mbp.dhcp.thefacebook.com/). >> Token-based copy offloading (called ODX by Microsoft) could be >> implemented by maintaining a state machine in the SCSI sd driver > > I suspect the SCSI maintainer would object strongly to the idea of > maintaining cross-device copy offload state and associated object > lifetime issues in the sd driver. Such information wouldn't have to be maintained inside the sd driver. A new kernel module could be introduced that tracks the state of copy operations and that interacts with the sd driver. >> I'm assuming that the IMMED bit will be set to zero in the WRITE USING >> TOKEN command. Otherwise one or more additional RECEIVE ROD TOKEN >> INFORMATION commands would be required to poll for the WRITE USING TOKEN >> completion status. > > What would the benefit of making WRITE USING TOKEN be a background > operation? That seems like a completely unnecessary complication. If a single copy operation takes significantly more time than the time required to switch between power states, power can be saved by using IMMED=1. Mechanisms like run-time power management (RPM) or the UFS host controller auto-hibernation mechanism can only be activated if no commands are in progress. With IMMED=0, the link between the host and the storage device will remain powered as long as the copy operation is in progress. With IMMED=1, the link between the host and the storage device can be powered down after the copy operation has been submitted until the host decides to check whether or not the copy operation has completed. >> I guess that the block layer maintainer wouldn't be happy if all block >> drivers would have to deal with three or four phases for copy >> offloading just because ODX is this complicated. > > Last I looked, EXTENDED COPY consumed something like 70 pages in the > spec. Token-based copy is trivially simple and elegant by comparison. I don't know of any storage device vendor who has implemented all EXTENDED COPY features that have been standardized. 
Assuming that 50 lines of code fit on a single page, here is an example of an EXTENDED COPY implementation that can be printed on 21 pages of paper: $ wc -l drivers/target/target_core_xcopy.c 1041 $ echo $(((1041 + 49) / 50)) 21 The advantages of EXTENDED COPY over ODX are as follows: - EXTENDED COPY is a single SCSI command and hence better suited for devices with a limited queue depth. While the UFS 3.0 standard restricts the queue depth to 32, most UFS 4.0 devices support a queue depth of 64. - The latency of setting up a copy command with EXTENDED COPY is lower since only a single command has to be sent to the device. ODX requires three round-trips to the device (assuming IMMED=0). - EXTENDED COPY requires less memory in storage devices. Each ODX token occupies some memory and the rules around token lifetimes are nontrivial. Thanks, Bart.
Bart, > Submitting a copy operation as two bios or two requests means that > there is a risk that one of the two operations never reaches the block > driver at the bottom of the storage stack and hence that a deadlock > occurs. I prefer not to introduce any mechanisms that can cause a > deadlock. How do you copy a block range without offload? You perform a READ to read the data into memory. And once the READ completes, you do a WRITE of the data to the new location. Token-based copy offload works exactly the same way. You do a POPULATE TOKEN which is identical to a READ except you get a cookie instead of the actual data. And then once you have the cookie, you perform a WRITE USING TOKEN to perform the write operation. Semantically, it's exactly the same as a normal copy except for the lack of data movement. That's the whole point! Once I had support for token-based copy offload working, it became clear to me that this approach is much simpler than pointer matching, bio pairs, etc. The REQ_OP_COPY_IN operation and the REQ_OP_COPY_OUT operation are never in flight at the same time. There are no synchronization hassles, no lifetimes, no lookup tables in the sd driver, no nonsense. Semantically, it's a read followed by a write. For devices that implement single-command copy offload, the REQ_OP_COPY_IN operation only serves as a validation that no splitting took place. Once the bio reaches the ULD, the I/O is completed without ever sending a command to the device. blk-lib then issues a REQ_OP_COPY_OUT which gets turned into EXTENDED COPY or NVMe Copy and sent to the destination device. Aside from making things trivially simple, the COPY_IN/COPY_OUT semantic is a *requirement* for token-based offload devices. Why would we even consider having two incompatible sets of copy offload semantics coexist in the block layer?
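To make the sequencing Martin describes concrete, here is a minimal sketch of what a submitter following that model might look like. The REQ_OP_COPY_IN/REQ_OP_COPY_OUT opcodes are the hypothetical ones being discussed in this thread (they are not in mainline), and carrying the 512-byte cookie in a page attached to both bios is an assumption for illustration, not an agreed interface; for single-command devices the buffer would simply come back untouched.

    #include <linux/bio.h>
    #include <linux/blkdev.h>

    /*
     * Hypothetical sketch only.  How the copied length (and possibly several
     * source ranges) is encoded is exactly the open question in this thread
     * and is deliberately left out here.
     */
    static int copy_offload_sketch(struct block_device *bdev, sector_t src,
                                   sector_t dst)
    {
            struct page *cookie = alloc_page(GFP_KERNEL); /* 512-byte token lives here */
            struct bio *bio;
            int ret;

            if (!cookie)
                    return -ENOMEM;

            /* Phase 1: semantically a read; token devices return the cookie. */
            bio = bio_alloc(bdev, 1, REQ_OP_COPY_IN, GFP_KERNEL);
            bio->bi_iter.bi_sector = src;
            __bio_add_page(bio, cookie, SECTOR_SIZE, 0);
            ret = submit_bio_wait(bio);
            bio_put(bio);
            if (ret)
                    goto out; /* fall back to a regular read followed by a write */

            /* Phase 2: semantically a write; the same buffer carries the cookie. */
            bio = bio_alloc(bdev, 1, REQ_OP_COPY_OUT, GFP_KERNEL);
            bio->bi_iter.bi_sector = dst;
            __bio_add_page(bio, cookie, SECTOR_SIZE, 0);
            ret = submit_bio_wait(bio);
            bio_put(bio);
    out:
            __free_page(cookie);
            return ret;
    }

The two operations are never in flight at the same time, so the only state is the cookie buffer owned by the submitter, matching the "read followed by a write" semantics described above.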
On 11/27/24 12:14 PM, Martin K. Petersen wrote: > Once I had support for token-based copy offload working, it became clear > to me that this approach is much simpler than pointer matching, bio > pairs, etc. The REQ_OP_COPY_IN operation and the REQ_OP_COPY_OUT > operation are never in flight at the same time. There are no > synchronization hassles, no lifetimes, no lookup tables in the sd > driver, no nonsense. Semantically, it's a read followed by a write. What if the source LBA range does not require splitting but the destination LBA range requires splitting, e.g. because it crosses a chunk_sectors boundary? Will the REQ_OP_COPY_IN operation succeed in this case and the REQ_OP_COPY_OUT operation fail? Does this mean that a third operation is needed to cancel REQ_OP_COPY_IN operations if the REQ_OP_COPY_OUT operation fails? Additionally, how to handle bugs in REQ_OP_COPY_* submitters where a large number of REQ_OP_COPY_IN operations is submitted without corresponding REQ_OP_COPY_OUT operation? Is perhaps a mechanism required to discard unmatched REQ_OP_COPY_IN operations after a certain time? > Aside from making things trivially simple, the COPY_IN/COPY_OUT semantic > is a *requirement* for token-based offload devices. Hmm ... we may each have a different opinion about whether or not the COPY_IN/COPY_OUT semantics are a requirement for token-based copy offloading. Additionally, I'm not convinced that implementing COPY_IN/COPY_OUT for ODX devices is that simple. The COPY_IN and COPY_OUT operations have to be translated into three SCSI commands, isn't it? I'm referring to the POPULATE TOKEN, RECEIVE ROD TOKEN INFORMATION and WRITE USING TOKEN commands. What is your opinion about how to translate the two block layer operations into these three SCSI commands? > Why would we even consider having two incompatible sets of copy > offload semantics coexist in the block layer? I am not aware of any proposal to implement two sets of copy operations in the block layer. All proposals I have seen so far involve adding a single set of copy operations to the block layer. Opinions differ however about whether to add a single copy operation primitive or separate IN and OUT primitives. Thanks, Bart.
Bart, > What if the source LBA range does not require splitting but the > destination LBA range requires splitting, e.g. because it crosses a > chunk_sectors boundary? Will the REQ_OP_COPY_IN operation succeed in > this case and the REQ_OP_COPY_OUT operation fail? Yes. I experimented with approaching splitting in an iterative fashion. And thus, if there was a split halfway through the COPY_IN I/O, we'd issue a corresponding COPY_OUT up to the split point and hope that the write subsequently didn't need a split. And then deal with the next segment. However, given that copy offload offers diminishing returns for small I/Os, it was not worth the hassle for the devices I used for development. It was cleaner and faster to just fall back to regular read/write when a split was required. > Does this mean that a third operation is needed to cancel > REQ_OP_COPY_IN operations if the REQ_OP_COPY_OUT operation fails? No. The device times out the token. > Additionally, how to handle bugs in REQ_OP_COPY_* submitters where a > large number of REQ_OP_COPY_IN operations is submitted without > corresponding REQ_OP_COPY_OUT operation? Is perhaps a mechanism > required to discard unmatched REQ_OP_COPY_IN operations after a > certain time? See above. For your EXTENDED COPY use case there is no token and thus the COPY_IN completes immediately. And for the token case, if you populate a million tokens and don't use them before they time out, it sounds like your submitting code is badly broken. But it doesn't matter because there are no I/Os in flight and thus nothing to discard. > Hmm ... we may each have a different opinion about whether or not the > COPY_IN/COPY_OUT semantics are a requirement for token-based copy > offloading. Maybe. But you'll have a hard time convincing me to add any kind of state machine or bio matching magic to the SCSI stack when the simplest solution is to treat copying like a read followed by a write. There is no concurrency, no kernel state, no dependency between two commands, nor two scsi_disk/scsi_device object lifetimes to manage. > Additionally, I'm not convinced that implementing COPY_IN/COPY_OUT for > ODX devices is that simple. The COPY_IN and COPY_OUT operations have > to be translated into three SCSI commands, isn't it? I'm referring to > the POPULATE TOKEN, RECEIVE ROD TOKEN INFORMATION and WRITE USING > TOKEN commands. What is your opinion about how to translate the two > block layer operations into these three SCSI commands? COPY_IN is translated to a NOP for devices implementing EXTENDED COPY and a POPULATE TOKEN for devices using tokens. COPY_OUT is translated to an EXTENDED COPY (or NVMe Copy) for devices using a single command approach and WRITE USING TOKEN for devices using tokens. There is no need for RECEIVE ROD TOKEN INFORMATION. I am not aware of UFS devices using the token-based approach. And for EXTENDED COPY there is only a single command sent to the device. If you want to do power management while that command is being processed, please deal with that in UFS. The block layer doesn't deal with the async variants of any of the other SCSI commands either...
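As a rough illustration of that translation table, the ULD-side dispatch could look something like the sketch below. Everything in it is hypothetical: the REQ_OP_COPY_* opcodes, the use_rod_token flag and the sd_setup_*() helpers are made-up names for this discussion, not existing sd driver code.

    /* Hypothetical helpers that would build the actual CDBs. */
    static blk_status_t sd_setup_populate_token(struct scsi_cmnd *cmd);
    static blk_status_t sd_setup_write_using_token(struct scsi_cmnd *cmd);
    static blk_status_t sd_setup_extended_copy(struct scsi_cmnd *cmd);

    static blk_status_t sd_setup_copy_cmnd(struct scsi_cmnd *cmd)
    {
            struct request *rq = scsi_cmd_to_rq(cmd);
            struct scsi_disk *sdkp = scsi_disk(rq->q->disk);

            switch (req_op(rq)) {
            case REQ_OP_COPY_IN:
                    if (!sdkp->use_rod_token) {
                            /*
                             * EXTENDED COPY / NVMe Copy style device: nothing
                             * to send for the IN phase; the request would be
                             * completed without issuing a command (glossed
                             * over in this sketch).
                             */
                            return BLK_STS_OK;
                    }
                    /* Token device: POPULATE TOKEN; token ends up in the bio's buffer. */
                    return sd_setup_populate_token(cmd);
            case REQ_OP_COPY_OUT:
                    if (!sdkp->use_rod_token)
                            return sd_setup_extended_copy(cmd); /* single-command offload */
                    return sd_setup_write_using_token(cmd);
            default:
                    return BLK_STS_NOTSUPP;
            }
    }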
On Wed, Nov 27, 2024 at 03:14:09PM -0500, Martin K. Petersen wrote: > How do you copy a block range without offload? You perform a READ to > read the data into memory. And once the READ completes, you do a WRITE > of the data to the new location. Yes. I.e. this is code that makes this pattern very clearm and for which I'd love to be able to use copy offload when available: http://git.infradead.org/?p=users/hch/xfs.git;a=blob;f=fs/xfs/xfs_zone_gc.c;h=ed8aa08b3c18d50afe79326e697d83e09542a9b6;hb=refs/heads/xfs-zoned#l820
On 11/28/24 11:09, Martin K. Petersen wrote: > > Bart, > >> What if the source LBA range does not require splitting but the >> destination LBA range requires splitting, e.g. because it crosses a >> chunk_sectors boundary? Will the REQ_OP_COPY_IN operation succeed in >> this case and the REQ_OP_COPY_OUT operation fail? > > Yes. > > I experimented with approaching splitting in an iterative fashion. And > thus, if there was a split halfway through the COPY_IN I/O, we'd issue a > corresponding COPY_OUT up to the split point and hope that the write > subsequently didn't need a split. And then deal with the next segment. > > However, given that copy offload offers diminishing returns for small > I/Os, it was not worth the hassle for the devices I used for > development. It was cleaner and faster to just fall back to regular > read/write when a split was required. > >> Does this mean that a third operation is needed to cancel >> REQ_OP_COPY_IN operations if the REQ_OP_COPY_OUT operation fails? > > No. The device times out the token. > >> Additionally, how to handle bugs in REQ_OP_COPY_* submitters where a >> large number of REQ_OP_COPY_IN operations is submitted without >> corresponding REQ_OP_COPY_OUT operation? Is perhaps a mechanism >> required to discard unmatched REQ_OP_COPY_IN operations after a >> certain time? > > See above. > > For your EXTENDED COPY use case there is no token and thus the COPY_IN > completes immediately. > > And for the token case, if you populate a million tokens and don't use > them before they time out, it sounds like your submitting code is badly > broken. But it doesn't matter because there are no I/Os in flight and > thus nothing to discard. > >> Hmm ... we may each have a different opinion about whether or not the >> COPY_IN/COPY_OUT semantics are a requirement for token-based copy >> offloading. > > Maybe. But you'll have a hard time convincing me to add any kind of > state machine or bio matching magic to the SCSI stack when the simplest > solution is to treat copying like a read followed by a write. There is > no concurrency, no kernel state, no dependency between two commands, nor > two scsi_disk/scsi_device object lifetimes to manage. And that also would allow supporting a fake copy offload with regular read/write BIOs very easily, I think. So all block devices can be presented as supporting "copy offload". That is nice for FSes. > >> Additionally, I'm not convinced that implementing COPY_IN/COPY_OUT for >> ODX devices is that simple. The COPY_IN and COPY_OUT operations have >> to be translated into three SCSI commands, isn't it? I'm referring to >> the POPULATE TOKEN, RECEIVE ROD TOKEN INFORMATION and WRITE USING >> TOKEN commands. What is your opinion about how to translate the two >> block layer operations into these three SCSI commands? > > COPY_IN is translated to a NOP for devices implementing EXTENDED COPY > and a POPULATE TOKEN for devices using tokens. > > COPY_OUT is translated to an EXTENDED COPY (or NVMe Copy) for devices > using a single command approach and WRITE USING TOKEN for devices using > tokens. ATA WRITE GATHERED command is also a single copy command. That matches and while I have not checked SAT, translation would likely work. While I was initially worried that the 2 BIO based approach would be overly complicated, it seems that I was wrong :) > > There is no need for RECEIVE ROD TOKEN INFORMATION. > > I am not aware of UFS devices using the token-based approach. And for > EXTENDED COPY there is only a single command sent to the device. 
> If you want to do power management while that command is being processed, please deal with that in UFS. The block layer doesn't deal with the async variants of any of the other SCSI commands either...
On Wed, Nov 27, 2024 at 03:14:09PM -0500, Martin K. Petersen wrote: > > Bart, > > > Submitting a copy operation as two bios or two requests means that > > there is a risk that one of the two operations never reaches the block > > driver at the bottom of the storage stack and hence that a deadlock > > occurs. I prefer not to introduce any mechanisms that can cause a > > deadlock. > > How do you copy a block range without offload? You perform a READ to > read the data into memory. And once the READ completes, you do a WRITE > of the data to the new location. > > Token-based copy offload works exactly the same way. You do a POPULATE > TOKEN which is identical to a READ except you get a cookie instead of > the actual data. And then once you have the cookie, you perform a WRITE > USING TOKEN to perform the write operation. Semantically, it's exactly > the same as a normal copy except for the lack of data movement. That's > the whole point! I think of copy a little differently. When you do a normal write command, the host provides the controller a vector of sources and lengths. A copy command is like a write command, but the sources are just logical block addresses instead of memory addresses. Whatever solution happens, it would be a real shame if it doesn't allow vectored LBAs. The token based source bio doesn't seem to extend to that.
On Thu, Nov 28, 2024 at 08:21:16AM -0700, Keith Busch wrote: > I think of copy a little differently. When you do a normal write > command, the host provides the controller a vector of sources and > lengths. A copy command is like a write command, but the sources are > just logical block addresses instead of memory addresses. > > Whatever solution happens, it would be a real shame if it doesn't allow > vectored LBAs. The token based source bio doesn't seem to extend to > that. POPULATE TOKEN as defined by SCSI/SBC takes a list of LBA ranges as well.
On Thu, Nov 28, 2024 at 05:51:52PM +0900, Damien Le Moal wrote: > > Maybe. But you'll have a hard time convincing me to add any kind of > > state machine or bio matching magic to the SCSI stack when the simplest > > solution is to treat copying like a read followed by a write. There is > > no concurrency, no kernel state, no dependency between two commands, nor > > two scsi_disk/scsi_device object lifetimes to manage. > > And that also would allow supporting a fake copy offload with regular > read/write BIOs very easily, I think. So all block devices can be > presented as supporting "copy offload". That is nice for FSes. Just as when that showed up in one of the last copy offload series, I'm still very critical of a stateless copy offload emulation. The reason for that is that a host based copy helper needs scratch space to read into, and doing these large allocations on every copy puts a lot of pressure onto the allocator. Allocating the buffer once at mount time and then just cycling through it is generally a lot more efficient.
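For reference, the emulation helper Christoph describes might look roughly like the sketch below, with the caller (for example a filesystem, at mount time) owning a physically contiguous scratch buffer that is reused for every copy; the function name and signature are made up for illustration.

    /*
     * Illustrative only: copy by bouncing through a caller-provided scratch
     * buffer allocated once (e.g. at mount time) instead of allocating a
     * large buffer on every copy.  "scratch" is assumed to be a physically
     * contiguous higher-order allocation.
     */
    static int copy_emulate(struct block_device *bdev, sector_t src, sector_t dst,
                            unsigned int nr_sects, struct page *scratch,
                            unsigned int scratch_sects)
    {
            while (nr_sects) {
                    unsigned int chunk = min(nr_sects, scratch_sects);
                    struct bio *bio;
                    int ret;

                    /* Read a chunk of the source range into the scratch buffer. */
                    bio = bio_alloc(bdev, 1, REQ_OP_READ, GFP_NOIO);
                    bio->bi_iter.bi_sector = src;
                    __bio_add_page(bio, scratch, chunk << SECTOR_SHIFT, 0);
                    ret = submit_bio_wait(bio);
                    bio_put(bio);
                    if (ret)
                            return ret;

                    /* Write the same chunk out to the destination range. */
                    bio = bio_alloc(bdev, 1, REQ_OP_WRITE, GFP_NOIO);
                    bio->bi_iter.bi_sector = dst;
                    __bio_add_page(bio, scratch, chunk << SECTOR_SHIFT, 0);
                    ret = submit_bio_wait(bio);
                    bio_put(bio);
                    if (ret)
                            return ret;

                    src += chunk;
                    dst += chunk;
                    nr_sects -= chunk;
            }
            return 0;
    }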
Nitesh, > This approach looks simpler to me as well. > But where do we store the read sector info before sending write. > I see 2 approaches here, > 1. Should it be part of a payload along with write ? We did something > similar in previous series which was not liked by Christoph and Bart. > 2. Or driver should store it as part of an internal list inside > namespace/ctrl data structure ? As Bart pointed out, here we might > need to send one more fail request later if copy_write fails to land > in same driver. The problem with option 2 is that when you're doing copy between two different LUNs, then you suddenly have to maintain state in one kernel object about stuff relating to another kernel object. I think that is messy. Seems unnecessarily complex. With option 1, for single command offload, there is no payload to worry about. Only command completion status matters for the COPY_IN phase. And once you have completion, you can issue a COPY_OUT. Done. For token based offload, I really don't understand the objection to storing the cookie in the bio. I fail to see the benefit of storing the cookie in the driver and then have the bio refer to something else which maps to the actual cookie returned by the storage. Again that introduces object lifetime complexity. It's much simpler to just have the cookie be part of the very command that is being executed. Once the COPY_IN completes, you can either use the cookie or throw it away. Doesn't matter. The device will time it out if you sit on it too long. And there is zero state in the kernel outside of the memory for the cookie that you, as the submitter, are responsible for deallocating.
On 12/10/24 07:13, Bart Van Assche wrote: > On 12/5/24 12:03 AM, Nitesh Shetty wrote: >> But where do we store the read sector info before sending write. >> I see 2 approaches here, >> 1. Should it be part of a payload along with write ? >> We did something similar in previous series which was not liked >> by Christoph and Bart. >> 2. Or driver should store it as part of an internal list inside >> namespace/ctrl data structure ? >> As Bart pointed out, here we might need to send one more fail >> request later if copy_write fails to land in same driver. > > Hi Nitesh, > > Consider the following example: dm-linear is used to concatenate two > block devices. An NVMe device (LBA 0..999) and a SCSI device (LBA > 1000..1999). Suppose that a copy operation is submitted to the dm-linear > device to copy LBAs 1..998 to LBAs 2..1998. If the copy operation is > submitted as two separate operations (REQ_OP_COPY_SRC and > REQ_OP_COPY_DST) then the NVMe device will receive the REQ_OP_COPY_SRC > operation and the SCSI device will receive the REQ_OP_COPY_DST > operation. The NVMe and SCSI device drivers should fail the copy > operations after a timeout because they only received half of the copy > operation. After the timeout the block layer core can switch from > offloading to emulating a copy operation. Waiting for a timeout is > necessary because requests may be reordered. > > I think this is a strong argument in favor of representing copy > operations as a single operation. This will allow stacking drivers > as dm-linear to deal in an elegant way with copy offload requests > where source and destination LBA ranges map onto different block > devices and potentially different block drivers. Why? As long as REQ_OP_COPY_SRC carries both source and destination information, DM can trivially detect that the copy is not within a single device and either return ENOTSUPP or switch to using regular read+write operations using block layer helpers. Or the block layer can fall back to that emulation itself if it gets an ENOTSUPP from the device. I am not sure what a REQ_OP_COPY_SRC BIO definition would look like. Ideally, we want to be able to describe several source LBA ranges with it and, for the above issue, also have the destination LBA range as well. If we can do that in a nice way, I do not see the need for switching back to a single BIO, though we could too I guess. From what Martin said for scsi token-based copy, it seems that 2 operations are easier. Knowing how the scsi stack works, I can see that too.
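To make the question of what a REQ_OP_COPY_SRC BIO could carry a bit more concrete, one possible payload layout is sketched below. It is purely hypothetical and only illustrates the two properties discussed here: several source ranges (Keith's vectored-sources point earlier in the thread) plus the destination range, so that a stacking driver can tell from the SRC bio alone whether the copy stays within one device.

    /*
     * Hypothetical payload attached to a REQ_OP_COPY_SRC bio; not an
     * accepted interface, just an illustration of carrying vectored
     * sources plus the destination in one descriptor.
     */
    struct blk_copy_range {
            sector_t        sector;
            sector_t        nr_sects;
    };

    struct blk_copy_payload {
            struct blk_copy_range   dst;    /* single destination range */
            unsigned short          nr_src; /* number of source ranges that follow */
            struct blk_copy_range   src[];  /* vectored source ranges */
    };

A dm-linear-style target could then walk dst and src[] and return BLK_STS_NOTSUPP (letting the block layer fall back to read+write emulation) as soon as any range maps to a different underlying device.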
On 12/9/24 3:31 PM, Matthew Wilcox wrote: > On Mon, Dec 09, 2024 at 02:13:40PM -0800, Bart Van Assche wrote: >> Consider the following example: dm-linear is used to concatenate two >> block devices. An NVMe device (LBA 0..999) and a SCSI device (LBA >> 1000..1999). Suppose that a copy operation is submitted to the dm-linear >> device to copy LBAs 1..998 to LBAs 2..1998. If the copy operation is > > Sorry, I don't think that's a valid operation -- 1998 - 2 = 1996 and 998 > - 1 is 997, so these ranges are of different lengths. Agreed that the ranges should have the same length. I have been traveling and I'm under jet lag, hence the range length mismatch. I wanted to construct a copy operation from the first to the second block device: 1..998 to 1001..1998. >> submitted as two separate operations (REQ_OP_COPY_SRC and >> REQ_OP_COPY_DST) then the NVMe device will receive the REQ_OP_COPY_SRC >> operation and the SCSI device will receive the REQ_OP_COPY_DST >> operation. The NVMe and SCSI device drivers should fail the copy operations >> after a timeout because they only received half of the copy >> operation. > > ... no? The SRC operation succeeds, but then the DM driver gets the DST > operation and sees that it crosses the boundary and fails the DST op. > Then the pair of ops can be retried using an in-memory buffer. Since the second range can be mapped onto the second block device, the dm-linear driver can only fail the REQ_OP_COPY_DST operation if it keeps track of the source LBA regions of pending copy operations. Which would be an unnecessary complexity. A possible alternative is to specify the source and destination range information in every REQ_OP_COPY_SRC and in every REQ_OP_COPY_DST operation (see also Damien's email). Thanks, Bart.
On 12/5/24 12:37 PM, Martin K. Petersen wrote: > For token based offload, I really don't understand the objection to > storing the cookie in the bio. I fail to see the benefit of storing the > cookie in the driver and then have the bio refer to something else which > maps to the actual cookie returned by the storage. Does "cookie" refer to the SCSI ROD token? Storing the ROD token in the REQ_OP_COPY_DST bio implies that the REQ_OP_COPY_DST bio is only submitted after the REQ_OP_COPY_SRC bio has completed. NVMe users may prefer that REQ_OP_COPY_SRC and REQ_OP_COPY_DST bios are submitted simultaneously. Thanks, Bart.
Bart, > Does "cookie" refer to the SCSI ROD token? Storing the ROD token in > the REQ_OP_COPY_DST bio implies that the REQ_OP_COPY_DST bio is only > submitted after the REQ_OP_COPY_SRC bio has completed. Obviously. You can't issue a WRITE USING TOKEN until you have the token. > NVMe users may prefer that REQ_OP_COPY_SRC and REQ_OP_COPY_DST bios > are submitted simultaneously. What would be the benefit of submitting these operations concurrently? As I have explained, it adds substantial complexity and object lifetime issues throughout the stack. To what end?
On Thu, Dec 05, 2024 at 03:37:25PM -0500, Martin K. Petersen wrote: > The problem with option 2 is that when you're doing copy between two > different LUNs, then you suddenly have to maintain state in one kernel > object about stuff relating to another kernel object. I think that is > messy. Seems unnecessarily complex. Generally agreeing with all you said, but do we actually have any serious use case for cross-LU copies? They just seem incredibly complex and not all that useful.
On 10.12.24 08:13, Christoph Hellwig wrote: > On Thu, Dec 05, 2024 at 03:37:25PM -0500, Martin K. Petersen wrote: >> The problem with option 2 is that when you're doing copy between two >> different LUNs, then you suddenly have to maintain state in one kernel >> object about stuff relating to another kernel object. I think that is >> messy. Seems unnecessarily complex. > > Generally agreeing with all you said, but do we actually have any > serious use case for cross-LU copies? They just seem incredibly > complex and not all that useful. One use case I can think of is (again) btrfs balance (GC, convert, etc) on a multi-drive filesystem. BUT this use case is something that can just use the fallback read-write path as it is doing now.
On 09/12/24 09:20PM, Martin K. Petersen wrote: > >Bart, > >> Does "cookie" refer to the SCSI ROD token? Storing the ROD token in >> the REQ_OP_COPY_DST bio implies that the REQ_OP_COPY_DST bio is only >> submitted after the REQ_OP_COPY_SRC bio has completed. > >Obviously. You can't issue a WRITE USING TOKEN until you have the token. > >> NVMe users may prefer that REQ_OP_COPY_SRC and REQ_OP_COPY_DST bios >> are submitted simultaneously. > >What would be the benefit of submitting these operations concurrently? >As I have explained, it adds substantial complexity and object lifetime >issues throughout the stack. To what end? > >-- Bart, We did implement a payload based approach in the past[1] which aligns with this. Since we wait till the REQ_OP_COPY_SRC completes, there won't be an issue with async types of dm IOs. Since this would be internal kernel plumbing, we can optimize/change the approach moving forward. If you are okay with the approach, I can give a respin to that version. Thanks, Nitesh Shetty [1] https://lore.kernel.org/linux-block/20230605121732.28468-1-nj.shetty@samsung.com/T/#mecd04c060cd4285a4b036ca79cc58713308771fe
On Tue, Dec 10, 2024 at 08:05:31AM +0000, Johannes Thumshirn wrote: > > Generally agreeing with all you said, but do we actually have any > > serious use case for cross-LU copies? They just seem incredibly > > complex and not all that useful. > > One use case I can think of is (again) btrfs balance (GC, convert, etc) > on a multi-drive filesystem. BUT this use case is something that can > just use the fallback read-write path as it is doing now. Who uses multi-device file systems on multiple LUs of the same SCSI target or multiple namespaces on the same nvme subsystem?
On 12/10/24 2:58 AM, hch wrote: > On Tue, Dec 10, 2024 at 08:05:31AM +0000, Johannes Thumshirn wrote: >>> Generally agreeing with all you said, but do we actually have any >>> serious use case for cross-LU copies? They just seem incredibly >>> complex and not all that useful. >> >> One use case I can think of is (again) btrfs balance (GC, convert, etc) >> on a multi-drive filesystem. BUT this use case is something that can >> just use the fallback read-write path as it is doing now. > > Who uses multi-device file systems on multiple LUs of the same SCSI > target or multiple namespaces on the same nvme subsystem? On Android systems F2FS combines a small conventional logical unit and a large zoned logical unit into a single filesystem. This use case will benefit from copy offloading between different logical units on the same SCSI device. While there may be disagreement about how desirable this setup is from a technical point of view, there is a real use case today for offloading data copying between different logical units. Bart.
On 12/9/24 6:20 PM, Martin K. Petersen wrote: > What would be the benefit of submitting these operations concurrently? I expect that submitting the two copy operations concurrently would result in lower latency for NVMe devices because the REQ_OP_COPY_DST operation can be submitted without waiting for the REQ_OP_COPY_SRC result. > As I have explained, it adds substantial complexity and object lifetime > issues throughout the stack. To what end? I think the approach of embedding the ROD token in the bio payload would add complexity in the block layer. The token-based copy offload approach involves submitting at least the following commands to the SCSI device: * POPULATE TOKEN with a list identifier and source data ranges as parameters to send the source data ranges to the device. * RECEIVE ROD TOKEN INFORMATION with a list identifier as parameter to receive the ROD token. * WRITE USING TOKEN with the ROD token and the destination ranges as parameters to tell the device to start the copy operation. If the block layer would have to manage the ROD token, how would the ROD token be provided to the block layer? Bidirectional commands have been removed from the Linux kernel a while ago so the REQ_OP_COPY_IN parameter data would have to be used to pass parameters to the SCSI driver and also to pass the ROD token back to the block layer. A possible approach is to let the SCSI core allocate memory for the ROD token with kmalloc and to pass that pointer back to the block layer by writing that pointer into the REQ_OP_COPY_IN parameter data. While this can be implemented, I'm not sure that we should integrate support in the block layer for managing ROD tokens since ROD tokens are a concept that is specific to the SCSI protocol. Thanks, Bart.
On 12/10/24 1:53 AM, Nitesh Shetty wrote: > We did implement payload based approach in the past[1] which aligns > with this. Since we wait till the REQ_OP_COPY_SRC completes, there won't > be issue with async type of dm IOs. > Since this would be an internal kernel plumbing, we can optimize/change > the approach moving forward. > If you are okay with the approach, I can give a respin to that version. Yes, I remember this. Let's wait with respinning/reposting until there is agreement about the approach for copy offloading. Thanks, Bart.
On 12/11/24 4:21 AM, Bart Van Assche wrote: > On 12/10/24 2:58 AM, hch wrote: >> On Tue, Dec 10, 2024 at 08:05:31AM +0000, Johannes Thumshirn wrote: >>>> Generally agreeing with all you said, but do we actually have any >>>> serious use case for cross-LU copies? They just seem incredibly >>>> complex any not all that useful. >>> >>> One use case I can think of is (again) btrfs balance (GC, convert, etc) >>> on a multi drive filesystem. BUT this use case is something that can >>> just use the fallback read-write path as it is doing now. >> >> Who uses multi-device file systems on multiple LUs of the same SCSI >> target ơr multiple namespaces on the same nvme subsystem? > > On Android systems F2FS combines a small conventional logical unit and a > large zoned logical unit into a single filesystem. This use case will > benefit from copy offloading between different logical units on the same > SCSI device. While there may be disagreement about how desirable this > setup is from a technical point of view, there is a real use case today > for offloading data copying between different logical units. But for F2FS, the conventional unit is used for metadata and the other zoned LU for data. How come copying from one to the other can be useful ? > > Bart. >
On 10/12/24 11:41AM, Bart Van Assche wrote: >On 12/9/24 6:20 PM, Martin K. Petersen wrote: >>What would be the benefit of submitting these operations concurrently? > >I expect that submitting the two copy operations concurrently would >result in lower latency for NVMe devices because the REQ_OP_COPY_DST >operation can be submitted without waiting for the REQ_OP_COPY_SRC >result. > >>As I have explained, it adds substantial complexity and object lifetime >>issues throughout the stack. To what end? > >I think the approach of embedding the ROD token in the bio payload would >add complexity in the block layer. The token-based copy offload approach >involves submitting at least the following commands to the SCSI device: >* POPULATE TOKEN with a list identifier and source data ranges as > parameters to send the source data ranges to the device. >* RECEIVE ROD TOKEN INFORMATION with a list identifier as parameter to > receive the ROD token. >* WRITE USING TOKEN with the ROD token and the destination ranges as > parameters to tell the device to start the copy operation. > >If the block layer would have to manage the ROD token, how would the ROD >token be provided to the block layer? Bidirectional commands have been >removed from the Linux kernel a while ago so the REQ_OP_COPY_IN >parameter data would have to be used to pass parameters to the SCSI >driver and also to pass the ROD token back to the block layer. A >possible approach is to let the SCSI core allocate memory for the ROD >token with kmalloc and to pass that pointer back to the block layer >by writing that pointer into the REQ_OP_COPY_IN parameter data. While >this can be implemented, I'm not sure that we should integrate support >in the block layer for managing ROD tokens since ROD tokens are a >concept that is specific to the SCSI protocol. > The block layer can allocate a buffer and send this as part of the copy operation. The driver can store the token/custom info inside the buffer sent along with REQ_OP_COPY_SRC and expect that the block layer sends back this info/buffer again with REQ_OP_COPY_DST? This will reduce the effort for the block layer to manage the lifetime issues. Is there any reason why we can't store the info inside this buffer in the driver? This scheme will require sequential submission of SRC and DST bios. This might increase latency, but allows for a simpler design. The main use case for copy is GC, which is mostly a background operation. -- Nitesh Shetty
On 12/11/24 1:36 AM, Nitesh Shetty wrote: > Block layer can allocate a buffer and send this as part of copy > operation. The block layer can only do that if it knows how large the buffer should be. Or in other words, if knowledge of a SCSI buffer size is embedded in the block layer. That doesn't sound ideal to me. > This scheme will require sequential submission of SRC and DST > bio's. This might increase in latency, but allows to have simpler design. > Main use case for copy is GC, which is mostly a background operation. I still prefer a single REQ_OP_COPY operation instead of separate REQ_OP_COPY_SRC and REQ_OP_COPY_DST operations. While this will require additional work in the SCSI disk (sd) driver (implementation of a state machine), it prevents that any details about the SCSI copy offloading approach have to be known by the block layer. Even if copy offloading would be implemented as two operations (REQ_OP_COPY_SRC and REQ_OP_COPY_DST), a state machine is required anyway in the SCSI disk driver because REQ_OP_COPY_SRC would have to be translated into two SCSI commands (POPULATE TOKEN + RECEIVE ROD TOKEN INFORMATION). Thanks, Bart.
Bart, >> What would be the benefit of submitting these operations concurrently? > > I expect that submitting the two copy operations concurrently would > result in lower latency for NVMe devices because the REQ_OP_COPY_DST > operation can be submitted without waiting for the REQ_OP_COPY_SRC > result. Perhaps you are engaging in premature optimization? > If the block layer would have to manage the ROD token, how would the > ROD token be provided to the block layer? In the data buffer described by the bio, of course. Just like the data buffer when we do a READ. Only difference here is that the data is compressed to a fixed size and thus only 512 bytes long regardless of the amount of logical blocks described by the operation. > Bidirectional commands have been removed from the Linux kernel a while > ago so the REQ_OP_COPY_IN parameter data would have to be used to pass > parameters to the SCSI driver and also to pass the ROD token back to > the block layer. A normal READ operation also passes parameters to the SCSI driver. These are the start LBA and the transfer length. That does not make it a bidirectional command. > While this can be implemented, I'm not sure that we should integrate > support in the block layer for managing ROD tokens since ROD tokens > are a concept that is specific to the SCSI protocol. A well-known commercial operating system supports copy offload via the token-based approach. I don't see any reason why our implementation should exclude a wide variety of devices in the industry supported by that platform. And obviously, given that this other operating system uses a token-based implementation in their stack, one could perhaps envision this capability appearing in other protocols in the future? In any case. I only have two horses in this race: 1. Make sure that our user API and block layer implementation are flexible enough to accommodate current and future offload specifications. 2. Make sure our implementation is as simple as possible. Splitting the block layer implementation into a semantic read followed by a semantic write permits token-based offload to be supported. It also makes the implementation simple because there is no concurrency element. The only state is owned by the entity which issues the bio. No lookups, no timeouts, no allocating things in sd.c and hoping that somebody remembers to free them later despite the disk suddenly going away. Even if we were to not support the token-based approach and only do single-command offload, I still think the two-phase operation makes things simpler and more elegant.
Christoph, > Generally agreeing with all you said, but do we actually have any > serious use case for cross-LU copies? They just seem incredibly > complex any not all that useful. It's still widely used to populate a new LUN from a golden image.
On 12/10/24 8:07 PM, Damien Le Moal wrote: > But for F2FS, the conventional unit is used for metadata and the other zoned LU > for data. How come copying from one to the other can be useful ? Hi Damien, What you wrote is correct in general. If a conventional and zoned LU are combined, data is only written to the conventional LU once the zoned LU is full. The data on the conventional LU may be migrated to the zoned LU during garbage collection. This is why copying from the conventional LU to the zoned LU is useful. Jaegeuk, please correct me if I got this wrong. Bart.
On 12/11, Bart Van Assche wrote: > On 12/10/24 8:07 PM, Damien Le Moal wrote: > > But for F2FS, the conventional unit is used for metadata and the other zoned LU > > for data. How come copying from one to the other can be useful? > > Hi Damien, > > What you wrote is correct in general. If a conventional and zoned LU are > combined, data is only written to the conventional LU once the zoned LU > is full. The data on the conventional LU may be migrated to the zoned LU > during garbage collection. This is why copying from the conventional LU > to the zoned LU is useful. > > Jaegeuk, please correct me if I got this wrong. Bart is correct. It doesn't make sense to force the conventional LU to be used for metadata only, so F2FS shows the remaining conventional LU space to the user as well. In order to do that, we exploited the multi-partition support in F2FS by simply aligning the section to the zoned partition offset. > > Bart.
From: Keith Busch <kbusch@kernel.org>

Changes from v9:
  Document the partition hint mask
  Use bitmap_alloc API
  Fixup bitmap memory leak
  Return invalid value if user requests an invalid write hint
  Added and exported a block device feature flag for indicating generic placement hint support
  Added statx write hint max field
  Added BUILD_BUG_ON check for new io_uring SQE fields
  Added reviews

Kanchan Joshi (2):
  io_uring: enable per-io hinting capability
  nvme: enable FDP support

Keith Busch (7):
  block: use generic u16 for write hints
  block: introduce max_write_hints queue limit
  statx: add write hint information
  block: allow ability to limit partition write hints
  block, fs: add write hint to kiocb
  block: export placement hint feature
  scsi: set permanent stream count in block limits

 Documentation/ABI/stable/sysfs-block | 13 +++++
 block/bdev.c                         | 18 ++++++
 block/blk-settings.c                 |  5 ++
 block/blk-sysfs.c                    |  6 ++
 block/fops.c                         | 31 +++++++++-
 block/partitions/core.c              | 44 ++++++++++++++-
 drivers/nvme/host/core.c             | 84 ++++++++++++++++++++++++++++
 drivers/nvme/host/nvme.h             |  5 ++
 drivers/scsi/sd.c                    |  2 +
 fs/stat.c                            |  1 +
 include/linux/blk-mq.h               |  3 +-
 include/linux/blk_types.h            |  4 +-
 include/linux/blkdev.h               | 15 +++++
 include/linux/fs.h                   |  1 +
 include/linux/nvme.h                 | 19 +++++++
 include/linux/stat.h                 |  1 +
 include/uapi/linux/io_uring.h        |  4 ++
 include/uapi/linux/stat.h            |  3 +-
 io_uring/io_uring.c                  |  2 +
 io_uring/rw.c                        |  3 +-
 20 files changed, 253 insertions(+), 11 deletions(-)