
[RFC,v2] statx, inode: document the new STATX_INO_VERSION field

Message ID 20220901121714.20051-1-jlayton@kernel.org
State New

Commit Message

Jeff Layton Sept. 1, 2022, 12:17 p.m. UTC
I'm proposing to expose the inode change attribute via statx [1]. Document
what this value means and what an observer can infer from it changing.

Signed-off-by: Jeff Layton <jlayton@kernel.org>

[1]: https://lore.kernel.org/linux-nfs/20220826214703.134870-1-jlayton@kernel.org/T/#t
---
 man2/statx.2 | 17 +++++++++++++++++
 man7/inode.7 | 12 ++++++++++++
 2 files changed, 29 insertions(+)

v2: revised the definition to be more strict, since that seemed to be
    the consensus on the desired behavior. Spurious i_version bumps
    would now be considered bugs by this definition.

Comments

Florian Weimer Sept. 6, 2022, 12:17 p.m. UTC | #1
* Jeff Layton:

> All of the existing implementations use all 64 bits. If you were to
> increment a 64-bit value every nanosecond, it would take >500 years for
> it to wrap. I'm hoping that's good enough. ;)
>
> The implementation that all of the local Linux filesystems use tracks
> whether the value has been queried using one bit, so there you only get
> 63 bits of counter.
>
> My original thinking here was that we should leave the spec "loose" to
> allow for implementations that may not be based on a counter. E.g. could
> some filesystem do this instead by hashing certain metadata?

Hashing might have collisions that could be triggered deliberately, so
probably not a good idea.  It's also hard to argue that random
collisions are unlikely.

> It's arguable though that the NFSv4 spec requires that this be based on
> a counter, as the client is required to increment it in the case of
> write delegations.

Yeah, I think it has to be monotonic.

>> If the system crashes without flushing disks, is it possible to observe
>> new file contents without a change of i_version?
>
> Yes, I think that's possible given the current implementations.
>
> We don't have a great scheme to combat that at the moment, other than
> looking at this in conjunction with the ctime. As long as the clock
> doesn't jump backward after the crash and it takes more than one jiffy
> to get the host back up, then you can be reasonably sure that
> i_version+ctime should never repeat.
>
> Maybe that's worth adding to the NOTES section of the manpage?

I'd appreciate that.

Thanks,
Florian
Jeff Layton Sept. 6, 2022, 4:41 p.m. UTC | #2
On Tue, 2022-09-06 at 14:17 +0200, Florian Weimer wrote:
> * Jeff Layton:
> 
> > All of the existing implementations use all 64 bits. If you were to
> > increment a 64-bit value every nanosecond, it would take >500 years for
> > it to wrap. I'm hoping that's good enough. ;)
> > 
> > The implementation that all of the local Linux filesystems use tracks
> > whether the value has been queried using one bit, so there you only get
> > 63 bits of counter.
> > 
> > My original thinking here was that we should leave the spec "loose" to
> > allow for implementations that may not be based on a counter. E.g. could
> > some filesystem do this instead by hashing certain metadata?
> 
> Hashing might have collisions that could be triggered deliberately, so
> probably not a good idea.  It's also hard to argue that random
> collisions are unlikely.
> 

In principle, if a filesystem could guarantee enough timestamp
resolution, it's possible collisions could be hard to achieve. It's also
possible you could factor in other metadata that wasn't necessarily
visible to userland to try and ensure uniqueness in the counter.

Still...

> > It's arguable though that the NFSv4 spec requires that this be based on
> > a counter, as the client is required to increment it in the case of
> > write delegations.
> 
> Yeah, I think it has to be monotonic.
> 

I think so too. NFSv4 sort of needs that anyway.

> > > If the system crashes without flushing disks, is it possible to observe
> > > new file contents without a change of i_version?
> > 
> > Yes, I think that's possible given the current implementations.
> > 
> > We don't have a great scheme to combat that at the moment, other than
> > looking at this in conjunction with the ctime. As long as the clock
> > doesn't jump backward after the crash and it takes more than one jiffy
> > to get the host back up, then you can be reasonably sure that
> > i_version+ctime should never repeat.
> > 
> > Maybe that's worth adding to the NOTES section of the manpage?
> 
> I'd appreciate that.

Ok! New version of the manpage patch sent. If no one has strong
objections to the proposed docs, I'll send out new kernel patches in the
next day or two.

Thanks!
Jeff Layton Sept. 6, 2022, 5:04 p.m. UTC | #3
On Tue, 2022-09-06 at 12:41 -0400, Jeff Layton wrote:
> On Tue, 2022-09-06 at 14:17 +0200, Florian Weimer wrote:
> > * Jeff Layton:
> > 
> > > All of the existing implementations use all 64 bits. If you were to
> > > increment a 64-bit value every nanosecond, it would take >500 years for
> > > it to wrap. I'm hoping that's good enough. ;)
> > > 
> > > The implementation that all of the local Linux filesystems use tracks
> > > whether the value has been queried using one bit, so there you only get
> > > 63 bits of counter.
> > > 
> > > My original thinking here was that we should leave the spec "loose" to
> > > allow for implementations that may not be based on a counter. E.g. could
> > > some filesystem do this instead by hashing certain metadata?
> > 
> > Hashing might have collisions that could be triggered deliberately, so
> > probably not a good idea.  It's also hard to argue that random
> > collisions are unlikely.
> > 
> 
> In principle, if a filesystem could guarantee enough timestamp
> resolution, it's possible collisions could be hard to achieve. It's also
> possible you could factor in other metadata that wasn't necessarily
> visible to userland to try and ensure uniqueness in the counter.
> 
> Still...
> 

Actually, Bruce brought up a good point on IRC. The main danger here is
that we might do this:

Start (i_version is at 1)
Write data (i_version goes to 2)
statx + read data (observer associates the data with i_version 2)
Crash before the new i_version makes it to disk
Machine comes back up (i_version is back at 1)
Write data (i_version goes to 2)
statx (observer assumes its cache is still valid)

We can mitigate this by factoring in the ctime when we do the statx.
Another option though would be to factor in the ctime when we generate
the new value and store it.
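
On the observer side, that amounts to treating the ctime and the change
counter as a single compound token. A minimal user-space sketch, assuming
the two values have already been fetched (the counter stands in for the
proposed stx_ino_version, which no released kernel exposes yet):

#include <stdbool.h>
#include <stdint.h>
#include <time.h>

/* Cache token: the ctime and the change counter, checked together. */
struct change_cookie {
        struct timespec ctime;
        uint64_t version;
};

/* Cached data is trusted only if neither half of the token has moved. */
static bool cookie_unchanged(const struct change_cookie *a,
                             const struct change_cookie *b)
{
        return a->version == b->version &&
               a->ctime.tv_sec == b->ctime.tv_sec &&
               a->ctime.tv_nsec == b->ctime.tv_nsec;
}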

Here's what nfsd does today:

      chattr =  stat->ctime.tv_sec;
      chattr <<= 30;
      chattr += stat->ctime.tv_nsec;
      chattr += inode_query_iversion(inode);

Instead of doing this after we query it, we could do that before storing
it. After a crash, we might see the value go backward, but if a new
write later happens, the new value would be very unlikely to match the
one that got lost.

That seems quite doable, and might be better for userland consumers
overall.
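
A rough user-space sketch of that store-time folding, reusing the same
shift as the nfsd snippet above (illustrative only, not the actual
kernel change):

#include <stdint.h>
#include <time.h>

/*
 * Fold the ctime in before the value is stored: seconds in the high
 * bits, nanoseconds (always < 2^30) and the raw counter added on top.
 * After an unflushed crash the raw counter may go backward, but a later
 * write is then very unlikely to reproduce a value handed out earlier.
 */
uint64_t fold_change_attr(struct timespec ctime, uint64_t counter)
{
        uint64_t chattr = (uint64_t)ctime.tv_sec << 30;

        chattr += (uint64_t)ctime.tv_nsec;
        chattr += counter;
        return chattr;
}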

> > > It's arguable though that the NFSv4 spec requires that this be based on
> > > a counter, as the client is required to increment it in the case of
> > > write delegations.
> > 
> > Yeah, I think it has to be monotonic.
> > 
> 
> I think so too. NFSv4 sort of needs that anyway.
> 
> > > > If the system crashes without flushing disks, is it possible to observe
> > > > new file contents without a change of i_version?
> > > 
> > > Yes, I think that's possible given the current implementations.
> > > 
> > > We don't have a great scheme to combat that at the moment, other than
> > > looking at this in conjunction with the ctime. As long as the clock
> > > doesn't jump backward after the crash and it takes more than one jiffy
> > > to get the host back up, then you can be reasonably sure that
> > > i_version+ctime should never repeat.
> > > 
> > > Maybe that's worth adding to the NOTES section of the manpage?
> > 
> > I'd appreciate that.
> 
> Ok! New version of the manpage patch sent. If no one has strong
> objections to the proposed docs, I'll send out new kernel patches in the
> next day or two.
> 
> Thanks!
J. Bruce Fields Sept. 6, 2022, 7:29 p.m. UTC | #4
On Tue, Sep 06, 2022 at 01:04:05PM -0400, Jeff Layton wrote:
> On Tue, 2022-09-06 at 12:41 -0400, Jeff Layton wrote:
> > On Tue, 2022-09-06 at 14:17 +0200, Florian Weimer wrote:
> > > * Jeff Layton:
> > > 
> > > > All of the existing implementations use all 64 bits. If you were to
> > > > increment a 64-bit value every nanosecond, it would take >500 years for
> > > > it to wrap. I'm hoping that's good enough. ;)
> > > > 
> > > > The implementation that all of the local Linux filesystems use tracks
> > > > whether the value has been queried using one bit, so there you only get
> > > > 63 bits of counter.
> > > > 
> > > > My original thinking here was that we should leave the spec "loose" to
> > > > allow for implementations that may not be based on a counter. E.g. could
> > > > some filesystem do this instead by hashing certain metadata?
> > > 
> > > Hashing might have collisions that could be triggered deliberately, so
> > > probably not a good idea.  It's also hard to argue that random
> > > collisions are unlikely.
> > > 
> > 
> > In principle, if a filesystem could guarantee enough timestamp
> > resolution, it's possible collisions could be hard to achieve. It's also
> > possible you could factor in other metadata that wasn't necessarily
> > visible to userland to try and ensure uniqueness in the counter.
> > 
> > Still...

I've got one other nagging worry, about the ordering of change attribute
updates with respect to their corresponding changes.  I think with
current implementations it's possible that the only change attribute
update(s) for a write happen while the old file data is still visible,
which means a concurrent reader could cache the old data with the new
change attribute and be left with a stale cache indefinitely.

For the purposes of close-to-open semantics I think that's not a
problem, though.

There may be some previous discussion of this in mailing list archives.

--b.
Jeff Layton Sept. 6, 2022, 7:55 p.m. UTC | #5
On Tue, 2022-09-06 at 15:29 -0400, J. Bruce Fields wrote:
> On Tue, Sep 06, 2022 at 01:04:05PM -0400, Jeff Layton wrote:
> > On Tue, 2022-09-06 at 12:41 -0400, Jeff Layton wrote:
> > > On Tue, 2022-09-06 at 14:17 +0200, Florian Weimer wrote:
> > > > * Jeff Layton:
> > > > 
> > > > > All of the existing implementations use all 64 bits. If you were to
> > > > > increment a 64-bit value every nanosecond, it would take >500 years for
> > > > > it to wrap. I'm hoping that's good enough. ;)
> > > > > 
> > > > > The implementation that all of the local Linux filesystems use tracks
> > > > > whether the value has been queried using one bit, so there you only get
> > > > > 63 bits of counter.
> > > > > 
> > > > > My original thinking here was that we should leave the spec "loose" to
> > > > > allow for implementations that may not be based on a counter. E.g. could
> > > > > some filesystem do this instead by hashing certain metadata?
> > > > 
> > > > Hashing might have collisions that could be triggered deliberately, so
> > > > probably not a good idea.  It's also hard to argue that random
> > > > collisions are unlikely.
> > > > 
> > > 
> > > In principle, if a filesystem could guarantee enough timestamp
> > > resolution, it's possible collisions could be hard to achieve. It's also
> > > possible you could factor in other metadata that wasn't necessarily
> > > visible to userland to try and ensure uniqueness in the counter.
> > > 
> > > Still...
> 
> I've got one other nagging worry, about the ordering of change attribute
> updates with respect to their corresponding changes.  I think with
> current implementations it's possible that the only change attribute
> update(s) for a write happen while the old file data is still visible,
> which means a concurrent reader could cache the old data with the new
> change attribute and be left with a stale cache indefinitely.
> 

Yeah, that's a potential issue. The i_version is updated in
inode_update_time, which does happen before the write to the pagecache.

We should probably add a note to the manpage that one should not expect
any sort of atomicity between the change to the inode and the change in
the value. I'm not sure we can offer much in the way of mitigation for
that problem, otherwise.

> For the purposes of close-to-open semantics I think that's not a
> problem, though.
> 
> There may be some previous discussion of this in mailing list archives.
>

Patch

diff --git a/man2/statx.2 b/man2/statx.2
index 0d1b4591f74c..493e4e234809 100644
--- a/man2/statx.2
+++ b/man2/statx.2
@@ -62,6 +62,7 @@  struct statx {
     __u32 stx_dev_major;   /* Major ID */
     __u32 stx_dev_minor;   /* Minor ID */
     __u64 stx_mnt_id;      /* Mount ID */
+    __u64 stx_ino_version; /* Inode change attribute */
 };
 .EE
 .in
@@ -247,6 +248,7 @@  STATX_BTIME	Want stx_btime
 STATX_ALL	The same as STATX_BASIC_STATS | STATX_BTIME.
 	It is deprecated and should not be used.
 STATX_MNT_ID	Want stx_mnt_id (since Linux 5.8)
+STATX_INO_VERSION	Want stx_ino_version (DRAFT)
 .TE
 .in
 .PP
@@ -411,6 +413,21 @@  and corresponds to the number in the first field in one of the records in
 For further information on the above fields, see
 .BR inode (7).
 .\"
+.TP
+.I stx_ino_version
+The inode version, also known as the inode change attribute. This
+value must change any time there is an inode status change. Any
+operation that would cause the
+.I stx_ctime
+to change must also cause
+.I stx_ino_version
+to change, even when there is no apparent change to the
+.I stx_ctime
+due to coarse timestamp granularity.
+.IP
+An observer cannot infer anything about the nature or magnitude of the change
+from the value of this field. A change in this value only indicates that
+there has been an explicit change in the inode.
 .SS File attributes
 The
 .I stx_attributes
diff --git a/man7/inode.7 b/man7/inode.7
index 9b255a890720..d5e0890a52c0 100644
--- a/man7/inode.7
+++ b/man7/inode.7
@@ -184,6 +184,18 @@  Last status change timestamp (ctime)
 This is the file's last status change timestamp.
 It is changed by writing or by setting inode information
 (i.e., owner, group, link count, mode, etc.).
+.TP
+Inode version (i_version)
+(not returned in the \fIstat\fP structure); \fIstatx.stx_ino_version\fP
+.IP
+This is the inode change attribute. Any operation that would result in a change
+to \fIstatx.stx_ctime\fP must result in a change to this value. The value must
+change even in the case where the ctime change is not evident due to coarse
+timestamp granularity.
+.IP
+An observer cannot infer anything from the returned value about the nature or
+magnitude of the change. If the returned value is different from the last time
+it was checked, then something has made an explicit change to the inode.
 .PP
 The timestamp fields report time measured with a zero point at the
 .IR Epoch ,