[v2,00/20] backup performance: block_status + async

Message ID 20200601181118.579-1-vsementsov@virtuozzo.com

Message

Vladimir Sementsov-Ogievskiy June 1, 2020, 6:10 p.m. UTC
Hi all!

This is the last part of the original
"[RFC 00/24] backup performance: block_status + async"; the preparations
are already merged.

The series turns backup into a series of block_copy_async calls covering
the whole disk, so we get block-status-based parallel async requests
out of the box, which gives a performance gain:

-----------------  ----------------  -------------  --------------------------  --------------------------  ----------------  -------------------------------
                   mirror(upstream)  backup(new)    backup(new, no-copy-range)  backup(new, copy-range-1w)  backup(upstream)  backup(upstream, no-copy-range)
hdd-ext4:hdd-ext4  18.86 +- 0.11     45.50 +- 2.35  19.22 +- 0.09               19.51 +- 0.09               22.85 +- 5.98     19.72 +- 0.35
hdd-ext4:ssd-ext4  8.99 +- 0.02      9.30 +- 0.01   8.97 +- 0.02                9.02 +- 0.02                9.68 +- 0.26      9.84 +- 0.12
ssd-ext4:hdd-ext4  9.09 +- 0.11      9.34 +- 0.10   9.34 +- 0.10                8.99 +- 0.01                11.37 +- 0.37     11.47 +- 0.30
ssd-ext4:ssd-ext4  4.07 +- 0.02      5.41 +- 0.05   4.05 +- 0.01                8.35 +- 0.58                9.83 +- 0.64      8.62 +- 0.35
hdd-xfs:hdd-xfs    18.90 +- 0.19     43.26 +- 2.47  19.62 +- 0.14               19.38 +- 0.16               19.55 +- 0.26     19.62 +- 0.12
hdd-xfs:ssd-xfs    8.93 +- 0.12      9.35 +- 0.03   8.93 +- 0.08                8.93 +- 0.05                9.79 +- 0.30      9.55 +- 0.15
ssd-xfs:hdd-xfs    9.15 +- 0.07      9.74 +- 0.28   9.29 +- 0.03                9.08 +- 0.05                10.85 +- 0.31     10.91 +- 0.30
ssd-xfs:ssd-xfs    4.06 +- 0.01      4.93 +- 0.02   4.04 +- 0.01                8.17 +- 0.42                9.52 +- 0.49      8.85 +- 0.46
ssd-ext4:nbd       9.96 +- 0.11      11.45 +- 0.15  11.45 +- 0.02               17.22 +- 0.06               34.45 +- 1.35     35.16 +- 0.37
nbd:ssd-ext4       9.84 +- 0.02      9.84 +- 0.04   9.80 +- 0.06                18.96 +- 0.06               30.89 +- 0.73     31.46 +- 0.21
-----------------  ----------------  -------------  --------------------------  --------------------------  ----------------  -------------------------------


The table shows that copy_range interacts badly with parallel async
requests. copy_range brings a real performance gain only on filesystems
that support it, like btrfs. But even on such filesystems, I'm not sure
it is a good default behavior: if we offload the copy, so that no real
copy is done but the backup just links the same blocks as the original,
then further writes from the guest will lead to fragmentation of the
guest disk, while the aim of backup is to operate transparently for the
guest.

So, in addition to this series, I also suggest disabling copy_range by
default.
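
For a concrete picture, here is a condensed sketch of what the backup
main loop becomes (simplified from the "backup: move to block-copy"
patch quoted later in this thread; error handling and the retry/cancel
logic are elided):

static int coroutine_fn backup_loop(BackupBlockJob *job)
{
    while (true) { /* retry loop */
        /* One async block-copy call covers the whole disk; parallel
         * workers and block-status-driven chunk sizes come for free. */
        job->bcs_call = block_copy_async(job->bcs, 0,
                                         QEMU_ALIGN_UP(job->len,
                                                       job->cluster_size),
                                         true, job->max_workers,
                                         job->max_chunk,
                                         backup_block_copy_callback, job);

        while (job->bcs_call && !job->common.job.cancelled) {
            /* wait and handle pauses */
            job_pause_point(&job->common.job);
            if (job->bcs_call && !job->common.job.cancelled) {
                job_yield(&job->common.job);
            }
        }
        ...
    }
}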

===

How to test:

Prepare images:
In the directories where you want to place the source and target
images, create them with:

for img in test-source test-target; do
 ./qemu-img create -f raw $img 1000M;
 ./qemu-img bench -c 1000 -d 1 -f raw -s 1M -w --pattern=0xff $img
done

Prepare a similar image for the NBD server, and start the server
somewhere with:

 qemu-nbd --persistent --nocache -f raw IMAGE

Then run the benchmark, for example:
./bench-backup.py --qemu new:../../x86_64-softmmu/qemu-system-x86_64 upstream:/work/src/qemu/up-backup-block-copy-master/x86_64-softmmu/qemu-system-x86_64 --dir hdd-ext4:/test-a hdd-xfs:/test-b ssd-ext4:/ssd ssd-xfs:/ssd-b --test $(for fs in ext4 xfs; do echo hdd-$fs:hdd-$fs hdd-$fs:ssd-$fs ssd-$fs:hdd-$fs ssd-$fs:ssd-$fs; done) --nbd 192.168.100.2 --test ssd-ext4:nbd nbd:ssd-ext4

(you may simply reduce the number of directories/test cases; use --help
 for help)

===

Note that I have included here
"[PATCH] block/block-copy: block_copy_dirty_clusters: fix failure check",
which was previously sent separately but is still untouched on the
mailing list. It may still be applied separately.

Vladimir Sementsov-Ogievskiy (20):
  block/block-copy: block_copy_dirty_clusters: fix failure check
  iotests: 129 don't check backup "busy"
  qapi: backup: add x-use-copy-range parameter
  block/block-copy: More explicit call_state
  block/block-copy: implement block_copy_async
  block/block-copy: add max_chunk and max_workers parameters
  block/block-copy: add ratelimit to block-copy
  block/block-copy: add block_copy_cancel
  blockjob: add set_speed to BlockJobDriver
  job: call job_enter from job_user_pause
  qapi: backup: add x-max-chunk and x-max-workers parameters
  iotests: 56: prepare for backup over block-copy
  iotests: 129: prepare for backup over block-copy
  iotests: 185: prepare for backup over block-copy
  iotests: 219: prepare for backup over block-copy
  iotests: 257: prepare for backup over block-copy
  backup: move to block-copy
  block/block-copy: drop unused argument of block_copy()
  simplebench: bench_block_job: add cmd_options argument
  simplebench: add bench-backup.py

 qapi/block-core.json                   |  11 +-
 block/backup-top.h                     |   1 +
 include/block/block-copy.h             |  45 +++-
 include/block/block_int.h              |   8 +
 include/block/blockjob_int.h           |   2 +
 block/backup-top.c                     |   6 +-
 block/backup.c                         | 170 ++++++++------
 block/block-copy.c                     | 183 ++++++++++++---
 block/replication.c                    |   1 +
 blockdev.c                             |  10 +
 blockjob.c                             |   6 +
 job.c                                  |   1 +
 scripts/simplebench/bench-backup.py    | 132 +++++++++++
 scripts/simplebench/bench-example.py   |   2 +-
 scripts/simplebench/bench_block_job.py |  13 +-
 tests/qemu-iotests/056                 |   8 +-
 tests/qemu-iotests/129                 |   3 +-
 tests/qemu-iotests/185                 |   3 +-
 tests/qemu-iotests/185.out             |   2 +-
 tests/qemu-iotests/219                 |  13 +-
 tests/qemu-iotests/257                 |   1 +
 tests/qemu-iotests/257.out             | 306 ++++++++++++-------------
 22 files changed, 640 insertions(+), 287 deletions(-)
 create mode 100755 scripts/simplebench/bench-backup.py

Comments

Vladimir Sementsov-Ogievskiy Sept. 18, 2020, 8:11 p.m. UTC | #1
17.07.2020 16:45, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> Refactor common path to use BlockCopyCallState pointer as parameter, to
>> prepare it for use in asynchronous block-copy (at least, we'll need to
>> run block-copy in a coroutine, passing the whole parameters as one
>> pointer).
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   block/block-copy.c | 51 ++++++++++++++++++++++++++++++++++------------
>>   1 file changed, 38 insertions(+), 13 deletions(-)
>>
>> diff --git a/block/block-copy.c b/block/block-copy.c
>> index 43a018d190..75882a094c 100644
>> --- a/block/block-copy.c
>> +++ b/block/block-copy.c
> 
> [...]
> 
>> @@ -646,16 +653,16 @@ out:
>>    * it means that some I/O operation failed in context of _this_ block_copy call,
>>    * not some parallel operation.
>>    */
>> -int coroutine_fn block_copy(BlockCopyState *s, int64_t offset, int64_t bytes,
>> -                            bool *error_is_read)
>> +static int coroutine_fn block_copy_common(BlockCopyCallState *call_state)
>>   {
>>       int ret;
>>   
>>       do {
>> -        ret = block_copy_dirty_clusters(s, offset, bytes, error_is_read);
>> +        ret = block_copy_dirty_clusters(call_state);
> 
> It’s possible that much of this code will change in a future patch of
> this series, but as it is, it might be nice to make
> block_copy_dirty_clusters’s argument a const pointer so it’s clear that
> the call to block_copy_wait_one() below will use the original @offset
> and @bytes values.

Hm. I'm trying this, and it doesn't work:

block_copy_task_entry() wants to change call_state:

    t->call_state->failed = true;

> 
> *shrug*
> 
> Reviewed-by: Max Reitz <mreitz@redhat.com>
> 
>>   
>>           if (ret == 0) {
>> -            ret = block_copy_wait_one(s, offset, bytes);
>> +            ret = block_copy_wait_one(call_state->s, call_state->offset,
>> +                                      call_state->bytes);
>>           }
>>   
>>           /*
>
Max Reitz Sept. 21, 2020, 8:54 a.m. UTC | #2
On 18.09.20 22:11, Vladimir Sementsov-Ogievskiy wrote:
> 17.07.2020 16:45, Max Reitz wrote:
>> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>>> Refactor common path to use BlockCopyCallState pointer as parameter, to
>>> prepare it for use in asynchronous block-copy (at least, we'll need to
>>> run block-copy in a coroutine, passing the whole parameters as one
>>> pointer).
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>   block/block-copy.c | 51 ++++++++++++++++++++++++++++++++++------------
>>>   1 file changed, 38 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/block/block-copy.c b/block/block-copy.c
>>> index 43a018d190..75882a094c 100644
>>> --- a/block/block-copy.c
>>> +++ b/block/block-copy.c
>>
>> [...]
>>
>>> @@ -646,16 +653,16 @@ out:
>>>    * it means that some I/O operation failed in context of _this_
>>> block_copy call,
>>>    * not some parallel operation.
>>>    */
>>> -int coroutine_fn block_copy(BlockCopyState *s, int64_t offset,
>>> int64_t bytes,
>>> -                            bool *error_is_read)
>>> +static int coroutine_fn block_copy_common(BlockCopyCallState
>>> *call_state)
>>>   {
>>>       int ret;
>>>
>>>       do {
>>> -        ret = block_copy_dirty_clusters(s, offset, bytes,
>>> error_is_read);
>>> +        ret = block_copy_dirty_clusters(call_state);
>>
>> It’s possible that much of this code will change in a future patch of
>> this series, but as it is, it might be nice to make
>> block_copy_dirty_clusters’s argument a const pointer so it’s clear that
>> the call to block_copy_wait_one() below will use the original @offset
>> and @bytes values.
> 
> Hm. I'm trying this, and it doesn't work:
> 
> block_copy_task_entry() wants to change call_state:
> 
>    t->call_state->failed = true;

Too bad :)

>> *shrug*
>>
>> Reviewed-by: Max Reitz <mreitz@redhat.com>
>>
>>>
>>>           if (ret == 0) {
>>> -            ret = block_copy_wait_one(s, offset, bytes);
>>> +            ret = block_copy_wait_one(call_state->s,
>>> call_state->offset,
>>> +                                      call_state->bytes);
>>>           }
>>>
>>>           /*
>>
> 
>
Vladimir Sementsov-Ogievskiy Sept. 21, 2020, 1:58 p.m. UTC | #3
23.07.2020 12:47, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> This brings async request handling and block-status driven chunk sizes
>> to backup out of the box, which improves backup performance.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   include/block/block-copy.h |   9 +--
>>   block/backup.c             | 145 +++++++++++++++++++------------------
>>   block/block-copy.c         |  21 ++----
>>   3 files changed, 83 insertions(+), 92 deletions(-)
> 
> This patch feels like it should be multiple ones.  I don’t see why a
> patch that lets backup use block-copy (more) should have to modify the
> block-copy code.
> 
> More specifically: I think that block_copy_set_progress_callback()
> should be removed in a separate patch.  Also, moving
> @cb_opaque/@progress_opaque from BlockCopyState into BlockCopyCallState
> seems like a separate patch to me, too.
> 
> (I presume if the cb had had its own opaque object from patch 5 on,
> there wouldn’t be this problem now.  We’d just stop using the progress
> callback in backup, and remove it in one separate patch.)
> 
> [...]
> 
>> diff --git a/block/backup.c b/block/backup.c
>> index ec2676abc2..59c00f5293 100644
>> --- a/block/backup.c
>> +++ b/block/backup.c
>> @@ -44,42 +44,19 @@ typedef struct BackupBlockJob {
>>       BlockdevOnError on_source_error;
>>       BlockdevOnError on_target_error;
>>       uint64_t len;
>> -    uint64_t bytes_read;
>>       int64_t cluster_size;
>>       int max_workers;
>>       int64_t max_chunk;
>>   
>>       BlockCopyState *bcs;
>> +
>> +    BlockCopyCallState *bcs_call;
> 
> Can this be more descriptive?  E.g. background_bcs?  bg_bcs_call?  bg_bcsc?
> 
>> +    int ret;
>> +    bool error_is_read;
>>   } BackupBlockJob;
>>   
>>   static const BlockJobDriver backup_job_driver;
>>   
> 
> [...]
> 
>>   static int coroutine_fn backup_loop(BackupBlockJob *job)
>>   {
>> -    bool error_is_read;
>> -    int64_t offset;
>> -    BdrvDirtyBitmapIter *bdbi;
>> -    int ret = 0;
>> +    while (true) { /* retry loop */
>> +        assert(!job->bcs_call);
>> +        job->bcs_call = block_copy_async(job->bcs, 0,
>> +                                         QEMU_ALIGN_UP(job->len,
>> +                                                       job->cluster_size),
>> +                                         true, job->max_workers, job->max_chunk,
>> +                                         backup_block_copy_callback, job);
>>   
>> -    bdbi = bdrv_dirty_iter_new(block_copy_dirty_bitmap(job->bcs));
>> -    while ((offset = bdrv_dirty_iter_next(bdbi)) != -1) {
>> -        do {
>> -            if (yield_and_check(job)) {
>> -                goto out;
>> +        while (job->bcs_call && !job->common.job.cancelled) {
>> +            /* wait and handle pauses */
> 
> Doesn’t someone need to reset BlockCopyCallState.cancelled when the job
> is unpaused?  I can’t see anyone doing that.
> 
> Well, or even just reentering the block-copy operation after
> backup_pause() has cancelled it.  Is there some magic I’m missing?
> 
> Does pausing (which leads to cancelling the block-copy operation) just
> yield to the CB being invoked, so the copy operation is considered done,
> and then we just enter the next iteration of the loop and try again?

Yes, that's my idea: we cancel on pause and just run a new block_copy
operation on resume.

> But cancelling the block-copy operation leads to it returning 0, AFAIR,
> so...

Looks like it should be fixed, either by returning ECANCELED or by checking the bitmap.

> 
>> +
>> +            job_pause_point(&job->common.job);
>> +
>> +            if (job->bcs_call && !job->common.job.cancelled) {
>> +                job_yield(&job->common.job);
>>               }
>> -            ret = backup_do_cow(job, offset, job->cluster_size, &error_is_read);
>> -            if (ret < 0 && backup_error_action(job, error_is_read, -ret) ==
>> -                           BLOCK_ERROR_ACTION_REPORT)
>> -            {
>> -                goto out;
>> +        }
>> +
>> +        if (!job->bcs_call && job->ret == 0) {
>> +            /* Success */
>> +            return 0;
> 
> ...I would assume we return here when the job is paused.
> 
>> +        }
>> +
>> +        if (job->common.job.cancelled) {
>> +            if (job->bcs_call) {
>> +                block_copy_cancel(job->bcs_call);
>>               }
>> -        } while (ret < 0);
>> +            return 0;
>> +        }
>> +
>> +        if (!job->bcs_call && job->ret < 0 &&
> 
> Is it even possible for bcs_call to be non-NULL here?
> 
>> +            (backup_error_action(job, job->error_is_read, -job->ret) ==
>> +             BLOCK_ERROR_ACTION_REPORT))
>> +        {
>> +            return job->ret;
>> +        }
> 
> So if we get an error, and the error action isn’t REPORT, we just try
> the whole operation again?  That doesn’t sound very IGNORE-y to me.

Not the whole operation. We have the dirty bitmap: clusters that were already copied are not touched again.
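
To illustrate: the retry is cheap exactly because of the dirty bitmap.
A rough sketch of the idea (not the exact patch code;
start_async_block_copy() and wait_for_bcs_result() are hypothetical
helpers standing in for the yield/callback machinery above):

    while (true) {
        start_async_block_copy(job);      /* copies only dirty clusters */
        ret = wait_for_bcs_result(job);   /* hypothetical helper */
        if (ret == 0) {
            return 0;                     /* everything is clean now */
        }
        if (backup_error_action(job, job->error_is_read, -ret) ==
            BLOCK_ERROR_ACTION_REPORT) {
            return ret;
        }
        /* IGNORE/STOP: loop again; clusters copied successfully were
         * already reset in the dirty bitmap and are skipped */
    }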
Vladimir Sementsov-Ogievskiy Oct. 22, 2020, 8:35 p.m. UTC | #4
22.07.2020 15:22, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> Add new parameters to configure future backup features. The patch
>> doesn't introduce aio backup requests (so we actually have only one
>> worker) neither requests larger than one cluster. Still, formally we
>> satisfy these maximums anyway, so add the parameters now, to facilitate
>> further patch which will really change backup job behavior.
>>
>> Options are added with x- prefixes, as the only use for them are some
>> very conservative iotests which will be updated soon.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   qapi/block-core.json      |  9 ++++++++-
>>   include/block/block_int.h |  7 +++++++
>>   block/backup.c            | 21 +++++++++++++++++++++
>>   block/replication.c       |  2 +-
>>   blockdev.c                |  5 +++++
>>   5 files changed, 42 insertions(+), 2 deletions(-)
>>

[..]

>> @@ -422,6 +436,11 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
>>       if (cluster_size < 0) {
>>           goto error;
>>       }
>> +    if (max_chunk && max_chunk < cluster_size) {
>> +        error_setg(errp, "Required max-chunk (%" PRIi64") is less than backup "
> 
> (missing a space after PRIi64)
> 
>> +                   "cluster size (%" PRIi64 ")", max_chunk, cluster_size);
> 
> Should this be noted in the QAPI documentation?

Hmm.. It makes sense, but I don't know what to write: should be >= job cluster_size? But then I'll have to describe the logic of backup_calculate_cluster_size()...

>  (And perhaps the fact
> that without copy offloading, we’ll never copy anything bigger than one
> cluster at a time anyway?)

This is a parameter for background copying. Look at
block_copy_task_create(): if call_state has its own max_chunk, it is used
instead of the common copy_size (derived from cluster_size). But at the
moment of this patch, background copying through async block-copy is not
yet implemented, so the parameter doesn't work, which is described in the
commit message. (See the sketch at the end of this message.)

> 
>> +        return NULL;
>> +    }
>>   
>>       /*

[..]
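
(For reference, the override described above boils down to something
like the following in block_copy_task_create() — a sketch of the shape,
not a verbatim quote:

    /* the per-call max_chunk, if set, caps the common copy_size */
    int64_t max_chunk = MIN_NON_ZERO(s->copy_size, call_state->max_chunk);

so without a per-call max_chunk, tasks use the common copy_size derived
from cluster_size.)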
Vladimir Sementsov-Ogievskiy Oct. 22, 2020, 9:20 p.m. UTC | #5
23.07.2020 11:35, Max Reitz wrote:
> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>> The further change of moving backup to be a on block-copy call will
> 
> -on?
> 
>> make copying chunk-size and cluster-size a separate things. So, even
> 
> s/a/two/
> 
>> with 64k cluster sized qcow2 image, default chunk would be 1M.
>> Test 219 depends on specified chunk-size. Update it for explicit
>> chunk-size for backup as for mirror.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   tests/qemu-iotests/219 | 13 +++++++------
>>   1 file changed, 7 insertions(+), 6 deletions(-)
>>
>> diff --git a/tests/qemu-iotests/219 b/tests/qemu-iotests/219
>> index db272c5249..2bbed28f39 100755
>> --- a/tests/qemu-iotests/219
>> +++ b/tests/qemu-iotests/219
>> @@ -203,13 +203,13 @@ with iotests.FilePath('disk.img') as disk_path, \
>>       # but related to this also automatic state transitions like job
>>       # completion), but still get pause points often enough to avoid making this
>>       # test very slow, it's important to have the right ratio between speed and
>> -    # buf_size.
>> +    # copy-chunk-size.
>>       #
>> -    # For backup, buf_size is hard-coded to the source image cluster size (64k),
>> -    # so we'll pick the same for mirror. The slice time, i.e. the granularity
>> -    # of the rate limiting is 100ms. With a speed of 256k per second, we can
>> -    # get four pause points per second. This gives us 250ms per iteration,
>> -    # which should be enough to stay deterministic.
>> +    # Chose 64k copy-chunk-size both for mirror (by buf_size) and backup (by
>> +    # x-max-chunk). The slice time, i.e. the granularity of the rate limiting
>> +    # is 100ms. With a speed of 256k per second, we can get four pause points
>> +    # per second. This gives us 250ms per iteration, which should be enough to
>> +    # stay deterministic.
> 
> Don’t we also have to limit the number of workers to 1 so we actually
> keep 250 ms per iteration instead of just finishing four requests
> immediately, then pausing for a second?

The block-copy rate limiter works well enough: it will not start too many
requests (see the sketch at the end of this message).

> 
>>       test_job_lifecycle(vm, 'drive-mirror', has_ready=True, job_args={
>>           'device': 'drive0-node',
>> @@ -226,6 +226,7 @@ with iotests.FilePath('disk.img') as disk_path, \
>>                   'target': copy_path,
>>                   'sync': 'full',
>>                   'speed': 262144,
>> +                'x-max-chunk': 65536,
>>                   'auto-finalize': auto_finalize,
>>                   'auto-dismiss': auto_dismiss,
>>               })
>>
> 
>
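
(To illustrate: a sketch of the mechanism, under the assumption that
block-copy uses the common RateLimit helper, not a verbatim quote of the
patch. Before spawning the next task, the background coroutine does
something like

    /* sleep before starting the next chunk if over the speed limit */
    uint64_t delay_ns = ratelimit_calculate_delay(&s->rate_limit,
                                                  chunk_bytes);
    if (delay_ns > 0) {
        qemu_co_sleep_ns(QEMU_CLOCK_REALTIME, delay_ns);
    }

so new worker requests simply aren't launched faster than the configured
speed.)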
Vladimir Sementsov-Ogievskiy Oct. 26, 2020, 3:18 p.m. UTC | #6
23.07.2020 12:47, Max Reitz wrote:
>> +static void coroutine_fn backup_set_speed(BlockJob *job, int64_t speed)
>> +{
>> +    BackupBlockJob *s = container_of(job, BackupBlockJob, common);
>> +
>> +    if (s->bcs) {
>> +        /* In block_job_create we yet don't have bcs */
> Shouldn’t hurt to make it conditional, but how can we get here in
> block_job_create()?
> 

block_job_set_speed is called from block_job_create.
Max Reitz Nov. 4, 2020, 5:19 p.m. UTC | #7
On 22.10.20 22:35, Vladimir Sementsov-Ogievskiy wrote:
> 22.07.2020 15:22, Max Reitz wrote:
>> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>>> Add new parameters to configure future backup features. The patch
>>> doesn't introduce aio backup requests (so we actually have only one
>>> worker) neither requests larger than one cluster. Still, formally we
>>> satisfy these maximums anyway, so add the parameters now, to facilitate
>>> further patch which will really change backup job behavior.
>>>
>>> Options are added with x- prefixes, as the only use for them are some
>>> very conservative iotests which will be updated soon.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>   qapi/block-core.json      |  9 ++++++++-
>>>   include/block/block_int.h |  7 +++++++
>>>   block/backup.c            | 21 +++++++++++++++++++++
>>>   block/replication.c       |  2 +-
>>>   blockdev.c                |  5 +++++
>>>   5 files changed, 42 insertions(+), 2 deletions(-)
>>>
> 
> [..]
> 
>>> @@ -422,6 +436,11 @@ BlockJob *backup_job_create(const char *job_id,
>>> BlockDriverState *bs,
>>>       if (cluster_size < 0) {
>>>           goto error;
>>>       }
>>> +    if (max_chunk && max_chunk < cluster_size) {
>>> +        error_setg(errp, "Required max-chunk (%" PRIi64") is less
>>> than backup "
>>
>> (missing a space after PRIi64)
>>
>>> +                   "cluster size (%" PRIi64 ")", max_chunk,
>>> cluster_size);
>>
>> Should this be noted in the QAPI documentation?
> 
> Hmm.. It makes sense, but I don't know what to write: should be >= job
> cluster_size? But then I'll have to describe the logic of
> backup_calculate_cluster_size()...

Isn’t the logic basically just “cluster size of the target image file
(min 64k)”?  The other cases just cover error cases, and they always
return 64k, which would effectively be documented by stating that 64k is
the minimum.

But in the common cases (qcow2 or raw), we’ll never get an error anyway.

I think it’d be good to describe the cluster size somewhere, because
otherwise I find it a bit malicious to throw an error at the user that
they could not have anticipated because they have no idea what valid
values for the parameter are (until they try).

>>  (And perhaps the fact
>> that without copy offloading, we’ll never copy anything bigger than one
>> cluster at a time anyway?)
> 
> This is a parameter for background copying. Look at
> block_copy_task_create(), if call_state has own max_chunk, it's used
> instead of common copy_size (derived from cluster_size). But at a moment
> of this patch background process through async block-copy is not yet
> implemented, and the parameter doesn't work, which is described in
> commit message.

Ah, OK.  Right.

Max
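
For QAPI documentation purposes, the logic Max describes above amounts
to something like this (a sketch under the assumption that the target's
cluster size comes from bdrv_get_info(); not a verbatim quote of
backup_calculate_cluster_size()):

#define BACKUP_CLUSTER_SIZE_DEFAULT (1 << 16)   /* 64 KiB minimum */

static int64_t backup_calculate_cluster_size(BlockDriverState *target)
{
    BlockDriverInfo bdi;

    /* error cases, and targets that report no cluster size, fall back
     * to the 64 KiB default */
    if (bdrv_get_info(target, &bdi) < 0 || bdi.cluster_size == 0) {
        return BACKUP_CLUSTER_SIZE_DEFAULT;
    }

    return MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
}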
Max Reitz Nov. 4, 2020, 5:45 p.m. UTC | #8
On 26.10.20 16:18, Vladimir Sementsov-Ogievskiy wrote:
> 23.07.2020 12:47, Max Reitz wrote:
>>> +static void coroutine_fn backup_set_speed(BlockJob *job, int64_t speed)
>>> +{
>>> +    BackupBlockJob *s = container_of(job, BackupBlockJob, common);
>>> +
>>> +    if (s->bcs) {
>>> +        /* In block_job_create we yet don't have bcs */
>> Shouldn’t hurt to make it conditional, but how can we get here in
>> block_job_create()?
>>
> 
> block_job_set_speed is called from block_job_create.

Ah, right.

Max
Vladimir Sementsov-Ogievskiy Nov. 9, 2020, 11:11 a.m. UTC | #9
04.11.2020 20:19, Max Reitz wrote:
> On 22.10.20 22:35, Vladimir Sementsov-Ogievskiy wrote:
>> 22.07.2020 15:22, Max Reitz wrote:
>>> On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
>>>> Add new parameters to configure future backup features. The patch
>>>> doesn't introduce aio backup requests (so we actually have only one
>>>> worker) neither requests larger than one cluster. Still, formally we
>>>> satisfy these maximums anyway, so add the parameters now, to facilitate
>>>> further patch which will really change backup job behavior.
>>>>
>>>> Options are added with x- prefixes, as the only use for them are some
>>>> very conservative iotests which will be updated soon.
>>>>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>>> ---
>>>>    qapi/block-core.json      |  9 ++++++++-
>>>>    include/block/block_int.h |  7 +++++++
>>>>    block/backup.c            | 21 +++++++++++++++++++++
>>>>    block/replication.c       |  2 +-
>>>>    blockdev.c                |  5 +++++
>>>>    5 files changed, 42 insertions(+), 2 deletions(-)
>>>>
>>
>> [..]
>>
>>>> @@ -422,6 +436,11 @@ BlockJob *backup_job_create(const char *job_id,
>>>> BlockDriverState *bs,
>>>>        if (cluster_size < 0) {
>>>>            goto error;
>>>>        }
>>>> +    if (max_chunk && max_chunk < cluster_size) {
>>>> +        error_setg(errp, "Required max-chunk (%" PRIi64") is less
>>>> than backup "
>>>
>>> (missing a space after PRIi64)
>>>
>>>> +                   "cluster size (%" PRIi64 ")", max_chunk,
>>>> cluster_size);
>>>
>>> Should this be noted in the QAPI documentation?
>>
>> Hmm.. It makes sense, but I don't know what to write: should be >= job
>> cluster_size? But then I'll have to describe the logic of
>> backup_calculate_cluster_size()...
> 
> Isn’t the logic basically just “cluster size of the target image file
> (min 64k)”?  The other cases just cover error cases, and they always
> return 64k, which would effectively be documented by stating that 64k is
> the minimum.
> 
> But in the common cases (qcow2 or raw), we’ll never get an error anyway.
> 
> I think it’d be good to describe the cluster size somewhere, because
> otherwise I find it a bit malicious to throw an error at the user that
> they could not have anticipated because they have no idea what valid
> values for the parameter are (until they try).

OK

>>>   (And perhaps the fact
>>> that without copy offloading, we’ll never copy anything bigger than one
>>> cluster at a time anyway?)
>>
>> This is a parameter for background copying. Look at
>> block_copy_task_create(), if call_state has own max_chunk, it's used
>> instead of common copy_size (derived from cluster_size). But at a moment
>> of this patch background process through async block-copy is not yet
>> implemented, and the parameter doesn't work, which is described in
>> commit message.
> 
> Ah, OK.  Right.
> 
> Max

-- 
Best regards,
Vladimir