Message ID: 20210707183616.5620-10-brijesh.singh@amd.com
State: New
Series: [Part2,RFC,v4,01/40] KVM: SVM: Add support to handle AP reset MSR protocol
> @@ -502,6 +503,81 @@ static void show_ldttss(const struct desc_ptr *gdt, const char *name, u16 index)
>  		 name, index, addr, (desc.limit0 | (desc.limit1 << 16)));
>  }
>  
> +static void dump_rmpentry(unsigned long address)
> +{

A comment on this sucker would be nice.  I *think* this must be a kernel
virtual address.  Reflecting that into the naming or a comment would be
nice.

> +	struct rmpentry *e;
> +	unsigned long pfn;
> +	pgd_t *pgd;
> +	pte_t *pte;
> +	int level;
> +
> +	pgd = __va(read_cr3_pa());
> +	pgd += pgd_index(address);
> +
> +	pte = lookup_address_in_pgd(pgd, address, &level);
> +	if (unlikely(!pte))
> +		return;

It's a little annoying this is doing *another* separate page walk.
Don't we already do this for dumping the page tables themselves at oops
time?

Also, please get rid of all of the likely/unlikely()s in your patches.
They are pure noise unless you have specific knowledge of the compiler
getting something so horribly wrong that it affects real-world
performance.

> +	switch (level) {
> +	case PG_LEVEL_4K: {
> +		pfn = pte_pfn(*pte);
> +		break;
> +	}

These superfluous brackets are really strange looking.  Could you remove
them, please?

> +	case PG_LEVEL_2M: {
> +		pfn = pmd_pfn(*(pmd_t *)pte);
> +		break;
> +	}
> +	case PG_LEVEL_1G: {
> +		pfn = pud_pfn(*(pud_t *)pte);
> +		break;
> +	}
> +	case PG_LEVEL_512G: {
> +		pfn = p4d_pfn(*(p4d_t *)pte);
> +		break;
> +	}
> +	default:
> +		return;
> +	}
> +
> +	e = snp_lookup_page_in_rmptable(pfn_to_page(pfn), &level);

So, lookup_address_in_pgd() looks to me like it will return pretty
random page table entries as long as the entry isn't
p{gd,4d,ud,md,te}_none().  It can certainly return !p*_present()
entries.  Those are *NOT* safe to call pfn_to_page() on.

> +	if (unlikely(!e))
> +		return;
> +
> +	/*
> +	 * If the RMP entry at the faulting address was not assigned, then
> +	 * dump may not provide any useful debug information.  Iterate
> +	 * through the entire 2MB region, and dump the RMP entries if one
> +	 * of the bit in the RMP entry is set.
> +	 */

Some of this comment should be moved down to the loop itself.

> +	if (rmpentry_assigned(e)) {
> +		pr_alert("RMPEntry paddr 0x%lx [assigned=%d immutable=%d pagesize=%d gpa=0x%lx"
> +			 " asid=%d vmsa=%d validated=%d]\n", pfn << PAGE_SHIFT,
> +			 rmpentry_assigned(e), rmpentry_immutable(e), rmpentry_pagesize(e),
> +			 rmpentry_gpa(e), rmpentry_asid(e), rmpentry_vmsa(e),
> +			 rmpentry_validated(e));
> +
> +		pr_alert("RMPEntry paddr 0x%lx %016llx %016llx\n", pfn << PAGE_SHIFT,
> +			 e->high, e->low);

Could you please include an entire oops in the changelog that also
includes this information?  It would be really nice if this was at least
consistent in style to the stuff around it.

Also, how much of this stuff like rmpentry_asid() is duplicated in the
"raw" dump of e->high and e->low?

> +	} else {
> +		unsigned long pfn_end;
> +
> +		pfn = pfn & ~0x1ff;

There's a nice magic number.  Why not:

	pfn = pfn & ~(PTRS_PER_PMD-1);

?

This also needs a comment about *WHY* this case is looking at a 2MB
region.

> +		pfn_end = pfn + PTRS_PER_PMD;
> +
> +		while (pfn < pfn_end) {
> +			e = snp_lookup_page_in_rmptable(pfn_to_page(pfn), &level);
> +
> +			if (unlikely(!e))
> +				return;
> +
> +			if (e->low || e->high)
> +				pr_alert("RMPEntry paddr 0x%lx: %016llx %016llx\n",
> +					 pfn << PAGE_SHIFT, e->high, e->low);

Why does this dump "raw" RMP entries while the above stuff filters them
through a bunch of helper macros?

> +			pfn++;
> +		}
> +	}
> +}
> +
>  static void
>  show_fault_oops(struct pt_regs *regs, unsigned long error_code, unsigned long address)
>  {
> @@ -578,6 +654,9 @@ show_fault_oops(struct pt_regs *regs, unsigned long error_code, unsigned long ad
>  	}
>  
>  	dump_pagetable(address);
> +
> +	if (error_code & X86_PF_RMP)
> +		dump_rmpentry(address);
>  }
>  
>  static noinline void
> 
Hi Dave,

On 7/7/21 2:21 PM, Dave Hansen wrote:
>> @@ -502,6 +503,81 @@ static void show_ldttss(const struct desc_ptr *gdt, const char *name, u16 index)
>>  		 name, index, addr, (desc.limit0 | (desc.limit1 << 16)));
>>  }
>>  
>> +static void dump_rmpentry(unsigned long address)
>> +{
> 
> A comment on this sucker would be nice.  I *think* this must be a kernel
> virtual address.  Reflecting that into the naming or a comment would be
> nice.

Ack, I will add a comment.

> 
>> +	struct rmpentry *e;
>> +	unsigned long pfn;
>> +	pgd_t *pgd;
>> +	pte_t *pte;
>> +	int level;
>> +
>> +	pgd = __va(read_cr3_pa());
>> +	pgd += pgd_index(address);
>> +
>> +	pte = lookup_address_in_pgd(pgd, address, &level);
>> +	if (unlikely(!pte))
>> +		return;
> 
> It's a little annoying this is doing *another* separate page walk.
> Don't we already do this for dumping the page tables themselves at oops
> time?
> 

Yes, we already do the walk in the oops function. I'll extend
dump_rmpentry() to use the level from the oops walk to avoid the
duplicate walk.

> Also, please get rid of all of the likely/unlikely()s in your patches.
> They are pure noise unless you have specific knowledge of the compiler
> getting something so horribly wrong that it affects real-world
> performance.
> 
>> +	switch (level) {
>> +	case PG_LEVEL_4K: {
>> +		pfn = pte_pfn(*pte);
>> +		break;
>> +	}
> 
> These superfluous brackets are really strange looking.  Could you remove
> them, please?

Noted.

> 
>> +	case PG_LEVEL_2M: {
>> +		pfn = pmd_pfn(*(pmd_t *)pte);
>> +		break;
>> +	}
>> +	case PG_LEVEL_1G: {
>> +		pfn = pud_pfn(*(pud_t *)pte);
>> +		break;
>> +	}
>> +	case PG_LEVEL_512G: {
>> +		pfn = p4d_pfn(*(p4d_t *)pte);
>> +		break;
>> +	}
>> +	default:
>> +		return;
>> +	}
>> +
>> +	e = snp_lookup_page_in_rmptable(pfn_to_page(pfn), &level);
> 
> So, lookup_address_in_pgd() looks to me like it will return pretty
> random page table entries as long as the entry isn't
> p{gd,4d,ud,md,te}_none().  It can certainly return !p*_present()
> entries.
> Those are *NOT* safe to call pfn_to_page() on.
> 

I will add some checks to make sure that we are accessing only safe pfns.

>> +	if (unlikely(!e))
>> +		return;
>> +
>> +	/*
>> +	 * If the RMP entry at the faulting address was not assigned, then
>> +	 * dump may not provide any useful debug information.  Iterate
>> +	 * through the entire 2MB region, and dump the RMP entries if one
>> +	 * of the bit in the RMP entry is set.
>> +	 */
> 
> Some of this comment should be moved down to the loop itself.

Noted.

> 
>> +	if (rmpentry_assigned(e)) {
>> +		pr_alert("RMPEntry paddr 0x%lx [assigned=%d immutable=%d pagesize=%d gpa=0x%lx"
>> +			 " asid=%d vmsa=%d validated=%d]\n", pfn << PAGE_SHIFT,
>> +			 rmpentry_assigned(e), rmpentry_immutable(e), rmpentry_pagesize(e),
>> +			 rmpentry_gpa(e), rmpentry_asid(e), rmpentry_vmsa(e),
>> +			 rmpentry_validated(e));
>> +
>> +		pr_alert("RMPEntry paddr 0x%lx %016llx %016llx\n", pfn << PAGE_SHIFT,
>> +			 e->high, e->low);
> 
> Could you please include an entire oops in the changelog that also
> includes this information?  It would be really nice if this was at least
> consistent in style to the stuff around it.

Here is one example (in this case the page was immutable and the HV
attempted to write to it):

BUG: unable to handle page fault for address: ffff98c78ee00000
#PF: supervisor write access in kernel mode
#PF: error_code(0x80000003) - rmp violation
PGD 304b201067 P4D 304b201067 PUD 20c7f06063 PMD 20c8976063 PTE 80000020cee00163
RMPEntry paddr 0x20cee00000 [assigned=1 immutable=1 pagesize=0 gpa=0x0 asid=0 vmsa=0 validated=0]
RMPEntry paddr 0x20cee00000 000000000000000f 8000000000000ffd

> 
> Also, how much of this stuff like rmpentry_asid() is duplicated in the
> "raw" dump of e->high and e->low?
> 

Most of the rmpentry_xxx accessors read e->low. The RMP entry is 16
bytes. The AMD APM defines only a few bits and keeps everything else
reserved. We are in the process of updating the APM to document a few
more bits. I am not adding accessors for the undocumented fields.
Until then, we dump the entire 16 bytes.

I agree that we are duplicating the information. I can live with just a
raw dump. That means anyone who is debugging the crash will have to look
at the APM to decode the fields.

>> +	} else {
>> +		unsigned long pfn_end;
>> +
>> +		pfn = pfn & ~0x1ff;
> 
> There's a nice magic number.  Why not:
> 
> 	pfn = pfn & ~(PTRS_PER_PMD-1);
> 
> ?

Noted.

> 
> This also needs a comment about *WHY* this case is looking at a 2MB
> region.
> 

Actually the comment above says why we are looking at the 2MB region.
Let me rearrange the comment block so that it's clearer.

The reason for iterating through the 2MB region is: if the faulting
address is not assigned in the RMP table, and the page table walk level
is 2MB, then one of the entries within the large page is the root cause
of the fault. Since we don't know which entry, I dump all the non-zero
entries.

>> +	pfn_end = pfn + PTRS_PER_PMD;
>> +
>> +	while (pfn < pfn_end) {
>> +		e = snp_lookup_page_in_rmptable(pfn_to_page(pfn), &level);
>> +
>> +		if (unlikely(!e))
>> +			return;
>> +
>> +		if (e->low || e->high)
>> +			pr_alert("RMPEntry paddr 0x%lx: %016llx %016llx\n",
>> +				 pfn << PAGE_SHIFT, e->high, e->low);
> 
> Why does this dump "raw" RMP entries while the above stuff filters them
> through a bunch of helper macros?
> 

There are two cases which we need to consider:

1) the faulting page is guest private (aka assigned)
2) the faulting page is a hypervisor page (aka shared)

We will primarily be seeing #1. In this case, we know it's an assigned
page, and we can decode the fields.

#2 will happen in rare conditions; if it happens, one of the
undocumented bits in the RMP entry can provide us some useful
information, hence we dump the raw values.

-Brijesh
On 7/8/21 8:02 AM, Brijesh Singh wrote:
...
>>> +	pgd = __va(read_cr3_pa());
>>> +	pgd += pgd_index(address);
>>> +
>>> +	pte = lookup_address_in_pgd(pgd, address, &level);
>>> +	if (unlikely(!pte))
>>> +		return;
>>
>> It's a little annoying this is doing *another* separate page walk.
>> Don't we already do this for dumping the page tables themselves at oops
>> time?
> 
> Yes, we already do the walk in oops function, I'll extend the
> dump_rmpentry() to use the level from the oops to avoid the duplicate walk.

I was even thinking that you could use the pmd/pte entries that come
from the walk in dump_pagetable().

BTW, I think the snp_lookup_page_in_rmptable() interface is probably
wrong.  It takes a 'struct page':

+struct rmpentry *snp_lookup_page_in_rmptable(struct page *page, int *level)

but then immediately converts it to a paddr:

> +	unsigned long phys = page_to_pfn(page) << PAGE_SHIFT;

If you just had it take a paddr, you wouldn't have to mess with all of
this pfn_valid() and phys_to_page() error checking.

>>> +	case PG_LEVEL_2M: {
>>> +		pfn = pmd_pfn(*(pmd_t *)pte);
>>> +		break;
>>> +	}
>>> +	case PG_LEVEL_1G: {
>>> +		pfn = pud_pfn(*(pud_t *)pte);
>>> +		break;
>>> +	}
>>> +	case PG_LEVEL_512G: {
>>> +		pfn = p4d_pfn(*(p4d_t *)pte);
>>> +		break;
>>> +	}
>>> +	default:
>>> +		return;
>>> +	}
>>> +
>>> +	e = snp_lookup_page_in_rmptable(pfn_to_page(pfn), &level);
>>
>> So, lookup_address_in_pgd() looks to me like it will return pretty
>> random page table entries as long as the entry isn't
>> p{gd,4d,ud,md,te}_none().  It can certainly return !p*_present()
>> entries.  Those are *NOT* safe to call pfn_to_page() on.
>>
> 
> I will add some checks to make sure that we are accessing only safe pfn's.

Or fix the snp_lookup_page_in_rmptable() interface, please.
>>> +	if (rmpentry_assigned(e)) {
>>> +		pr_alert("RMPEntry paddr 0x%lx [assigned=%d immutable=%d
>>> pagesize=%d gpa=0x%lx"
>>> +		" asid=%d vmsa=%d validated=%d]\n", pfn << PAGE_SHIFT,
>>> +		rmpentry_assigned(e), rmpentry_immutable(e),
>>> rmpentry_pagesize(e),
>>> +		rmpentry_gpa(e), rmpentry_asid(e), rmpentry_vmsa(e),
>>> +		rmpentry_validated(e));
>>> +
>>> +		pr_alert("RMPEntry paddr 0x%lx %016llx %016llx\n", pfn <<
>>> PAGE_SHIFT,
>>> +		e->high, e->low);
>>
>> Could you please include an entire oops in the changelog that also
>> includes this information?  It would be really nice if this was at least
>> consistent in style to the stuff around it.
> 
> Here is one example: (in this case page was immutable and HV attempted
> to write to it).
> 
> BUG: unable to handle page fault for address: ffff98c78ee00000
> #PF: supervisor write access in kernel mode
> #PF: error_code(0x80000003) - rmp violation

Let's capitalize "RMP" here, please.

> PGD 304b201067 P4D 304b201067 PUD 20c7f06063 PMD 20c8976063 PTE
> 80000020cee00163
> RMPEntry paddr 0x20cee00000 [assigned=1 immutable=1 pagesize=0 gpa=0x0
> asid=0 vmsa=0 validated=0]
> RMPEntry paddr 0x20cee00000 000000000000000f 8000000000000ffd

That's a good example, thanks!

But, it does make me think that we shouldn't be spitting out
"immutable".  Should we call it "readonly" or something so that folks
have a better chance of figuring out what's wrong?  Even better, should
we be looking specifically for X86_PF_RMP *and* immutable=1 and spitting
out something in english about it?

This also *looks* to be spitting out the same "RMPEntry paddr
0x20cee00000" more than once.  Maybe we should just indent the extra
entries instead of repeating things.  The high/low are missing a "0x"
prefix, they also don't have any kind of text label.

>> Also, how much of this stuff like rmpentry_asid() is duplicated in the
>> "raw" dump of e->high and e->low?
> 
> Most of the rmpentry_xxx assessors read the e->low. The RMP entry is a
> 16-bytes.
> AMD APM defines only a few bits and keeps everything else
> reserved. We are in the process of updating APM to document few more
> bits. I am not adding assessors for the undocumented fields. Until then,
> we dump the entire 16-bytes.
> 
> I agree that we are duplicating the information. I can live with just a
> raw dump. That means anyone who is debugging the crash will look at the
> APM to decode the fields.

I actually really like processing the fields.  I think it's a good
investment to make the error messages as self-documenting as possible
and not require the poor souls who are decoding oopses to also keep each
vendor's architecture manuals at hand.

>> This also needs a comment about *WHY* this case is looking at a 2MB
>> region.
>>
> 
> Actually the comment above says why we are looking for the 2MB region.
> Let me rearrange the comment block so that its more clear.
> 
> The reason for iterating through 2MB region is; if the faulting address
> is not assigned in the RMP table, and page table walk level is 2MB then
> one of entry within the large page is the root cause of the fault. Since
> we don't know which entry hence I dump all the non-zero entries.

Logically you can figure this out though, right?  Why throw 511 entries
at the console when we *know* they're useless?

>>> +	pfn_end = pfn + PTRS_PER_PMD;
>>> +
>>> +	while (pfn < pfn_end) {
>>> +		e = snp_lookup_page_in_rmptable(pfn_to_page(pfn), &level);
>>> +
>>> +		if (unlikely(!e))
>>> +			return;
>>> +
>>> +		if (e->low || e->high)
>>> +			pr_alert("RMPEntry paddr 0x%lx: %016llx %016llx\n",
>>> +				 pfn << PAGE_SHIFT, e->high, e->low);
>>
>> Why does this dump "raw" RMP entries while the above stuff filters them
>> through a bunch of helper macros?
> 
> There are two cases which we need to consider:
> 
> 1) the faulting page is a guest private (aka assigned)
> 2) the faulting page is a hypervisor (aka shared)
> 
> We will be primarily seeing #1. In this case, we know its a assigned
> page, and we can decode the fields.
> 
> The #2 will happen in rare conditions,

What rare conditions?

> if it happens, one of the undocumented bit in the RMP entry can
> provide us some useful information hence we dump the raw values.

You're saying that there are things that can cause RMP faults that
aren't documented?  That's rather nasty for your users, don't you think?

I'd be fine if you want to define a mask of unknown bits and spit out to
the users that some unknown bits are set.
On 7/8/21 10:30 AM, Dave Hansen wrote:
> I was even thinking that you could use the pmd/pte entries that come
> from the walk in dump_pagetable().
> 
> BTW, I think the snp_lookup_page_in_rmptable() interface is probably
> wrong.  It takes a 'struct page':
> 

In some cases callers already have a 'struct page', so it was easier for
them. I can change it to snp_lookup_pfn_in_rmptable() to simplify
things. Callers that already have a 'struct page' will simply do
page_to_pfn().

> +struct rmpentry *snp_lookup_page_in_rmptable(struct page *page, int *level)
> 
> but then immediately converts it to a paddr:
> 
>> +	unsigned long phys = page_to_pfn(page) << PAGE_SHIFT;
> 
> If you just had it take a paddr, you wouldn't have to mess with all of
> this pfn_valid() and phys_to_page() error checking.

Noted.

> 
> Or fix the snp_lookup_page_in_rmptable() interface, please.

Yes.

> 
> Let's capitalize "RMP" here, please.

Noted.

> 
>> PGD 304b201067 P4D 304b201067 PUD 20c7f06063 PMD 20c8976063 PTE
>> 80000020cee00163
>> RMPEntry paddr 0x20cee00000 [assigned=1 immutable=1 pagesize=0 gpa=0x0
                                                       ^^^^^^^^^^
>> asid=0 vmsa=0 validated=0]
>> RMPEntry paddr 0x20cee00000 000000000000000f 8000000000000ffd
> 
> That's a good example, thanks!
> 
> But, it does make me think that we shouldn't be spitting out
> "immutable".  Should we call it "readonly" or something so that folks
> have a better chance of figuring out what's wrong?  Even better, should
> we be looking specifically for X86_PF_RMP *and* immutable=1 and spitting
> out something in english about it?
> 

A write to an assigned page will cause the RMP violation. In this case,
the page happened to be a firmware page, hence the immutable bit was
also set. I am trying to use the field names as documented in the APM
and the SEV-SNP firmware spec.

> This also *looks* to be spitting out the same "RMPEntry paddr
> 0x20cee00000" more than once.  Maybe we should just indent the extra
> entries instead of repeating things.
> The high/low are missing a "0x"
> prefix, they also don't have any kind of text label.
> 

Noted, I will fix it.

> 
> I actually really like processing the fields.  I think it's a good
> investment to make the error messages as self-documenting as possible
> and not require the poor souls who are decoding oopses to also keep each
> vendor's architecture manuals at hand.
> 

Sounds good, I will keep it as-is.

>> 
>> The reason for iterating through 2MB region is; if the faulting address
>> is not assigned in the RMP table, and page table walk level is 2MB then
>> one of entry within the large page is the root cause of the fault. Since
>> we don't know which entry hence I dump all the non-zero entries.
> 
> Logically you can figure this out though, right?  Why throw 511 entries
> at the console when we *know* they're useless?

Logically it's going to be tricky to figure out which exact entry caused
the fault, hence I dump any non-zero entry. I understand it may dump
some useless entries.

>> There are two cases which we need to consider:
>> 
>> 1) the faulting page is a guest private (aka assigned)
>> 2) the faulting page is a hypervisor (aka shared)
>> 
>> We will be primarily seeing #1. In this case, we know its a assigned
>> page, and we can decode the fields.
>> 
>> The #2 will happen in rare conditions,
> 
> What rare conditions?
> 

One such condition is when the RMP "in-use" bit is set; see patch 20/40.
After applying that patch we should not see the "in-use" bit set. If we
run into similar issues, a full RMP dump will greatly help debugging.

>> if it happens, one of the undocumented bit in the RMP entry can
>> provide us some useful information hence we dump the raw values.
> You're saying that there are things that can cause RMP faults that
> aren't documented?  That's rather nasty for your users, don't you think?
> 

The "in-use" bit in the RMP entry caught me off guard.
The AMD APM does say that hardware sets the in-use bit, but it *never*
explains in detail how to check whether a fault was due to the in-use
bit in the RMP table. As I said, the documentation folks will be
updating the RMP entry description to document the in-use bit. I hope we
will not see any other undocumented surprises; I am keeping my fingers
crossed :)

> I'd be fine if you want to define a mask of unknown bits and spit out to
> the users that some unknown bits are set.
> 
On 7/8/21 11:58 AM, Dave Hansen wrote:
>> Logically its going to be tricky to figure out which exact entry caused
>> the fault, hence I dump any non-zero entry. I understand it may dump
>> some useless.
> 
> What's tricky about it?
> 
> Sure, there's a possibility that more than one entry could contribute to
> a fault.  But, you always know *IF* an entry could contribute to a fault.
> 
> I'm fine if you run through the logic, don't find a known reason
> (specific RMP entry) for the fault, and dump the whole table in that
> case.  But, unconditionally polluting the kernel log with noise isn't
> very nice for debugging.

The tricky part is determining which undocumented bit to check to know
when we should stop dumping. I can go with your suggestion: first try
the known reasons, and fall back to dumping the whole table only for
unknown reasons.

thanks
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 2715240c757e..195149eae9b6 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -19,6 +19,7 @@
 #include <linux/uaccess.h>		/* faulthandler_disabled()	*/
 #include <linux/efi.h>			/* efi_crash_gracefully_on_page_fault()*/
 #include <linux/mm_types.h>
+#include <linux/sev.h>			/* snp_lookup_page_in_rmptable() */
 
 #include <asm/cpufeature.h>		/* boot_cpu_has, ...		*/
 #include <asm/traps.h>			/* dotraplinkage, ...		*/
@@ -502,6 +503,81 @@ static void show_ldttss(const struct desc_ptr *gdt, const char *name, u16 index)
 		 name, index, addr, (desc.limit0 | (desc.limit1 << 16)));
 }
 
+static void dump_rmpentry(unsigned long address)
+{
+	struct rmpentry *e;
+	unsigned long pfn;
+	pgd_t *pgd;
+	pte_t *pte;
+	int level;
+
+	pgd = __va(read_cr3_pa());
+	pgd += pgd_index(address);
+
+	pte = lookup_address_in_pgd(pgd, address, &level);
+	if (unlikely(!pte))
+		return;
+
+	switch (level) {
+	case PG_LEVEL_4K: {
+		pfn = pte_pfn(*pte);
+		break;
+	}
+	case PG_LEVEL_2M: {
+		pfn = pmd_pfn(*(pmd_t *)pte);
+		break;
+	}
+	case PG_LEVEL_1G: {
+		pfn = pud_pfn(*(pud_t *)pte);
+		break;
+	}
+	case PG_LEVEL_512G: {
+		pfn = p4d_pfn(*(p4d_t *)pte);
+		break;
+	}
+	default:
+		return;
+	}
+
+	e = snp_lookup_page_in_rmptable(pfn_to_page(pfn), &level);
+	if (unlikely(!e))
+		return;
+
+	/*
+	 * If the RMP entry at the faulting address was not assigned, then
+	 * dump may not provide any useful debug information.  Iterate
+	 * through the entire 2MB region, and dump the RMP entries if one
+	 * of the bit in the RMP entry is set.
+	 */
+	if (rmpentry_assigned(e)) {
+		pr_alert("RMPEntry paddr 0x%lx [assigned=%d immutable=%d pagesize=%d gpa=0x%lx"
+			 " asid=%d vmsa=%d validated=%d]\n", pfn << PAGE_SHIFT,
+			 rmpentry_assigned(e), rmpentry_immutable(e), rmpentry_pagesize(e),
+			 rmpentry_gpa(e), rmpentry_asid(e), rmpentry_vmsa(e),
+			 rmpentry_validated(e));
+
+		pr_alert("RMPEntry paddr 0x%lx %016llx %016llx\n", pfn << PAGE_SHIFT,
+			 e->high, e->low);
+	} else {
+		unsigned long pfn_end;
+
+		pfn = pfn & ~0x1ff;
+		pfn_end = pfn + PTRS_PER_PMD;
+
+		while (pfn < pfn_end) {
+			e = snp_lookup_page_in_rmptable(pfn_to_page(pfn), &level);
+
+			if (unlikely(!e))
+				return;
+
+			if (e->low || e->high)
+				pr_alert("RMPEntry paddr 0x%lx: %016llx %016llx\n",
+					 pfn << PAGE_SHIFT, e->high, e->low);
+			pfn++;
+		}
+	}
+}
+
 static void
 show_fault_oops(struct pt_regs *regs, unsigned long error_code, unsigned long address)
 {
@@ -578,6 +654,9 @@ show_fault_oops(struct pt_regs *regs, unsigned long error_code, unsigned long ad
 	}
 
 	dump_pagetable(address);
+
+	if (error_code & X86_PF_RMP)
+		dump_rmpentry(address);
 }
 
 static noinline void
When SEV-SNP is enabled globally, a write from the host goes through the
RMP check. If the hardware encounters a check failure, it raises a #PF
with the RMP bit set in the error code. Dump the RMP entry at the
faulting address to help with debugging.

Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
---
 arch/x86/mm/fault.c | 79 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)