From patchwork Fri Oct 11 13:39:21 2013
X-Patchwork-Submitter: Edward Nevill
X-Patchwork-Id: 20969
Message-ID: <1381498761.30716.3.camel@localhost.localdomain>
Subject: RFR: Merge up to jdk8-b110
From: Edward Nevill
Reply-To: edward.nevill@linaro.org
To: aarc64@openjdk.java.net
Cc: patches@linaro.org
Date: Fri, 11 Oct 2013 14:39:21 +0100
Organization: Linaro

Hi,

The attached changesets merge the aarch64 port up to jdk8-b110 from jdk8-b90. Tag jdk8-b110 is dated Oct 2nd, 2013.

I have built C1 and C2 and tested as follows:

Cross-compilation builds, tested on the RTSM model:

  client/fastdebug/hotspot        333/16/18
  server/release/hotspot (*)      323/32/12

  (*) This test used client/release as the test harness.

Built-in simulator builds:

  client/slowdebug/hotspot-sanity   3/0/0
  client/release/hotspot            323/27/17

There is a known problem with UseCompressedKlassPointers (or UseCompressedClassPointers as they are now known). These were broken somewhere between b104 and b105. I need to go back and see why they are broken; however, I would like to get this mega merge off my desk before I look into that.

I also see sporadic crashes in GC every few hours. However, I also see these with the existing aarch64 tip, and they do not seem any more frequent post merge.
Again, I would like to push this merge before going back to look at these.

Because of the size of the merge changesets (34 MB) I have not posted them inline. Instead I have put them on the web at:

  http://people.linaro.org/~edward.nevill/b110/corba_diffs
  http://people.linaro.org/~edward.nevill/b110/hotspot_diffs
  http://people.linaro.org/~edward.nevill/b110/jaxp_diffs
  http://people.linaro.org/~edward.nevill/b110/jdk_diffs
  http://people.linaro.org/~edward.nevill/b110/jdk8_diffs
  http://people.linaro.org/~edward.nevill/b110/langtools_diffs
  http://people.linaro.org/~edward.nevill/b110/nashorn_diffs
  http://people.linaro.org/~edward.nevill/b110/jaxws_diffs

A gzip file containing all these diffs may be downloaded from:

  http://people.linaro.org/~edward.nevill/b110.tgz

The changesets below are the changes to the aarch64 specific code to bring it in line with the merge up to b110.

OK to push?

Ed.

--- cut here ---
# HG changeset patch
# User Edward Nevill edward.nevill@linaro.org
# Date 1381491246 -3600
#      Fri Oct 11 12:34:06 2013 +0100
# Node ID fb54b96dadd94c5a316ae6d23ff7157642ecfeeb
# Parent  eea63b68cd042c4c7cd0f23b74a105f350cda010
aarch64 specific changes for merge to jdk8-b110

diff -r eea63b68cd04 -r fb54b96dadd9 common/autoconf/build-aux/autoconf-config.guess
--- a/common/autoconf/build-aux/autoconf-config.guess	Fri Oct 11 12:03:26 2013 +0100
+++ b/common/autoconf/build-aux/autoconf-config.guess	Fri Oct 11 12:34:06 2013 +0100
@@ -1021,9 +1021,6 @@
     x86_64:Linux:*:*)
 	echo ${UNAME_MACHINE}-unknown-linux-gnu
 	exit ;;
-    aarch64:Linux:*:*)
-	echo aarch64-unknown-linux-gnu
-	exit ;;
     xtensa*:Linux:*:*)
 	echo ${UNAME_MACHINE}-unknown-linux-gnu
 	exit ;;
diff -r eea63b68cd04 -r fb54b96dadd9 common/autoconf/generated-configure.sh
--- a/common/autoconf/generated-configure.sh	Fri Oct 11 12:03:26 2013 +0100
+++ b/common/autoconf/generated-configure.sh	Fri Oct 11 12:34:06 2013 +0100
@@ -1,6 +1,6 @@
 #! /bin/sh
 # Guess values for system-dependent variables and create Makefiles.
-# Generated by GNU Autoconf 2.67 for OpenJDK jdk8.
+# Generated by GNU Autoconf 2.69 for OpenJDK jdk8.
 #
 # Report bugs to .
 #
@@ -242,11 +242,18 @@
 # We cannot yet assume a decent shell, so we have to provide a
 # neutralization value for shells without unset; and this also
 # works around shells that cannot unset nonexistent variables.
+# Preserve -v and -x to the replacement shell.
 BASH_ENV=/dev/null
 ENV=/dev/null
 (unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV
 case $- in # ((((
-  exec "$CONFIG_SHELL" "$as_myself" ${1+"$@"}
+  *v*x* | *x*v* ) as_opts=-vx ;;
+  *v* ) as_opts=-v ;;
+  *x* ) as_opts=-x ;;
+  * ) as_opts= ;;
+esac
+exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"}
+# Admittedly, this is quite paranoid, since all the known shells bail
 # out after a failed `exec'.
 $as_echo "$0: could not re-execute with $CONFIG_SHELL" >&2
 exit 255
@@ -654,7 +661,6 @@
 X_LIBS
 X_PRE_LIBS
 X_CFLAGS
-XMKMF
 CXXFLAGS_DEBUG_SYMBOLS
 CFLAGS_DEBUG_SYMBOLS
 ZIP_DEBUGINFO_FILES
@@ -1023,7 +1029,6 @@
 with_override_hotspot
 with_override_nashorn
 with_override_jdk
-with_override_nashorn
 with_import_hotspot
 with_msvcr_dll
 with_dxsdk
@@ -1071,7 +1076,6 @@
 OBJCFLAGS
 CPP
 CXXCPP
-XMKMF
 FREETYPE2_CFLAGS
 FREETYPE2_LIBS
 ALSA_CFLAGS
@@ -1778,7 +1782,6 @@
   --with-override-hotspot use this hotspot dir for the build
   --with-override-nashorn use this nashorn dir for the build
   --with-override-jdk     use this jdk dir for the build
-  --with-override-nashorn use this nashorn dir for the build
   --with-import-hotspot   import hotspot binaries from this jdk image or hotspot
                           build dist dir instead of building from source
@@ -1840,7 +1843,6 @@
   OBJCFLAGS   Objective C compiler flags
   CPP         C preprocessor
   CXXCPP      C++ preprocessor
-  XMKMF       Path to xmkmf, Makefile generator for X Window System
   FREETYPE2_CFLAGS
               C compiler flags for FREETYPE2, overriding pkg-config
   FREETYPE2_LIBS
@@ -1919,7 +1921,7 @@
 if $ac_init_version; then
   cat <<\_ACEOF
 OpenJDK configure jdk8
-generated by GNU Autoconf 2.67
+generated by GNU Autoconf 2.69
 Copyright (C) 2012 Free Software Foundation, Inc.

 This configure script is free software; the Free Software Foundation
@@ -2615,7 +2617,7 @@
 running configure, to aid debugging if configure makes a mistake.

 It was created by OpenJDK $as_me jdk8, which was
-generated by GNU Autoconf 2.67.  Invocation command line was
+generated by GNU Autoconf 2.69.  Invocation command line was

   $ $0 $@

@@ -2847,7 +2849,6 @@
 # Let the site file select an alternate cache file if it wants to.
 # Prefer an explicitly selected file to automatically selected ones.
 ac_site_file1=NONE
-ac_site_file2=NONE
 if test -n "$CONFIG_SITE"; then
   # We do not want a PATH search for config.site.
   case $CONFIG_SITE in #((
@@ -2855,14 +2856,8 @@
     */*) ac_site_file1=$CONFIG_SITE;;
     *)   ac_site_file1=./$CONFIG_SITE;;
   esac
-elif test "x$prefix" != xNONE; then
-  ac_site_file1=$prefix/share/config.site
-  ac_site_file2=$prefix/etc/config.site
-else
-  ac_site_file1=$ac_default_prefix/share/config.site
-  ac_site_file2=$ac_default_prefix/etc/config.site
-fi
-for ac_site_file in "$ac_site_file1" "$ac_site_file2"
+fi
+for ac_site_file in $ac_site_file1
 do
   test "x$ac_site_file" = xNONE && continue
   if test /dev/null != "$ac_site_file" && test -r "$ac_site_file"; then
@@ -3813,7 +3808,7 @@
 #CUSTOM_AUTOCONF_INCLUDE

 # Do not change or remove the following line, it is needed for consistency checks:
-DATE_WHEN_GENERATED=1379504921
+DATE_WHEN_GENERATED=1381411006

 ###############################################################################
 #
@@ -5202,7 +5197,6 @@


-if test "${ac_cv_path_THEPWDCMD+set}" = set; then :
 for ac_prog in rm
 do
   # Extract the first word of "$ac_prog", so it can be a program name with args.
@@ -6677,8 +6671,9 @@
 # The aliases save the names the user supplied, while $host etc.
 # will get canonicalized.
 test -n "$target_alias" &&
-  test "$program_prefix$program_suffix$program_transform_name" = \
-    NONENONEs,x,x, &&
+  test "$target_alias" != "$host_alias" &&
+  test "$program_prefix$program_suffix$program_transform_name" = \
+    NONENONEs,x,x, &&
   program_prefix=${target_alias}-

 # Figure out the build and target systems.
 # Note that in autoconf terminology, "build" is obvious, but "target"
@@ -16225,6 +16220,8 @@
   withval=$with_override_jdk;
 fi

+
+
 # Check whether --with-override-nashorn was given.
 if test "${with_override_nashorn+set}" = set; then :
   withval=$with_override_nashorn;
@@ -29384,6 +29381,9 @@
       s390)
         ZERO_ARCHFLAG="-m31"
         ;;
+      aarch64)
+        ZERO_ARCHFLAG=""
+        ;;
       *)
         ZERO_ARCHFLAG="-m${OPENJDK_TARGET_CPU_BITS}"
     esac
@@ -29759,85 +29759,9 @@
 else
   # One or both of the vars are not set, and there is no cached value.
 ac_x_includes=no ac_x_libraries=no
-rm -f -r conftest.dir
-if mkdir conftest.dir; then
-  cd conftest.dir
-  cat >Imakefile <<'_ACEOF'
-incroot:
-	@echo incroot='${INCROOT}'
-usrlibdir:
-	@echo usrlibdir='${USRLIBDIR}'
-libdir:
-	@echo libdir='${LIBDIR}'
-_ACEOF
-  if (export CC; ${XMKMF-xmkmf}) >/dev/null 2>/dev/null && test -f Makefile; then
-    # GNU make sometimes prints "make[1]: Entering ...", which would confuse us.
-    for ac_var in incroot usrlibdir libdir; do
-      eval "ac_im_$ac_var=\`\${MAKE-make} $ac_var 2>/dev/null | sed -n 's/^$ac_var=//p'\`"
-    done
-    # Open Windows xmkmf reportedly sets LIBDIR instead of USRLIBDIR.
-    for ac_extension in a so sl dylib la dll; do
-      if test ! -f "$ac_im_usrlibdir/libX11.$ac_extension" &&
-	 test -f "$ac_im_libdir/libX11.$ac_extension"; then
-	ac_im_usrlibdir=$ac_im_libdir; break
-      fi
-    done
-    # Screen out bogus values from the imake configuration.  They are
-    # bogus both because they are the default anyway, and because
-    # using them would break gcc on systems where it needs fixed includes.
-    case $ac_im_incroot in
-	/usr/include) ac_x_includes= ;;
-	*) test -f "$ac_im_incroot/X11/Xos.h" && ac_x_includes=$ac_im_incroot;;
-    esac
-    case $ac_im_usrlibdir in
-	/usr/lib | /usr/lib64 | /lib | /lib64) ;;
-	*) test -d "$ac_im_usrlibdir" && ac_x_libraries=$ac_im_usrlibdir ;;
-    esac
-  fi
-  cd ..
-  rm -f -r conftest.dir
-fi
-
 # Standard set of common directories for X headers.
 # Check X11 before X11Rn because it is often a symlink to the current release.
-ac_x_header_dirs='
-/usr/X11/include
-/usr/X11R7/include
-/usr/X11R6/include
-/usr/X11R5/include
-/usr/X11R4/include
-
-/usr/include/X11
-/usr/include/X11R7
-/usr/include/X11R6
-/usr/include/X11R5
-/usr/include/X11R4
-
-/usr/local/X11/include
-/usr/local/X11R7/include
-/usr/local/X11R6/include
-/usr/local/X11R5/include
-/usr/local/X11R4/include
-
-/usr/local/include/X11
-/usr/local/include/X11R7
-/usr/local/include/X11R6
-/usr/local/include/X11R5
-/usr/local/include/X11R4
-
-/usr/X386/include
-/usr/x386/include
-/usr/XFree86/include/X11
-
-/usr/include
-/usr/local/include
-/usr/unsupported/include
-/usr/athena/include
-/usr/local/x11r5/include
-/usr/lpp/Xamples/include
-
-/usr/openwin/include
-/usr/openwin/share/include'
+ac_x_header_dirs=''

 if test "$ac_x_includes" = no; then
   # Guess where to find include files, by looking for Xlib.h.
@@ -33751,7 +33675,7 @@
 # values after options handling.
 ac_log="
 This file was extended by OpenJDK $as_me jdk8, which was
-generated by GNU Autoconf 2.67.  Invocation command line was
+generated by GNU Autoconf 2.69.  Invocation command line was

   CONFIG_FILES    = $CONFIG_FILES
   CONFIG_HEADERS  = $CONFIG_HEADERS
@@ -33814,7 +33738,7 @@
 ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
 ac_cs_version="\\
 OpenJDK config.status jdk8
-configured by $0, generated by GNU Autoconf 2.67,
+configured by $0, generated by GNU Autoconf 2.69,
   with options \\"\$ac_cs_config\\"

 Copyright (C) 2012 Free Software Foundation, Inc.
--- cut here ---

--- cut here ---
# HG changeset patch
# User Edward Nevill edward.nevill@linaro.org
# Date 1381491589 -3600
#      Fri Oct 11 12:39:49 2013 +0100
# Node ID 0b5e450b23211722398beb850e34f144809152e7
# Parent  a84cf0dd740c1953eca84ae630b5bf18343076ff
aarch64 specific changes for merge to jdk8-b110

diff -r a84cf0dd740c -r 0b5e450b2321 make/linux/makefiles/aarch64.make
--- a/make/linux/makefiles/aarch64.make	Fri Oct 11 12:06:22 2013 +0100
+++ b/make/linux/makefiles/aarch64.make	Fri Oct 11 12:39:49 2013 +0100
@@ -30,7 +30,7 @@
 CFLAGS += -DVM_LITTLE_ENDIAN

 ifeq ($(BUILTIN_SIM), true)
-CFLAGS += -DBUILTIN_SIM
+CFLAGS += -DBUILTIN_SIM -DALLOW_OPERATOR_NEW_USAGE
 endif

 # CFLAGS += -D_LP64=1
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/aarch64.ad
--- a/src/cpu/aarch64/vm/aarch64.ad	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/aarch64.ad	Fri Oct 11 12:39:49 2013 +0100
@@ -1408,7 +1408,7 @@
 void MachUEPNode::format(PhaseRegAlloc* ra_, outputStream* st) const
 {
   st->print_cr("# MachUEPNode");
-  if (UseCompressedKlassPointers) {
+  if (UseCompressedClassPointers) {
     st->print_cr("\tldrw rscratch1, j_rarg0 + oopDesc::klass_offset_in_bytes()]\t# compressed klass");
     if (Universe::narrow_klass_shift() != 0) {
       st->print_cr("\tdecode_klass_not_null rscratch1, rscratch1");
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/c1_CodeStubs_aarch64.cpp
--- a/src/cpu/aarch64/vm/c1_CodeStubs_aarch64.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/c1_CodeStubs_aarch64.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -418,6 +418,10 @@
       target = Runtime1::entry_for(Runtime1::load_mirror_patching_id);
       reloc_type = relocInfo::oop_type;
       break;
+    case load_appendix_id:
+      target = Runtime1::entry_for(Runtime1::load_appendix_patching_id);
+      reloc_type = relocInfo::oop_type;
+      break;
     default: ShouldNotReachHere();
   }
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/c1_LIRAssembler_aarch64.cpp
--- a/src/cpu/aarch64/vm/c1_LIRAssembler_aarch64.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/c1_LIRAssembler_aarch64.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -294,7 +294,7 @@
   Register receiver = FrameMap::receiver_opr->as_register();
   Register ic_klass = IC_Klass;
   const int ic_cmp_size = 4 * 4;
-  const bool do_post_padding = VerifyOops || UseCompressedKlassPointers;
+  const bool do_post_padding = VerifyOops || UseCompressedClassPointers;
   if (!do_post_padding) {
     // insert some nops so that the verified entry point is aligned on CodeEntryAlignment
     while ((__ offset() + ic_cmp_size) % CodeEntryAlignment != 0) {
@@ -337,7 +337,8 @@
 void LIR_Assembler::jobject2reg_with_patching(Register reg, CodeEmitInfo *info) {
   // Allocate a new index in table to hold the object once it's been patched
   int oop_index = __ oop_recorder()->allocate_oop_index(NULL);
-  PatchingStub* patch = new PatchingStub(_masm, PatchingStub::load_mirror_id, oop_index);
+//  PatchingStub* patch = new PatchingStub(_masm, PatchingStub::load_mirror_id, oop_index);
+  PatchingStub* patch = new PatchingStub(_masm, patching_id(info), oop_index);

   RelocationHolder rspec = oop_Relocation::spec(oop_index);
   address const_ptr = int_constant(-1);
@@ -985,7 +986,7 @@
     // FIXME: OMG this is a horrible kludge.  Any offset from an
     // address that matches klass_offset_in_bytes() will be loaded
     // as a word, not a long.
-    if (UseCompressedKlassPointers && addr->disp() == oopDesc::klass_offset_in_bytes()) {
+    if (UseCompressedClassPointers && addr->disp() == oopDesc::klass_offset_in_bytes()) {
       __ ldrw(dest->as_register(), as_Address(from_addr));
     } else {
       __ ldr(dest->as_register(), as_Address(from_addr));
@@ -1032,7 +1033,7 @@
     __ verify_oop(dest->as_register());
   } else if (type == T_ADDRESS && addr->disp() == oopDesc::klass_offset_in_bytes()) {
 #ifdef _LP64
-    if (UseCompressedKlassPointers) {
+    if (UseCompressedClassPointers) {
       __ decode_klass_not_null(dest->as_register());
     }
 #endif
@@ -1350,7 +1351,7 @@
     } else if (obj == klass_RInfo) {
       klass_RInfo = dst;
     }
-    if (k->is_loaded() && !UseCompressedKlassPointers) {
+    if (k->is_loaded() && !UseCompressedClassPointers) {
       select_different_registers(obj, dst, k_RInfo, klass_RInfo);
     } else {
       Rtmp1 = op->tmp3()->as_register();
@@ -1358,14 +1359,6 @@
     }

     assert_different_registers(obj, k_RInfo, klass_RInfo);
-    if (!k->is_loaded()) {
-      klass2reg_with_patching(k_RInfo, op->info_for_patch());
-    } else {
-#ifdef _LP64
-      __ mov_metadata(k_RInfo, k->constant_encoding());
-#endif // _LP64
-    }
-    assert(obj != k_RInfo, "must be different");

     if (op->should_profile()) {
       Label not_null;
@@ -1384,6 +1377,13 @@
       __ cbz(obj, *obj_is_null);
     }

+    if (!k->is_loaded()) {
+      klass2reg_with_patching(k_RInfo, op->info_for_patch());
+    } else {
+#ifdef _LP64
+      __ mov_metadata(k_RInfo, k->constant_encoding());
+#endif // _LP64
+    }
     __ verify_oop(obj);

     if (op->fast_check()) {
@@ -2295,7 +2295,7 @@
   // We don't know the array types are compatible
   if (basic_type != T_OBJECT) {
     // Simple test for basic type arrays
-    if (UseCompressedKlassPointers) {
+    if (UseCompressedClassPointers) {
       __ ldrw(tmp, src_klass_addr);
       __ ldrw(rscratch1, dst_klass_addr);
       __ cmpw(tmp, rscratch1);
@@ -2426,14 +2426,14 @@
     Label known_ok, halt;
     __ mov_metadata(tmp, default_type->constant_encoding());
 #ifdef _LP64
-    if (UseCompressedKlassPointers) {
+    if (UseCompressedClassPointers) {
       __ encode_klass_not_null(tmp);
     }
 #endif

     if (basic_type != T_OBJECT) {
-      if (UseCompressedKlassPointers) {
+      if (UseCompressedClassPointers) {
         __ ldrw(rscratch1, dst_klass_addr);
         __ cmpw(tmp, rscratch1);
       } else {
@@ -2441,7 +2441,7 @@
         __ cmp(tmp, rscratch1);
       }
       __ br(Assembler::NE, halt);
-      if (UseCompressedKlassPointers) {
+      if (UseCompressedClassPointers) {
         __ ldrw(rscratch1, src_klass_addr);
         __ cmpw(tmp, rscratch1);
       } else {
@@ -2450,7 +2450,7 @@
       }
       __ br(Assembler::EQ, known_ok);
     } else {
-      if (UseCompressedKlassPointers) {
+      if (UseCompressedClassPointers) {
        __ ldrw(rscratch1, dst_klass_addr);
        __ cmpw(tmp, rscratch1);
       } else {
@@ -2614,6 +2614,9 @@
   __ lea(dst->as_register(), frame_map()->address_for_monitor_lock(monitor_no));
 }

+void LIR_Assembler::emit_updatecrc32(LIR_OpUpdateCRC32* op) {
+  fatal("CRC32 intrinsic is not implemented on this platform");
+}

 void LIR_Assembler::align_backward_branch_target() {
 }
@@ -2828,7 +2831,7 @@
     }
     __ verify_oop(dest->as_register());
   } else if (type == T_ADDRESS && from_addr->disp() == oopDesc::klass_offset_in_bytes()) {
-    if (UseCompressedKlassPointers) {
+    if (UseCompressedClassPointers) {
       __ decode_klass_not_null(dest->as_register());
     }
   }
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/c1_LIRGenerator_aarch64.cpp
--- a/src/cpu/aarch64/vm/c1_LIRGenerator_aarch64.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/c1_LIRGenerator_aarch64.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -957,6 +957,9 @@
   __ arraycopy(src.result(), src_pos.result(), dst.result(), dst_pos.result(), length.result(), tmp, expected_type, flags, info); // does add_safepoint
 }

+void LIRGenerator::do_update_CRC32(Intrinsic* x) {
+  fatal("CRC32 intrinsic is not implemented on this platform");
+}

 // _i2l, _i2f, _i2d, _l2i, _l2f, _l2d, _f2i, _f2l, _f2d, _d2i, _d2l, _d2f
 // _i2b, _i2c, _i2s
@@ -1157,7 +1160,7 @@
   }
   LIR_Opr reg = rlock_result(x);
   LIR_Opr tmp3 = LIR_OprFact::illegalOpr;
-  if (!x->klass()->is_loaded() || UseCompressedKlassPointers) {
+  if (!x->klass()->is_loaded() || UseCompressedClassPointers) {
     tmp3 = new_register(objectType);
   }
   __ checkcast(reg, obj.result(), x->klass(),
@@ -1178,7 +1181,7 @@
   }
   obj.load_item();
   LIR_Opr tmp3 = LIR_OprFact::illegalOpr;
-  if (!x->klass()->is_loaded() || UseCompressedKlassPointers) {
+  if (!x->klass()->is_loaded() || UseCompressedClassPointers) {
     tmp3 = new_register(objectType);
   }
   __ instanceof(reg, obj.result(), x->klass(),
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/c1_MacroAssembler_aarch64.cpp
--- a/src/cpu/aarch64/vm/c1_MacroAssembler_aarch64.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/c1_MacroAssembler_aarch64.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -188,7 +188,7 @@
   }
   str(t1, Address(obj, oopDesc::mark_offset_in_bytes()));

-  if (UseCompressedKlassPointers) { // Take care not to kill klass
+  if (UseCompressedClassPointers) { // Take care not to kill klass
     encode_klass_not_null(t1, klass);
     strw(t1, Address(obj, oopDesc::klass_offset_in_bytes()));
   } else {
@@ -197,7 +197,7 @@
   if (len->is_valid()) {
     strw(len, Address(obj, arrayOopDesc::length_offset_in_bytes()));
-  } else if (UseCompressedKlassPointers) {
+  } else if (UseCompressedClassPointers) {
     store_klass_gap(obj, zr);
   }
 }
@@ -432,7 +432,7 @@
   b(RuntimeAddress(SharedRuntime::get_ic_miss_stub()));
   bind(dont);
   const int ic_cmp_size = 4 * 4;
-  assert(UseCompressedKlassPointers || offset() - start_offset == ic_cmp_size, "check alignment in emit_method_entry");
+  assert(UseCompressedClassPointers || offset() - start_offset == ic_cmp_size, "check alignment in emit_method_entry");
 }
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/c1_Runtime1_aarch64.cpp
--- a/src/cpu/aarch64/vm/c1_Runtime1_aarch64.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/c1_Runtime1_aarch64.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -1113,6 +1113,13 @@
       }
       break;

+    case load_appendix_patching_id:
+      { StubFrame f(sasm, "load_appendix_patching", dont_gc_arguments);
+        // we should set up register map
+        oop_maps = generate_patching(sasm, CAST_FROM_FN_PTR(address, move_appendix_patching));
+      }
+      break;
+
     case handle_exception_nofpu_id:
     case handle_exception_id:
       { StubFrame f(sasm, "handle_exception", dont_gc_arguments);
@@ -1179,10 +1186,10 @@
   Bytecodes::Code code = field_access.code();

   // We must load class, initialize class and resolve the field
-  FieldAccessInfo result; // initialize class if needed
+  fieldDescriptor result; // initialize class if needed
   constantPoolHandle constants(THREAD, caller->constants());
-  LinkResolver::resolve_field(result, constants, field_access.index(), Bytecodes::java_code(code), false, CHECK_NULL);
-  return result.klass()();
+  LinkResolver::resolve_field_access(result, constants, field_access.index(), Bytecodes::java_code(code), CHECK_NULL);
+  return result.field_holder();
 }

@@ -1252,7 +1259,7 @@
   KlassHandle init_klass(THREAD, NULL); // klass needed by load_klass_patching code
   KlassHandle load_klass(THREAD, NULL); // klass needed by load_klass_patching code
   Handle mirror(THREAD, NULL);          // oop needed by load_mirror_patching code
-  FieldAccessInfo result; // initialize class if needed
+  fieldDescriptor result; // initialize class if needed
   bool load_klass_or_mirror_patch_id =
     (stub_id == Runtime1::load_klass_patching_id ||
      stub_id == Runtime1::load_mirror_patching_id);
@@ -1260,11 +1267,11 @@
     if (stub_id == Runtime1::access_field_patching_id) {

       Bytecode_field field_access(caller_method, bci);
-      FieldAccessInfo result; // initialize class if needed
+      fieldDescriptor result; // initialize class if needed
       Bytecodes::Code code = field_access.code();
       constantPoolHandle constants(THREAD, caller_method->constants());
-      LinkResolver::resolve_field(result, constants, field_access.index(), Bytecodes::java_code(code), false, CHECK);
-      patch_field_offset = result.field_offset();
+      LinkResolver::resolve_field_access(result, constants, field_access.index(), Bytecodes::java_code(code), CHECK);
+      patch_field_offset = result.offset();

       // If we're patching a field which is volatile then at compile it
       // must not have been known to be volatile, so the generated code
@@ -1495,6 +1502,25 @@
   return caller_is_deopted();
 }

+int Runtime1::move_appendix_patching(JavaThread* thread) {
+//
+// NOTE: we are still in Java
+//
+  Thread* THREAD = thread;
+  debug_only(NoHandleMark nhm;)
+  {
+    // Enter VM mode
+
+    ResetNoHandleMark rnhm;
+    patch_code(thread, load_appendix_patching_id);
+  }
+  // Back in JAVA, use no oops DON'T safepoint
+
+  // Return true if calling code is deoptimized
+
+  return caller_is_deopted();
+}
+
 int Runtime1::move_klass_patching(JavaThread* thread) {
 //
 // NOTE: we are still in Java
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/c1_globals_aarch64.hpp
--- a/src/cpu/aarch64/vm/c1_globals_aarch64.hpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/c1_globals_aarch64.hpp	Fri Oct 11 12:39:49 2013 +0100
@@ -59,8 +59,9 @@
 define_pd_global(intx, ReservedCodeCacheSize,  32*M );
 define_pd_global(bool, ProfileInterpreter,     false);
 define_pd_global(intx, CodeCacheExpansionSize, 32*K );
-define_pd_global(uintx,CodeCacheMinBlockLength, 1);
-define_pd_global(uintx,MetaspaceSize,          12*M );
+define_pd_global(uintx, CodeCacheMinBlockLength, 1);
+define_pd_global(uintx, CodeCacheMinimumUseSpace, 400*K);
+define_pd_global(uintx, MetaspaceSize,          12*M );
 define_pd_global(bool, NeverActAsServerClassMachine, true );
 define_pd_global(uint64_t,MaxRAM,              1ULL*G);
 define_pd_global(bool, CICompileOSR,           true );
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/c2_globals_aarch64.hpp
--- a/src/cpu/aarch64/vm/c2_globals_aarch64.hpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/c2_globals_aarch64.hpp	Fri Oct 11 12:39:49 2013 +0100
@@ -46,7 +46,7 @@
 #else
 define_pd_global(bool, ProfileInterpreter,   true);
 #endif // CC_INTERP
-define_pd_global(bool, TieredCompilation,    false);
+define_pd_global(bool, TieredCompilation,    trueInTiered);

 define_pd_global(intx, CompileThreshold,     10000);
 define_pd_global(intx, BackEdgeThreshold,    100000);
@@ -54,6 +54,7 @@
 define_pd_global(intx, ConditionalMoveLimit, 3);
 define_pd_global(intx, FLOATPRESSURE,        30);
 define_pd_global(intx, FreqInlineSize,       325);
+define_pd_global(intx, MinJumpTableSize,     10);
 define_pd_global(intx, INTPRESSURE,          23);
 define_pd_global(intx, InteriorEntryAlignment, 16);
 define_pd_global(intx, NewSizeThreadIncrease, ScaleForWordSize(4*K));
@@ -74,7 +75,8 @@
 define_pd_global(bool, OptoBundling,         false);

 define_pd_global(intx, ReservedCodeCacheSize, 48*M);
-define_pd_global(uintx,CodeCacheMinBlockLength, 4);
+define_pd_global(uintx, CodeCacheMinBlockLength, 4);
+define_pd_global(uintx, CodeCacheMinimumUseSpace, 400*K);

 // Heap related flags
 define_pd_global(uintx,MetaspaceSize, ScaleForWordSize(16*M));
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/cppInterpreterGenerator_aarch64.hpp
--- a/src/cpu/aarch64/vm/cppInterpreterGenerator_aarch64.hpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/cppInterpreterGenerator_aarch64.hpp	Fri Oct 11 12:39:49 2013 +0100
@@ -46,10 +46,12 @@
   void generate_more_monitors();
   void generate_deopt_handling();

+#if 0
   address generate_interpreter_frame_manager(bool synchronized); // C++ interpreter only
   void generate_compute_interpreter_state(const Register state,
                                           const Register prev_state,
                                           const Register sender_sp,
                                           bool native); // C++ interpreter only
+#endif

 #endif // CPU_AARCH64_VM_CPPINTERPRETERGENERATOR_AARCH64_HPP
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/frame_aarch64.cpp
--- a/src/cpu/aarch64/vm/frame_aarch64.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/frame_aarch64.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -35,6 +35,7 @@
 #include "runtime/handles.inline.hpp"
 #include "runtime/javaCalls.hpp"
 #include "runtime/monitorChunk.hpp"
+#include "runtime/os.hpp"
 #include "runtime/signature.hpp"
 #include "runtime/stubCodeGenerator.hpp"
 #include "runtime/stubRoutines.hpp"
@@ -56,16 +57,22 @@
   address sp = (address)_sp;
   address fp = (address)_fp;
   address unextended_sp = (address)_unextended_sp;
-  // sp must be within the stack
-  bool sp_safe = (sp <= thread->stack_base()) &&
-                 (sp >= thread->stack_base() - thread->stack_size());
+
+  // consider stack guards when trying to determine "safe" stack pointers
+  static size_t stack_guard_size = os::uses_stack_guard_pages() ? (StackYellowPages + StackRedPages) * os::vm_page_size() : 0;
+  size_t usable_stack_size = thread->stack_size() - stack_guard_size;
+
+  // sp must be within the usable part of the stack (not in guards)
+  bool sp_safe = (sp < thread->stack_base()) &&
+                 (sp >= thread->stack_base() - usable_stack_size);
+

   if (!sp_safe) {
     return false;
   }

   // unextended sp must be within the stack and above or equal sp
-  bool unextended_sp_safe = (unextended_sp <= thread->stack_base()) &&
+  bool unextended_sp_safe = (unextended_sp < thread->stack_base()) &&
                             (unextended_sp >= sp);

   if (!unextended_sp_safe) {
@@ -73,7 +80,8 @@
   }

   // an fp must be within the stack and above (but not equal) sp
-  bool fp_safe = (fp <= thread->stack_base()) && (fp > sp);
+  // second evaluation on fp+ is added to handle situation where fp is -1
+  bool fp_safe = (fp < thread->stack_base() && (fp > sp) && (((fp + (return_addr_offset * sizeof(void*))) < thread->stack_base())));

   // We know sp/unextended_sp are safe only fp is questionable here

@@ -88,6 +96,13 @@
     // other generic buffer blobs are more problematic so we just assume they are
     // ok. adapter blobs never have a frame complete and are never ok.
+    // check for a valid frame_size, otherwise we are unlikely to get a valid sender_pc
+
+    if (!Interpreter::contains(_pc) && _cb->frame_size() <= 0) {
+      //assert(0, "Invalid frame_size");
+      return false;
+    }
+
     if (!_cb->is_frame_complete_at(_pc)) {
       if (_cb->is_nmethod() || _cb->is_adapter_blob() || _cb->is_runtime_stub()) {
         return false;
@@ -109,7 +124,7 @@

       address jcw = (address)entry_frame_call_wrapper();

-      bool jcw_safe = (jcw <= thread->stack_base()) && ( jcw > fp);
+      bool jcw_safe = (jcw < thread->stack_base()) && ( jcw > fp);

       return jcw_safe;

@@ -135,12 +150,6 @@
       sender_pc = (address) *(sender_sp-1);
     }

-    // We must always be able to find a recognizable pc
-    CodeBlob* sender_blob = CodeCache::find_blob_unsafe(sender_pc);
-    if (sender_pc == NULL || sender_blob == NULL) {
-      return false;
-    }
-
     // If the potential sender is the interpreter then we can do some more checking
     if (Interpreter::contains(sender_pc)) {

@@ -150,7 +159,7 @@
       // is really a frame pointer.

       intptr_t *saved_fp = (intptr_t*)*(sender_sp - frame::sender_sp_offset);
-      bool saved_fp_safe = ((address)saved_fp <= thread->stack_base()) && (saved_fp > sender_sp);
+      bool saved_fp_safe = ((address)saved_fp < thread->stack_base()) && (saved_fp > sender_sp);

       if (!saved_fp_safe) {
         return false;
@@ -164,6 +173,17 @@
     }

+    // We must always be able to find a recognizable pc
+    CodeBlob* sender_blob = CodeCache::find_blob_unsafe(sender_pc);
+    if (sender_pc == NULL || sender_blob == NULL) {
+      return false;
+    }
+
+    // Could be a zombie method
+    if (sender_blob->is_zombie() || sender_blob->is_unloaded()) {
+      return false;
+    }
+
     // Could just be some random pointer within the codeBlob
     if (!sender_blob->code_contains(sender_pc)) {
       return false;
@@ -175,10 +195,9 @@
     }

     // Could be the call_stub
     if (StubRoutines::returns_to_call_stub(sender_pc)) {
       intptr_t *saved_fp = (intptr_t*)*(sender_sp - frame::sender_sp_offset);
-      bool saved_fp_safe = ((address)saved_fp <= thread->stack_base()) && (saved_fp > sender_sp);
+      bool saved_fp_safe = ((address)saved_fp < thread->stack_base()) && (saved_fp > sender_sp);

       if (!saved_fp_safe) {
         return false;
@@ -191,15 +210,24 @@
       // Validate the JavaCallWrapper an entry frame must have
       address jcw = (address)sender.entry_frame_call_wrapper();

-      bool jcw_safe = (jcw <= thread->stack_base()) && ( jcw > (address)sender.fp());
+      bool jcw_safe = (jcw < thread->stack_base()) && ( jcw > (address)sender.fp());

       return jcw_safe;
     }

-    // If the frame size is 0 something is bad because every nmethod has a non-zero frame size
+    if (sender_blob->is_nmethod()) {
+      nmethod* nm = sender_blob->as_nmethod_or_null();
+      if (nm != NULL) {
+        if (nm->is_deopt_mh_entry(sender_pc) || nm->is_deopt_entry(sender_pc)) {
+          return false;
+        }
+      }
+    }
+
+    // If the frame size is 0 something (or less) is bad because every nmethod has a non-zero frame size
     // because the return address counts against the callee's frame.
-    if (sender_blob->frame_size() == 0) {
+    if (sender_blob->frame_size() <= 0) {
       assert(!sender_blob->is_nmethod(), "should count return address at least");
       return false;
     }
@@ -209,7 +237,9 @@
     // should not be anything but the call stub (already covered), the interpreter (already covered)
     // or an nmethod.

-    assert(sender_blob->is_nmethod(), "Impossible call chain");
+    if (!sender_blob->is_nmethod()) {
+      return false;
+    }

     // Could put some more validation for the potential non-interpreted sender
     // frame we'd create by calling sender if I could think of any. Wait for next crash in forte...
@@ -238,8 +268,6 @@
 }

-
-
 void frame::patch_pc(Thread* thread, address pc) {
   address* pc_addr = &(((address*) sp())[-1]);
   if (TracePcPatching) {
@@ -559,11 +587,9 @@
     return false;
   }

-  // validate constantPoolCacheOop
-
+  // validate constantPoolCache*
   ConstantPoolCache* cp = *interpreter_frame_cache_addr();
-
-  if (cp == NULL || !cp->is_metadata()) return false;
+  if (cp == NULL || !cp->is_metaspace_object()) return false;

   // validate locals
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/frame_aarch64.inline.hpp
--- a/src/cpu/aarch64/vm/frame_aarch64.inline.hpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/frame_aarch64.inline.hpp	Fri Oct 11 12:39:49 2013 +0100
@@ -294,8 +294,8 @@

 // Entry frames

-inline JavaCallWrapper* frame::entry_frame_call_wrapper() const {
-  return (JavaCallWrapper*)at(entry_frame_call_wrapper_offset);
+inline JavaCallWrapper** frame::entry_frame_call_wrapper_addr() const {
+  return (JavaCallWrapper**)addr_at(entry_frame_call_wrapper_offset);
 }
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/globals_aarch64.hpp
--- a/src/cpu/aarch64/vm/globals_aarch64.hpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/globals_aarch64.hpp	Fri Oct 11 12:39:49 2013 +0100
@@ -69,7 +69,7 @@
 define_pd_global(bool, UseMembar, true);

 // GC Ergo Flags
-define_pd_global(intx, CMSYoungGenPerWorker, 64*M);  // default max size of CMS young gen, per GC worker thread
+define_pd_global(uintx, CMSYoungGenPerWorker, 64*M);  // default max size of CMS young gen, per GC worker thread

 // avoid biased locking while we are bootstrapping the aarch64 build
 define_pd_global(bool, UseBiasedLocking, false);
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/macroAssembler_aarch64.cpp
--- a/src/cpu/aarch64/vm/macroAssembler_aarch64.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/macroAssembler_aarch64.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -339,7 +339,7 @@
   assert(java_thread == rthread, "unexpected register");
 #ifdef ASSERT
   // TraceBytecodes does not use r12 but saves it over the call, so don't verify
-  // if ((UseCompressedOops || UseCompressedKlassPointers) && !TraceBytecodes) verify_heapbase("call_VM_base: heap base corrupted?");
+  // if ((UseCompressedOops || UseCompressedClassPointers) && !TraceBytecodes) verify_heapbase("call_VM_base: heap base corrupted?");
 #endif // ASSERT

   assert(java_thread != oop_result, "cannot use the same register for java_thread & oop_result");
@@ -1550,7 +1550,7 @@
 #ifdef ASSERT
 void MacroAssembler::verify_heapbase(const char* msg) {
 #if 0
-  assert (UseCompressedOops || UseCompressedKlassPointers, "should be compressed");
+  assert (UseCompressedOops || UseCompressedClassPointers, "should be compressed");
   assert (Universe::heap() != NULL, "java heap should be initialized");
   if (CheckCompressedOops) {
     Label ok;
@@ -1641,79 +1641,6 @@
   }
 }

-#ifdef ASSERT
-static Register spill_registers[] = {
-  rheapbase,
-  rcpool,
-  rmonitors,
-  rlocals,
-  rmethod
-};
-
-#define spill_msg(_reg) \
-  "register " _reg " invalid after call"
-
-static const char *spill_error_msgs[] = {
-  spill_msg("rheapbase"),
-  spill_msg("rcpool"),
-  spill_msg("rmonitors"),
-  spill_msg("rlocals"),
-  spill_msg("rmethod")
-};
-
-#define SPILL_FRAME_COUNT (sizeof(spill_registers)/sizeof(spill_registers[0]))
-
-#define SPILL_FRAME_BYTESIZE (SPILL_FRAME_COUNT * wordSize)
-
-void MacroAssembler::spill(Register rscratcha, Register rscratchb)
-{
-#if 0
-  Label bumped;
-  // load and bump spill pointer
-  ldr(rscratcha, Address(rthread, JavaThread::spill_stack_offset()));
-  sub(rscratcha, rscratcha, SPILL_FRAME_BYTESIZE);
-  // check for overflow
-  ldr(rscratchb, Address(rthread, JavaThread::spill_stack_limit_offset()));
-  cmp(rscratcha, rscratchb);
-  br(Assembler::GE, bumped);
-  stop("oops! ran out of register spill area");
-  // spill registers
-  bind(bumped);
-  for (int i = 0; i < (int)SPILL_FRAME_COUNT; i++) {
-    Register r = spill_registers[i];
-    assert(r != rscratcha && r != rscratchb, "invalid scratch reg in spill");
-    str(r, Address(rscratcha, (i * wordSize)));
-  }
-  // store new spill pointer
-  str(rscratcha, (Address(rthread, JavaThread::spill_stack_offset())));
-#endif
-}
-
-void MacroAssembler::spillcheck(Register rscratcha, Register rscratchb)
-{
-#if 0
-  // load spill pointer
-  ldr(rscratcha, (Address(rthread, JavaThread::spill_stack_offset())));
-  // check registers
-  for (int i = 0; i < (int)SPILL_FRAME_COUNT; i++) {
-    Register r = spill_registers[i];
-    assert(r != rscratcha && r != rscratchb, "invalid scratch reg in spillcheck");
-    // native code is allowed to modify rcpool
-    Label valid;
-    ldr(rscratchb, Address(rscratcha, (i * wordSize)));
-    cmp(r, rscratchb);
-    br(Assembler::EQ, valid);
-    stop(spill_error_msgs[i]);
-    bind(valid);
-  }
-  // decrement and store new spill pointer
-  add(rscratcha, rscratcha, SPILL_FRAME_BYTESIZE);
-  str(rscratcha, Address(rthread, JavaThread::spill_stack_offset()));
-#endif
-}
-
-#endif // ASSERT
-
 void MacroAssembler::reinit_heapbase()
 {
   if (UseCompressedOops) {
@@ -1994,7 +1921,7 @@
 }

 void MacroAssembler::load_klass(Register dst, Register src) {
-  if (UseCompressedKlassPointers) {
+  if (UseCompressedClassPointers) {
     ldrw(dst, Address(src, oopDesc::klass_offset_in_bytes()));
     decode_klass_not_null(dst);
   } else {
@@ -2003,7 +1930,7 @@
 }

 void MacroAssembler::store_klass(Register dst, Register src) {
-  if (UseCompressedKlassPointers) {
+  if (UseCompressedClassPointers) {
     encode_klass_not_null(src);
     strw(src, Address(dst, oopDesc::klass_offset_in_bytes()));
   } else {
@@ -2012,7 +1939,7 @@
 }

 void MacroAssembler::store_klass_gap(Register dst, Register src) {
-  if (UseCompressedKlassPointers) {
+  if (UseCompressedClassPointers) {
     // Store to klass gap in destination
     str(src, Address(dst, oopDesc::klass_gap_offset_in_bytes()));
   }
 }
@@ -2152,7 +2079,6 @@
 }

 void MacroAssembler::encode_klass_not_null(Register r) {
-  assert(Metaspace::is_initialized(), "metaspace should be initialized");
 #ifdef ASSERT
   verify_heapbase("MacroAssembler::encode_klass_not_null: heap base corrupted?");
 #endif
@@ -2166,7 +2092,6 @@
 }

 void MacroAssembler::encode_klass_not_null(Register dst, Register src) {
-  assert(Metaspace::is_initialized(), "metaspace should be initialized");
 #ifdef ASSERT
   verify_heapbase("MacroAssembler::encode_klass_not_null2: heap base corrupted?");
 #endif
@@ -2183,9 +2108,8 @@
 }

 void MacroAssembler::decode_klass_not_null(Register r) {
-  assert(Metaspace::is_initialized(), "metaspace should be initialized");
   // Note: it will change flags
-  assert (UseCompressedKlassPointers, "should only be used for compressed headers");
+  assert (UseCompressedClassPointers, "should only be used for compressed headers");
   // Cannot assert, unverified entry point counts instructions (see .ad file)
   // vtableStubs also counts instructions in pd_code_size_limit.
   // Also do not verify_oop as this is called by verify_oop.
@@ -2202,9 +2126,8 @@
 }

 void MacroAssembler::decode_klass_not_null(Register dst, Register src) {
-  assert(Metaspace::is_initialized(), "metaspace should be initialized");
   // Note: it will change flags
-  assert (UseCompressedKlassPointers, "should only be used for compressed headers");
+  assert (UseCompressedClassPointers, "should only be used for compressed headers");
   // Cannot assert, unverified entry point counts instructions (see .ad file)
   // vtableStubs also counts instructions in pd_code_size_limit.
   // Also do not verify_oop as this is called by verify_oop.
@@ -2242,7 +2165,7 @@

 void MacroAssembler::set_narrow_klass(Register dst, Klass* k) {
-  assert (UseCompressedKlassPointers, "should only be used for compressed headers");
+  assert (UseCompressedClassPointers, "should only be used for compressed headers");
   assert (oop_recorder() != NULL, "this assembler needs an OopRecorder");
   mov_metadata(dst, k);
   encode_klass_not_null(dst);
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/macroAssembler_aarch64.hpp
--- a/src/cpu/aarch64/vm/macroAssembler_aarch64.hpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/macroAssembler_aarch64.hpp	Fri Oct 11 12:39:49 2013 +0100
@@ -535,14 +535,6 @@
   void enter();
   void leave();

-  // debug only support for spilling and restoring/checking callee
-  // save registers around a Java method call
-
-#ifdef ASSERT
-  void spill(Register rscratcha, Register rscratchb);
-  void spillcheck(Register rscratcha, Register rscratchb);
-#endif // ASSERT
-
   // Support for getting the JavaThread pointer (i.e.; a reference to thread-local information)
   // The pointer will be loaded into the thread register.
   void get_thread(Register thread);
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/relocInfo_aarch64.cpp
--- a/src/cpu/aarch64/vm/relocInfo_aarch64.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/relocInfo_aarch64.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -58,13 +58,6 @@
   return MacroAssembler::pd_call_destination(addr());
 }

-int Relocation::pd_breakpoint_size() { Unimplemented(); return 0; }
-
-void Relocation::pd_swap_in_breakpoint(address x, short* instrs, int instrlen) { Unimplemented(); }
-
-
-void Relocation::pd_swap_out_breakpoint(address x, short* instrs, int instrlen) { Unimplemented(); }
-
 void poll_Relocation::fix_relocation_after_move(const CodeBuffer* src, CodeBuffer* dest) {
   // fprintf(stderr, "Try to fix poll reloc at %p to %p\n", addr(), dest);
   if (NativeInstruction::maybe_cpool_ref(addr())) {
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/stubGenerator_aarch64.cpp
--- a/src/cpu/aarch64/vm/stubGenerator_aarch64.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/stubGenerator_aarch64.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -71,7 +71,7 @@
  private:

 #ifdef PRODUCT
-#define inc_counter_np(counter) (0)
+#define inc_counter_np(counter) ((void)0)
 #else
   void inc_counter_np_(int& counter) {
     __ lea(rscratch2, ExternalAddress((address)&counter));
@@ -751,7 +751,6 @@
     // make sure klass is 'reasonable', which is not zero.
     __ load_klass(r0, r0);  // get klass
     __ cbz(r0, error);      // if klass is NULL it is broken
-    // TODO: Future assert that klass is lower 4g memory for UseCompressedKlassPointers

     // return if everything seems ok
     __ bind(exit);
@@ -1714,6 +1713,47 @@

   void generate_math_stubs() { Unimplemented(); }

+#ifndef BUILTIN_SIM
+  // Safefetch stubs.
+  void generate_safefetch(const char* name, int size, address* entry,
+                          address* fault_pc, address* continuation_pc) {
+    // safefetch signatures:
+    //   int SafeFetch32(int* adr, int errValue);
+    //   intptr_t SafeFetchN (intptr_t* adr, intptr_t errValue);
+    //
+    // arguments:
+    //   c_rarg0 = adr
+    //   c_rarg1 = errValue
+    //
+    // result:
+    //   PPC_RET = *adr or errValue
+
+    StubCodeMark mark(this, "StubRoutines", name);
+
+    // Entry point, pc or function descriptor.
+    *entry = __ pc();
+
+    // Load *adr into c_rarg1, may fault.
+    *fault_pc = __ pc();
+    switch (size) {
+      case 4:
+        // int32_t
+        __ ldrw(c_rarg0, Address(c_rarg0, 0));
+        break;
+      case 8:
+        // int64_t
+        __ ldr(c_rarg0, Address(c_rarg0, 0));
+        break;
+      default:
+        ShouldNotReachHere();
+    }
+
+    // return errValue or *adr
+    *continuation_pc = __ pc();
+    __ ret(lr);
+  }
+#endif
+
 #undef __
 #define __ masm->
@@ -1917,6 +1957,16 @@

     // arraycopy stubs used by compilers
     generate_arraycopy_stubs();
+
+#ifndef BUILTIN_SIM
+    // Safefetch stubs.
+    generate_safefetch("SafeFetch32", sizeof(int), &StubRoutines::_safefetch32_entry,
+                       &StubRoutines::_safefetch32_fault_pc,
+                       &StubRoutines::_safefetch32_continuation_pc);
+    generate_safefetch("SafeFetchN", sizeof(intptr_t), &StubRoutines::_safefetchN_entry,
+                       &StubRoutines::_safefetchN_fault_pc,
+                       &StubRoutines::_safefetchN_continuation_pc);
+#endif
   }

 public:
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/templateInterpreter_aarch64.cpp
--- a/src/cpu/aarch64/vm/templateInterpreter_aarch64.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/templateInterpreter_aarch64.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -208,9 +208,6 @@
   __ sub(rscratch1, rscratch2, rscratch1, ext::uxtw, 3);
   __ andr(sp, rscratch1, -16);

-#ifdef ASSERT
-  __ spillcheck(rscratch1, rscratch2);
-#endif // ASSERT
 #ifndef PRODUCT
   // tell the simulator that the method has been reentered
   if (NotifySimulator) {
@@ -1435,8 +1432,7 @@
     -(frame::interpreter_frame_initial_sp_offset) + entry_size;

   const int stub_code = frame::entry_frame_after_call_words;
-  const int extra_stack = Method::extra_stack_entries();
-  const int method_stack = (method->max_locals() + method->max_stack() + extra_stack) *
+  const int method_stack = (method->max_locals() + method->max_stack()) *
                            Interpreter::stackElementWords;
   return (overhead_size + method_stack + stub_code);
 }
@@ -1711,6 +1707,27 @@
   __ str(zr, Address(rthread, JavaThread::popframe_condition_offset()));
   assert(JavaThread::popframe_inactive == 0, "fix popframe_inactive");

+#if INCLUDE_JVMTI
+  if (EnableInvokeDynamic) {
+    Label L_done;
+
+    __ ldrb(rscratch1, Address(r13, 0));
+    __ cmpw(r1, Bytecodes::_invokestatic);
+    __ br(Assembler::EQ, L_done);
+
+    // The member name argument must be restored if _invokestatic is re-executed after a PopFrame call.
+    // Detect such a case in the InterpreterRuntime function and return the member name argument, or NULL.
+
+    __ ldr(c_rarg0, Address(rlocals, 0));
+    __ call_VM(r0, CAST_FROM_FN_PTR(address, InterpreterRuntime::member_name_arg_or_null), c_rarg0, rmethod, rscratch1);
+
+    __ cbz(r0, L_done);
+
+    __ str(r0, Address(esp, 0));
+    __ bind(L_done);
+  }
+#endif // INCLUDE_JVMTI
+
   __ dispatch_next(vtos);
   // end of PopFrame support
diff -r a84cf0dd740c -r 0b5e450b2321 src/cpu/aarch64/vm/templateTable_aarch64.cpp
--- a/src/cpu/aarch64/vm/templateTable_aarch64.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/cpu/aarch64/vm/templateTable_aarch64.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -2955,10 +2955,6 @@
   transition(vtos, vtos);
   assert(byte_no == f2_byte, "use this argument");

-#ifdef ASSERT
-  __ spill(rscratch1, rscratch2);
-#endif // ASSERT
-
   prepare_invoke(byte_no, rmethod, noreg, r2, r3);

   // rmethod: index (actually a Method*)
@@ -2973,10 +2969,6 @@
   transition(vtos, vtos);
   assert(byte_no == f1_byte, "use this argument");

-#ifdef ASSERT
-  __ spill(rscratch1, rscratch2);
-#endif // ASSERT
-
   prepare_invoke(byte_no, rmethod, noreg,  // get f1 Method*
                  r2);  // get receiver also for null check
   __ verify_oop(r2);
@@ -2991,10 +2983,6 @@
   transition(vtos, vtos);
   assert(byte_no == f1_byte, "use this argument");

-#ifdef ASSERT
-  __ spill(rscratch1, rscratch2);
-#endif // ASSERT
-
   prepare_invoke(byte_no, rmethod);  // get f1 Method*
   // do the call
   __ profile_call(r0);
@@ -3010,10 +2998,6 @@
   transition(vtos, vtos);
   assert(byte_no == f1_byte, "use this argument");

-#ifdef ASSERT
-  __ spill(rscratch1, rscratch2);
-#endif // ASSERT
-
   prepare_invoke(byte_no, r0, rmethod,  // get f1 Klass*, f2 itable index
                  r2, r3);  // recv, flags
diff -r a84cf0dd740c -r 0b5e450b2321 src/os/linux/vm/os_linux.cpp
--- a/src/os/linux/vm/os_linux.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/os/linux/vm/os_linux.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -3325,9 +3325,9 @@
   // format has been changed), we'll use the largest page size supported by
   // the processor.
-#if !defined(ZERO)
+#ifndef ZERO
   large_page_size = IA32_ONLY(4 * M) AMD64_ONLY(2 * M) IA64_ONLY(256 * M) SPARC_ONLY(4 * M)
-                     ARM_ONLY(2 * M) PPC_ONLY(4 * M);
+                     ARM_ONLY(2 * M) PPC_ONLY(4 * M) AARCH64_ONLY(2 * M);
 #endif // ZERO

   FILE *fp = fopen("/proc/meminfo", "r");
diff -r a84cf0dd740c -r 0b5e450b2321 src/os_cpu/linux_aarch64/vm/linux_aarch64.S
--- a/src/os_cpu/linux_aarch64/vm/linux_aarch64.S	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/os_cpu/linux_aarch64/vm/linux_aarch64.S	Fri Oct 11 12:39:49 2013 +0100
@@ -1,28 +1,4 @@
-	.text
-
-#ifndef BUILTIN_SIM
-
-	.globl SafeFetch32, Fetch32PFI, Fetch32Resume
-	.align 16
-	.type SafeFetch32,@function
-	// Prototype: int SafeFetch32 (int * Adr, int ErrValue)
-SafeFetch32:
-Fetch32PFI:
-	ldr w0, [x0]
-Fetch32Resume:
-	ret
-
-	.globl SafeFetchN, FetchNPFI, FetchNResume
-	.align 16
-	.type SafeFetchN,@function
-	// Prototype: intptr_t SafeFetchN (intptr_t * Adr, intptr_t ErrValue)
-SafeFetchN:
-FetchNPFI:
-	ldr x0, [x0]
-FetchNResume:
-	ret
-
-#else
+#ifdef BUILTIN_SIM

 	.globl SafeFetch32, Fetch32PFI, Fetch32Resume
 	.align 16
diff -r a84cf0dd740c -r 0b5e450b2321 src/os_cpu/linux_aarch64/vm/os_linux_aarch64.cpp
--- a/src/os_cpu/linux_aarch64/vm/os_linux_aarch64.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/os_cpu/linux_aarch64/vm/os_linux_aarch64.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -104,21 +104,6 @@
 }

 void os::initialize_thread(Thread *thr) {
-#ifdef ASSERT
-  if (!thr->is_Java_thread()) {
-    // Nothing to do!
-    return;
-  }
-
-  JavaThread *java_thread = (JavaThread *)thr;
-  // spill frames are a fixed size of N (== 6?) saved registers at 8
-  // bytes per register a 64K byte stack allows a call depth of 8K / N
-#define SPILL_STACK_SIZE (1 << 16)
-  // initalise the spill stack so we cna check callee-save registers
-  address spill_stack = new unsigned char[SPILL_STACK_SIZE];
-  java_thread->set_spill_stack(spill_stack + SPILL_STACK_SIZE);
-  java_thread->set_spill_stack_limit(spill_stack);
-#endif // ASSERT
 }

 address os::Linux::ucontext_get_pc(ucontext_t * uc) {
@@ -219,10 +204,12 @@
   trap_page_fault = 0xE
 };

+#ifdef BUILTIN_SIM
 extern "C" void Fetch32PFI () ;
 extern "C" void Fetch32Resume () ;
 extern "C" void FetchNPFI () ;
 extern "C" void FetchNResume () ;
+#endif

 extern "C" JNIEXPORT int
 JVM_handle_linux_signal(int sig,
@@ -233,6 +220,10 @@

   Thread* t = ThreadLocalStorage::get_thread_slow();

+  // Must do this before SignalHandlerMark, if crash protection installed we will longjmp away
+  // (no destructors can be run)
+  os::WatcherThreadCrashProtection::check_crash_protection(sig, t);
+
   SignalHandlerMark shm(t);

   // Note: it's not uncommon that JNI code uses signal/sigset to install
@@ -286,22 +277,31 @@

   if (info != NULL && uc != NULL && thread != NULL) {
     pc = (address) os::Linux::ucontext_get_pc(uc);

+#ifdef BUILTIN_SIM
     if (pc == (address) Fetch32PFI) {
-#ifdef BUILTIN_SIM
       uc->uc_mcontext.gregs[REG_PC] = intptr_t(Fetch32Resume) ;
-#else
-      uc->uc_mcontext.pc = intptr_t(Fetch32Resume) ;
-#endif
       return 1 ;
     }
     if (pc == (address) FetchNPFI) {
-#ifdef BUILTIN_SIM
       uc->uc_mcontext.gregs[REG_PC] = intptr_t (FetchNResume) ;
-#else
-      uc->uc_mcontext.pc = intptr_t (FetchNResume) ;
-#endif
       return 1 ;
     }
+#else
+    if (StubRoutines::is_safefetch_fault(pc)) {
+      uc->uc_mcontext.pc = intptr_t(StubRoutines::continuation_for_safefetch_fault(pc));
+      return 1;
+    }
+#endif
+
+#ifndef AMD64
+    // Halt if SI_KERNEL before more crashes get misdiagnosed as Java bugs
+    // This can happen in any running code (currently more frequently in
+    // interpreter code but has been seen in compiled code)
+    if (sig == SIGSEGV && info->si_addr == 0 && info->si_code == SI_KERNEL) {
+      fatal("An irrecoverable SI_KERNEL SIGSEGV has occurred due "
+            "to unstable signal handling in this distribution.");
+    }
+#endif // AMD64

     // Handle ALL stack overflow variations here
     if (sig == SIGSEGV) {
diff -r a84cf0dd740c -r 0b5e450b2321 src/os_cpu/linux_aarch64/vm/thread_linux_aarch64.cpp
--- a/src/os_cpu/linux_aarch64/vm/thread_linux_aarch64.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/os_cpu/linux_aarch64/vm/thread_linux_aarch64.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -32,8 +32,15 @@
                                                   void* ucontext, bool isInJava) {

   assert(Thread::current() == this, "caller must be current thread");
+  return pd_get_top_frame(fr_addr, ucontext, isInJava);
+}
+
+bool JavaThread::pd_get_top_frame_for_profiling(frame* fr_addr, void* ucontext, bool isInJava) {
+  return pd_get_top_frame(fr_addr, ucontext, isInJava);
+}
+
+bool JavaThread::pd_get_top_frame(frame* fr_addr, void* ucontext, bool isInJava) {
   assert(this->is_Java_thread(), "must be JavaThread");
-
   JavaThread* jt = (JavaThread *)this;

   // If we have a last_Java_frame, then we should use it even if
diff -r a84cf0dd740c -r 0b5e450b2321 src/os_cpu/linux_aarch64/vm/thread_linux_aarch64.hpp
--- a/src/os_cpu/linux_aarch64/vm/thread_linux_aarch64.hpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/os_cpu/linux_aarch64/vm/thread_linux_aarch64.hpp	Fri Oct 11 12:39:49 2013 +0100
@@ -71,21 +71,15 @@
   bool pd_get_top_frame_for_signal_handler(frame* fr_addr, void* ucontext,
     bool isInJava);

+  bool pd_get_top_frame_for_profiling(frame* fr_addr, void* ucontext, bool isInJava);
+private:
+  bool pd_get_top_frame(frame* fr_addr, void* ucontext, bool isInJava);
+public:
+
   // These routines are only used on cpu architectures that
   // have separate register stacks (Itanium).
   static bool register_stack_overflow() { return false; }
   static void enable_register_stack_guard() {}
   static void disable_register_stack_guard() {}

-#ifdef ASSERT
-  void set_spill_stack(address base) { _spill_stack = _spill_stack_base = base; }
-  void set_spill_stack_limit(address limit) { _spill_stack_limit = limit; }
-  static ByteSize spill_stack_offset() {
-    return byte_offset_of(JavaThread, _spill_stack) ;
-  };
-  static ByteSize spill_stack_limit_offset() {
-    return byte_offset_of(JavaThread, _spill_stack_limit) ;
-  };
-#endif
-
 #endif // OS_CPU_LINUX_AARCH64_VM_THREAD_LINUX_AARCH64_HPP
diff -r a84cf0dd740c -r 0b5e450b2321 src/share/tools/hsdis/hsdis.c
--- a/src/share/tools/hsdis/hsdis.c	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/share/tools/hsdis/hsdis.c	Fri Oct 11 12:39:49 2013 +0100
@@ -28,7 +28,6 @@
 */

 #include
-#endif
 #include
 #include
 #include
diff -r a84cf0dd740c -r 0b5e450b2321 src/share/vm/c1/c1_Compilation.cpp
--- a/src/share/vm/c1/c1_Compilation.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/share/vm/c1/c1_Compilation.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -343,8 +343,10 @@
     // 3 bytes per character.  We concatenate three such strings.
     // Yes, I know this is ridiculous, but it's debug code and glibc
     // allocates large arrays very efficiently.
-    size_t len = (65536 * 3) * 3;
-    char *name = new char[len];
+//  size_t len = (65536 * 3) * 3;
+//  char *name = new char[len];
+    size_t len = 1024;
+    char name[1024];

     strncpy(name, _method->holder()->name()->as_utf8(), len);
     strncat(name, ".", len);
@@ -352,7 +354,7 @@
     strncat(name, _method->signature()->as_symbol()->as_utf8(), len);
     unsigned char *base = code()->insts()->start();
     AArch64Simulator::get_current(UseSimulatorCache, DisableBCCheck)->notifyCompile(name, base);
-    delete[] name;
+//  delete[] name;
   }
 #endif
diff -r a84cf0dd740c -r 0b5e450b2321 src/share/vm/c1/c1_Runtime1.cpp
--- a/src/share/vm/c1/c1_Runtime1.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/share/vm/c1/c1_Runtime1.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -235,8 +235,10 @@
                                  sasm->must_gc_arguments());
 #ifdef BUILTIN_SIM
   if (NotifySimulator) {
-    size_t len = 65536;
-    char *name = new char[len];
+//  size_t len = 65536;
+//  char *name = new char[len];
+    size_t len = 1024;
+    char name[1024];
     // tell the sim about the new stub code
     AArch64Simulator *simulator = AArch64Simulator::get_current(UseSimulatorCache, DisableBCCheck);
@@ -249,7 +251,7 @@
     simulator->notifyCompile(name, base);
     // code does not get relocated so just pass offset 0 and the code is live
     simulator->notifyRelocate(base, 0);
-    delete[] name;
+//  delete[] name;
   }
 #endif
   // install blob
@@ -1067,7 +1069,7 @@
     ShouldNotReachHere();
   }

-#if defined(SPARC) || defined(PPC) || defined(TARGET_ARCH_aarch64)
+#if defined(SPARC) || defined(PPC) || defined(AARCH64)
   if (load_klass_or_mirror_patch_id ||
       stub_id == Runtime1::load_appendix_patching_id) {
     // Update the location in the nmethod with the proper
@@ -1139,7 +1141,9 @@
                   ICache::invalidate_range(instr_pc, *byte_count);
                   NativeGeneralJump::replace_mt_safe(instr_pc, copy_buff);

-                  if (load_klass_or_mirror_patch_id) {
+                  if (load_klass_or_mirror_patch_id
+                      || stub_id == Runtime1::load_appendix_patching_id
+                      || stub_id == Runtime1::access_field_patching_id) {
                     relocInfo::relocType rtype;
                     switch(stub_id) {
                       case Runtime1::load_klass_patching_id:
diff -r a84cf0dd740c -r 0b5e450b2321 src/share/vm/runtime/arguments.cpp
--- a/src/share/vm/runtime/arguments.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/share/vm/runtime/arguments.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -1459,7 +1459,6 @@
 #endif // ZERO
 }

-
 // NOTE: set_use_compressed_klass_ptrs() must be called after calling
 // set_use_compressed_oops().
 void Arguments::set_use_compressed_klass_ptrs() {
@@ -1472,10 +1471,13 @@
     }
     FLAG_SET_DEFAULT(UseCompressedClassPointers, false);
   } else {
+// ECN: FIXME - UseCompressedClassPointers is temporarily broken
+#ifndef AARCH64
     // Turn on UseCompressedClassPointers too
     if (FLAG_IS_DEFAULT(UseCompressedClassPointers)) {
       FLAG_SET_ERGO(bool, UseCompressedClassPointers, true);
     }
+#endif
     // Check the CompressedClassSpaceSize to make sure we use compressed klass ptrs.
     if (UseCompressedClassPointers) {
       if (CompressedClassSpaceSize > KlassEncodingMetaspaceMax) {
diff -r a84cf0dd740c -r 0b5e450b2321 src/share/vm/runtime/os.hpp
--- a/src/share/vm/runtime/os.hpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/share/vm/runtime/os.hpp	Fri Oct 11 12:39:49 2013 +0100
@@ -959,5 +959,9 @@
 // It'd also be eligible for inlining on many platforms.
 extern "C" int SpinPause();

+#ifdef BUILTIN_SIM
+extern "C" int SafeFetch32(int * adr, int errValue) ;
+extern "C" intptr_t SafeFetchN(intptr_t * adr, intptr_t errValue) ;
+#endif
 #endif // SHARE_VM_RUNTIME_OS_HPP
diff -r a84cf0dd740c -r 0b5e450b2321 src/share/vm/runtime/reflection.cpp
--- a/src/share/vm/runtime/reflection.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/share/vm/runtime/reflection.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -374,8 +374,9 @@
   }
   klass = klass->array_klass(dim, CHECK_NULL);
   oop obj = ArrayKlass::cast(klass)->multi_allocate(len, dimensions, CHECK_NULL);
-  // obj may be NULL is one of the dimensions is 0
-  assert(obj == NULL || obj->is_array(), "just checking");
+  // ECN: obj may be NULL if one of the dimensions is 0?
+  assert(obj != NULL, "can obj be NULL here?");
+  assert(obj->is_array(), "just checking");
   return arrayOop(obj);
 }
diff -r a84cf0dd740c -r 0b5e450b2321 src/share/vm/runtime/stubRoutines.cpp
--- a/src/share/vm/runtime/stubRoutines.cpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/share/vm/runtime/stubRoutines.cpp	Fri Oct 11 12:39:49 2013 +0100
@@ -136,12 +136,14 @@
 double (* StubRoutines::_intrinsic_cos   )(double) = NULL;
 double (* StubRoutines::_intrinsic_tan   )(double) = NULL;

+#ifndef BUILTIN_SIM
 address StubRoutines::_safefetch32_entry           = NULL;
 address StubRoutines::_safefetch32_fault_pc        = NULL;
 address StubRoutines::_safefetch32_continuation_pc = NULL;
 address StubRoutines::_safefetchN_entry            = NULL;
 address StubRoutines::_safefetchN_fault_pc         = NULL;
 address StubRoutines::_safefetchN_continuation_pc  = NULL;
+#endif

 // Initialization
 //
diff -r a84cf0dd740c -r 0b5e450b2321 src/share/vm/runtime/stubRoutines.hpp
--- a/src/share/vm/runtime/stubRoutines.hpp	Fri Oct 11 12:06:22 2013 +0100
+++ b/src/share/vm/runtime/stubRoutines.hpp	Fri Oct 11 12:39:49 2013 +0100
@@ -227,6 +227,7 @@
   static double (*_intrinsic_cos)(double);
   static double (*_intrinsic_tan)(double);

+#ifndef BUILTIN_SIM
   // Safefetch stubs.
   static address _safefetch32_entry;
   static address _safefetch32_fault_pc;
@@ -234,6 +235,7 @@
   static address _safefetchN_entry;
   static address _safefetchN_fault_pc;
   static address _safefetchN_continuation_pc;
+#endif

 public:
   // Initialization/Testing
@@ -395,10 +397,10 @@
     return _intrinsic_tan(d);
   }

+#ifndef BUILTIN_SIM
   //
   // Safefetch stub support
   //
-
   typedef int      (*SafeFetch32Stub)(int* adr, int errValue);
   typedef intptr_t (*SafeFetchNStub) (intptr_t* adr, intptr_t errValue);
@@ -422,6 +424,7 @@
     ShouldNotReachHere();
     return NULL;
   }
+#endif

   //
   // Default versions of the above arraycopy functions for platforms which do
@@ -442,6 +445,7 @@
   static void arrayof_oop_copy_uninit(HeapWord* src, HeapWord* dest, size_t count);
 };

+#ifndef BUILTIN_SIM
 // Safefetch allows to load a value from a location that's not known
 // to be valid. If the load causes a fault, the error value is returned.
 inline int SafeFetch32(int* adr, int errValue) {
@@ -452,5 +456,6 @@
   assert(StubRoutines::SafeFetchN_stub(), "stub not yet generated");
   return StubRoutines::SafeFetchN_stub()(adr, errValue);
 }
+#endif

 #endif // SHARE_VM_RUNTIME_STUBROUTINES_HPP
diff -r a84cf0dd740c -r 0b5e450b2321 test/gc/metaspace/TestPerfCountersAndMemoryPools.java
--- a/test/gc/metaspace/TestPerfCountersAndMemoryPools.java	Fri Oct 11 12:06:22 2013 +0100
+++ b/test/gc/metaspace/TestPerfCountersAndMemoryPools.java	Fri Oct 11 12:39:49 2013 +0100
@@ -31,14 +31,14 @@
  * @bug 8023476
  * @summary Tests that a MemoryPoolMXBeans and PerfCounters for metaspace
  *          report the same data.
- * @run main/othervm -XX:+IgnoreUnrecognizedVMOptions -XX:-UseCompressedOops -XX:-UseCompressedKlassPointers -XX:+UseSerialGC -XX:+UsePerfData TestPerfCountersAndMemoryPools
- * @run main/othervm -XX:+IgnoreUnrecognizedVMOptions -XX:+UseCompressedOops -XX:+UseCompressedKlassPointers -XX:+UseSerialGC -XX:+UsePerfData TestPerfCountersAndMemoryPools
+ * @run main/othervm -XX:+IgnoreUnrecognizedVMOptions -XX:-UseCompressedOops -XX:-UseCompressedClassPointers -XX:+UseSerialGC -XX:+UsePerfData TestPerfCountersAndMemoryPools
+ * @run main/othervm -XX:+IgnoreUnrecognizedVMOptions -XX:+UseCompressedOops -XX:+UseCompressedClassPointers -XX:+UseSerialGC -XX:+UsePerfData TestPerfCountersAndMemoryPools
  */

 public class TestPerfCountersAndMemoryPools {
   public static void main(String[] args) throws Exception {
     checkMemoryUsage("Metaspace", "sun.gc.metaspace");

-    if (InputArguments.contains("-XX:+UseCompressedKlassPointers") && Platform.is64bit()) {
+    if (InputArguments.contains("-XX:+UseCompressedClassPointers") && Platform.is64bit()) {
       checkMemoryUsage("Compressed Class Space", "sun.gc.compressedclassspace");
     }
   }

--- cut here ---