changeset 58004:0017ed5e309e

Merge
author prr
date Fri, 07 Feb 2020 11:09:59 -0800
parents cd9a28621c53 f1f8562f3ad2
children 915d0c063009
files src/hotspot/share/gc/shared/owstTaskTerminator.cpp src/hotspot/share/gc/shared/owstTaskTerminator.hpp src/hotspot/share/runtime/fieldType.cpp src/hotspot/share/runtime/fieldType.hpp src/java.base/share/classes/java/lang/reflect/ProxyGenerator_v49.java src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/resources/script-dir/external/jquery/jquery.js src/jdk.javadoc/share/classes/jdk/javadoc/internal/tool/ToolOption.java test/hotspot/jtreg/runtime/7162488/Test7162488.sh test/hotspot/jtreg/runtime/StackGap/testme.sh test/hotspot/jtreg/runtime/StackGuardPages/testme.sh test/hotspot/jtreg/runtime/TLS/testtls.sh test/hotspot/jtreg/vmTestbase/jit/escape/LockCoarsening/LockCoarsening001/TestDescription.java test/hotspot/jtreg/vmTestbase/jit/escape/LockCoarsening/LockCoarsening002/TestDescription.java test/hotspot/jtreg/vmTestbase/jit/escape/LockCoarsening/run.sh test/hotspot/jtreg/vmTestbase/jit/tiered/TestDescription.java test/hotspot/jtreg/vmTestbase/jit/tiered/tieredTest.sh test/hotspot/jtreg/vmTestbase/metaspace/flags/maxMetaspaceSize/TestDescription.java test/hotspot/jtreg/vmTestbase/metaspace/flags/maxMetaspaceSize/maxMetaspaceSize.sh test/hotspot/jtreg/vmTestbase/vm/compiler/CodeCacheInfo/TestDescription.java test/hotspot/jtreg/vmTestbase/vm/compiler/CodeCacheInfo/run.sh test/hotspot/jtreg/vmTestbase/vm/compiler/CodeCacheInfoOnCompilation/TestDescription.java test/hotspot/jtreg/vmTestbase/vm/compiler/CodeCacheInfoOnCompilation/run.sh test/jdk/ProblemList.txt test/jdk/jdk/jfr/event/io/EvilInstrument.java test/jdk/jdk/jfr/event/sampling/libTestNative.c test/langtools/jdk/javadoc/doclet/testOptions/help.html
diffstat 842 files changed, 17703 insertions(+), 26469 deletions(-)
--- a/.hgtags	Tue Feb 04 12:56:19 2020 -0800
+++ b/.hgtags	Fri Feb 07 11:09:59 2020 -0800
@@ -612,3 +612,9 @@
 b97c1773ccafae4a8c16cc6aedb10b2a4f9a07ed jdk-15+5
 2776da28515e087cc8849acf1e131a65ea7e77b6 jdk-14+32
 ef7d53b4fccd4a0501b17d974e84f37aa99fa813 jdk-15+6
+f728b6c7f4910d6bd6070cb4dde8393f4ba95113 jdk-14+33
+e2bc57500c1b785837982f7ce8af6751387ed73b jdk-15+7
+a96bc204e3b31ddbf909b20088964112f052927e jdk-14+34
+c7d4f2849dbfb755fc5860b362a4044ea0c9e082 jdk-15+8
+4a87bb7ebfd7f6a25ec59a5982fe3607242777f8 jdk-14+35
+62b5bfef8d618e08e6f3a56cf1fb0e67e89e9cc2 jdk-15+9
--- a/doc/building.html	Tue Feb 04 12:56:19 2020 -0800
+++ b/doc/building.html	Fri Feb 07 11:09:59 2020 -0800
@@ -301,7 +301,7 @@
 </table>
 <p>All compilers are expected to be able to compile to the C99 language standard, as some C99 features are used in the source code. Microsoft Visual Studio doesn't fully support C99 so in practice shared code is limited to using C99 features that it does support.</p>
 <h3 id="gcc">gcc</h3>
-<p>The minimum accepted version of gcc is 4.8. Older versions will generate a warning by <code>configure</code> and are unlikely to work.</p>
+<p>The minimum accepted version of gcc is 5.0. Older versions will generate a warning by <code>configure</code> and are unlikely to work.</p>
 <p>The JDK is currently known to be able to compile with at least version 8.3 of gcc.</p>
 <p>In general, any version between these two should be usable.</p>
 <h3 id="clang">clang</h3>
@@ -639,11 +639,6 @@
 <p>You will need two copies of your toolchain, one which generates output that can run on the target system (the normal, or <em>target</em>, toolchain), and one that generates output that can run on the build system (the <em>build</em> toolchain). Note that cross-compiling is only supported for gcc for the time being. The gcc standard is to prefix cross-compiling toolchains with the target denominator. If you follow this standard, <code>configure</code> is likely to pick up the toolchain correctly.</p>
 <p>The <em>build</em> toolchain will be autodetected just the same way the normal <em>build</em>/<em>target</em> toolchain will be autodetected when not cross-compiling. If this is not what you want, or if the autodetection fails, you can specify a devkit containing the <em>build</em> toolchain using <code>--with-build-devkit</code> to <code>configure</code>, or by giving <code>BUILD_CC</code> and <code>BUILD_CXX</code> arguments.</p>
 <p>It is often helpful to locate the cross-compilation tools, headers and libraries in a separate directory, outside the normal path, and point <code>configure</code> to that directory. Do this by setting the sysroot (<code>--with-sysroot</code>) and appending the directory when searching for cross-compilation tools (<code>--with-toolchain-path</code>). As a compact form, you can also use <code>--with-devkit</code> to point to a single directory, if it is correctly set up. (See <code>basics.m4</code> for details.)</p>
-<p>If you are unsure what toolchain and versions to use, these have been proved working at the time of writing:</p>
-<ul>
-<li><a href="https://releases.linaro.org/archive/13.11/components/toolchain/binaries/gcc-linaro-aarch64-linux-gnu-4.8-2013.11_linux.tar.xz">aarch64</a></li>
-<li><a href="https://launchpad.net/linaro-toolchain-unsupported/trunk/2012.09/+download/gcc-linaro-arm-linux-gnueabihf-raspbian-2012.09-20120921_linux.tar.bz2">arm 32-bit hardware floating point</a></li>
-</ul>
 <h3 id="native-libraries">Native Libraries</h3>
 <p>You will need copies of external native libraries for the <em>target</em> system, present on the <em>build</em> machine while building.</p>
 <p>Take care not to replace the <em>build</em> system's version of these libraries by mistake, since that can render the <em>build</em> machine unusable.</p>
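For illustration, a cross-compilation setup along these lines could look as follows. This is a sketch only: the target triplet and paths are placeholders, and the <code>--openjdk-target</code> flag for selecting the target is an assumption, not something this changeset touches.

    $ bash configure --openjdk-target=aarch64-linux-gnu \
        --with-sysroot=/path/to/sysroot \
        --with-toolchain-path=/path/to/cross-tools/bin

Equivalently, <code>--with-devkit=/path/to/devkit</code> can replace the sysroot/toolchain-path pair when the devkit directory is correctly set up.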
--- a/doc/building.md	Tue Feb 04 12:56:19 2020 -0800
+++ b/doc/building.md	Fri Feb 07 11:09:59 2020 -0800
@@ -339,7 +339,7 @@
 
 ### gcc
 
-The minimum accepted version of gcc is 4.8. Older versions will generate a warning
+The minimum accepted version of gcc is 5.0. Older versions will generate a warning
 by `configure` and are unlikely to work.
 
 The JDK is currently known to be able to compile with at least version 8.3 of
@@ -1038,14 +1038,6 @@
 to point to a single directory, if it is correctly set up. (See `basics.m4` for
 details.)
 
-If you are unsure what toolchain and versions to use, these have been proved
-working at the time of writing:
-
-  * [aarch64](
-https://releases.linaro.org/archive/13.11/components/toolchain/binaries/gcc-linaro-aarch64-linux-gnu-4.8-2013.11_linux.tar.xz)
-  * [arm 32-bit hardware floating  point](
-https://launchpad.net/linaro-toolchain-unsupported/trunk/2012.09/+download/gcc-linaro-arm-linux-gnueabihf-raspbian-2012.09-20120921_linux.tar.bz2)
-
 ### Native Libraries
 
 You will need copies of external native libraries for the *target* system,
--- a/doc/testing.html	Tue Feb 04 12:56:19 2020 -0800
+++ b/doc/testing.html	Fri Feb 07 11:09:59 2020 -0800
@@ -5,7 +5,7 @@
   <meta name="generator" content="pandoc" />
   <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
   <title>Testing the JDK</title>
-  <style>
+  <style type="text/css">
       code{white-space: pre-wrap;}
       span.smallcaps{font-variant: small-caps;}
       span.underline{text-decoration: underline;}
@@ -21,9 +21,9 @@
 <header id="title-block-header">
 <h1 class="title">Testing the JDK</h1>
 </header>
-<nav id="TOC" role="doc-toc">
+<nav id="TOC">
 <ul>
-<li><a href="#using-make-test-the-run-test-framework">Using "make test" (the run-test framework)</a><ul>
+<li><a href="#using-make-test-the-run-test-framework">Using &quot;make test&quot; (the run-test framework)</a><ul>
 <li><a href="#configuration">Configuration</a></li>
 </ul></li>
 <li><a href="#test-selection">Test selection</a><ul>
@@ -47,7 +47,7 @@
 </ul></li>
 </ul>
 </nav>
-<h2 id="using-make-test-the-run-test-framework">Using "make test" (the run-test framework)</h2>
+<h2 id="using-make-test-the-run-test-framework">Using &quot;make test&quot; (the run-test framework)</h2>
 <p>This new way of running tests is developer-centric. It assumes that you have built a JDK locally and want to test it. Running common test targets is simple, and more complex ad-hoc combinations of tests are possible. The user interface is forgiving, and clearly reports errors it cannot resolve.</p>
 <p>The main target <code>test</code> uses the jdk-image as the tested product. There is also an alternate target <code>exploded-test</code> that uses the exploded image instead. Not all tests will run successfully on the exploded image, but using this target can greatly improve rebuild times for certain workflows.</p>
 <p>Previously, <code>make test</code> was used to invoke an old system for running tests, and <code>make run-test</code> was used for the new test framework. For backward compatibility with scripts and muscle memory, <code>run-test</code> (and variants like <code>exploded-run-test</code> or <code>run-test-tier1</code>) are kept as aliases.</p>
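As a quick illustration of these targets (assuming a configured and locally built JDK):

    $ make test-tier1        # runs the tier1 tests against the jdk-image
    $ make run-test-tier1    # backward-compatible alias for the same run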
@@ -65,7 +65,7 @@
 <p>To be able to run microbenchmarks, <code>configure</code> needs to know where to find the JMH dependency. Use <code>--with-jmh=&lt;path to JMH jars&gt;</code> to point to a directory containing the core JMH and transitive dependencies. The recommended dependencies can be retrieved by running <code>sh make/devkit/createJMHBundle.sh</code>, after which <code>--with-jmh=build/jmh/jars</code> should work.</p>
 <h2 id="test-selection">Test selection</h2>
 <p>All functionality is available using the <code>test</code> make target. In this use case, the test or tests to be executed are controlled using the <code>TEST</code> variable. To speed up subsequent test runs with no source code changes, <code>test-only</code> can be used instead, which does not depend on the source and test image build.</p>
-<p>For some common top-level tests, direct make targets have been generated. This includes all JTReg test groups, the hotspot gtest, and custom tests (if present). This means that <code>make test-tier1</code> is equivalent to <code>make test TEST="tier1"</code>, but the latter is more tab-completion friendly. For more complex test runs, the <code>test TEST="x"</code> solution needs to be used.</p>
+<p>For some common top-level tests, direct make targets have been generated. This includes all JTReg test groups, the hotspot gtest, and custom tests (if present). This means that <code>make test-tier1</code> is equivalent to <code>make test TEST=&quot;tier1&quot;</code>, but the latter is more tab-completion friendly. For more complex test runs, the <code>test TEST=&quot;x&quot;</code> solution needs to be used.</p>
 <p>The test specifications given in <code>TEST</code> are parsed into fully qualified test descriptors, which clearly and unambiguously show which tests will be run. As an example, <code>:tier1</code> will expand to <code>jtreg:$(TOPDIR)/test/hotspot/jtreg:tier1 jtreg:$(TOPDIR)/test/jdk:tier1 jtreg:$(TOPDIR)/test/langtools:tier1 jtreg:$(TOPDIR)/test/nashorn:tier1 jtreg:$(TOPDIR)/test/jaxp:tier1</code>. You can always submit a list of fully qualified test descriptors in the <code>TEST</code> variable if you want to shortcut the parser.</p>
 <h3 id="jtreg">JTReg</h3>
 <p>JTReg tests can be selected either by picking a JTReg test group, or a selection of files or directories containing JTReg tests.</p>
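A short sketch of the selection mechanics described above (the test paths are illustrative):

    $ make test TEST="tier1"                     # expands to the tier1 groups of all test roots
    $ make test TEST="jtreg:test/jdk:tier1"      # a single fully qualified descriptor
    $ bash configure --with-jmh=build/jmh/jars   # one-time setup to enable microbenchmarks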
@@ -105,8 +105,8 @@
 <p>Additional work data is stored in <code>build/$BUILD/test-support/$TEST_ID</code>. For some frameworks, this directory might contain information that is useful in determining the cause of a failed test.</p>
 <h2 id="test-suite-control">Test suite control</h2>
 <p>It is possible to control various aspects of the test suites using make control variables.</p>
-<p>These variables use a keyword=value approach to allow multiple values to be set. So, for instance, <code>JTREG="JOBS=1;TIMEOUT_FACTOR=8"</code> will set the JTReg concurrency level to 1 and the timeout factor to 8. This is equivalent to setting <code>JTREG_JOBS=1 JTREG_TIMEOUT_FACTOR=8</code>, but using the keyword format means that the <code>JTREG</code> variable is parsed and verified for correctness, so <code>JTREG="TMIEOUT_FACTOR=8"</code> would give an error, while <code>JTREG_TMIEOUT_FACTOR=8</code> would just pass unnoticed.</p>
-<p>To separate multiple keyword=value pairs, use <code>;</code> (semicolon). Since the shell normally eats <code>;</code>, the recommended usage is to write the assignment inside quotes, e.g. <code>JTREG="...;..."</code>. This will also make sure spaces are preserved, as in <code>JTREG="VM_OPTIONS=-XshowSettings -Xlog:gc+ref=debug"</code>.</p>
+<p>These variables use a keyword=value approach to allow multiple values to be set. So, for instance, <code>JTREG=&quot;JOBS=1;TIMEOUT_FACTOR=8&quot;</code> will set the JTReg concurrency level to 1 and the timeout factor to 8. This is equivalent to setting <code>JTREG_JOBS=1 JTREG_TIMEOUT_FACTOR=8</code>, but using the keyword format means that the <code>JTREG</code> variable is parsed and verified for correctness, so <code>JTREG=&quot;TMIEOUT_FACTOR=8&quot;</code> would give an error, while <code>JTREG_TMIEOUT_FACTOR=8</code> would just pass unnoticed.</p>
+<p>To separate multiple keyword=value pairs, use <code>;</code> (semicolon). Since the shell normally eats <code>;</code>, the recommended usage is to write the assignment inside quotes, e.g. <code>JTREG=&quot;...;...&quot;</code>. This will also make sure spaces are preserved, as in <code>JTREG=&quot;VM_OPTIONS=-XshowSettings -Xlog:gc+ref=debug&quot;</code>.</p>
 <p>(Other ways are possible, e.g. using backslash: <code>JTREG=JOBS=1\;TIMEOUT_FACTOR=8</code>. Also, as a special technique, the string <code>%20</code> will be replaced with space for certain options, e.g. <code>JTREG=VM_OPTIONS=-XshowSettings%20-Xlog:gc+ref=debug</code>. This can be useful if you have layers of scripts and have trouble getting proper quoting of command line arguments through.)</p>
 <p>As far as possible, the names of the keywords have been standardized between test suites.</p>
 <h3 id="general-keywords-test_opts">General keywords (TEST_OPTS)</h3>
@@ -135,8 +135,8 @@
 <p>The timeout factor (<code>-timeoutFactor</code>).</p>
 <p>Defaults to 4.</p>
 <h4 id="test_mode">TEST_MODE</h4>
-<p>The test mode (<code>-agentvm</code>, <code>-samevm</code> or <code>-othervm</code>).</p>
-<p>Defaults to <code>-agentvm</code>.</p>
+<p>The test mode (<code>agentvm</code> or <code>othervm</code>).</p>
+<p>Defaults to <code>agentvm</code>.</p>
 <h4 id="assert">ASSERT</h4>
 <p>Enable asserts (<code>-ea -esa</code>, or none).</p>
 <p>Set to <code>true</code> or <code>false</code>. If true, adds <code>-ea -esa</code>. Defaults to true, except for hotspot.</p>
@@ -161,7 +161,7 @@
 <p>Set to <code>true</code> or <code>false</code>. If <code>true</code>, JTReg will use <code>-match:</code> option, otherwise <code>-exclude:</code> will be used. Default is <code>false</code>.</p>
 <h4 id="options">OPTIONS</h4>
 <p>Additional options to the JTReg test framework.</p>
-<p>Use <code>JTREG="OPTIONS=--help all"</code> to see all available JTReg options.</p>
+<p>Use <code>JTREG=&quot;OPTIONS=--help all&quot;</code> to see all available JTReg options.</p>
 <h4 id="java_options-1">JAVA_OPTIONS</h4>
 <p>Additional Java options to JTReg (<code>-javaoption</code>).</p>
 <h4 id="vm_options-1">VM_OPTIONS</h4>
@@ -176,7 +176,7 @@
 <p>Default is 1. Set to -1 to repeat indefinitely. This can be especially useful combined with <code>OPTIONS=--gtest_break_on_failure</code> to reproduce an intermittent problem.</p>
 <h4 id="options-1">OPTIONS</h4>
 <p>Additional options to the Gtest test framework.</p>
-<p>Use <code>GTEST="OPTIONS=--help"</code> to see all available Gtest options.</p>
+<p>Use <code>GTEST=&quot;OPTIONS=--help&quot;</code> to see all available Gtest options.</p>
 <h4 id="aot_modules-2">AOT_MODULES</h4>
 <p>Generate AOT modules before testing for the specified module, or set of modules. If multiple modules are specified, they should be separated by space (or, to help avoid quoting issues, the special value <code>%20</code>).</p>
 <h3 id="microbenchmark-keywords">Microbenchmark keywords</h3>
@@ -203,7 +203,7 @@
 <p>To run these tests correctly on Ubuntu 18.04, additional parameters selecting the correct docker image must be passed using <code>JAVA_OPTIONS</code>.</p>
 <pre><code>$ make test TEST=&quot;jtreg:test/hotspot/jtreg/containers/docker&quot; JTREG=&quot;JAVA_OPTIONS=-Djdk.test.docker.image.name=ubuntu -Djdk.test.docker.image.version=latest&quot;</code></pre>
 <h3 id="non-us-locale">Non-US locale</h3>
-<p>If your locale is non-US, some tests are likely to fail. To work around this you can set the locale to US. On Unix platforms simply setting <code>LANG="en_US"</code> in the environment before running tests should work. On Windows, setting <code>JTREG="VM_OPTIONS=-Duser.language=en -Duser.country=US"</code> helps for most, but not all test cases. For example:</p>
+<p>If your locale is non-US, some tests are likely to fail. To work around this you can set the locale to US. On Unix platforms simply setting <code>LANG=&quot;en_US&quot;</code> in the environment before running tests should work. On Windows, setting <code>JTREG=&quot;VM_OPTIONS=-Duser.language=en -Duser.country=US&quot;</code> helps for most, but not all test cases. For example:</p>
 <pre><code>$ export LANG=&quot;en_US&quot; &amp;&amp; make test TEST=...
 $ make test JTREG=&quot;VM_OPTIONS=-Duser.language=en -Duser.country=US&quot; TEST=...</code></pre>
 <h3 id="pkcs11-tests">PKCS11 Tests</h3>
@@ -214,11 +214,11 @@
 <p>Some Client UI tests use key sequences which may be reserved by the operating system. Usually that causes the test to fail, so it is highly recommended to disable system key shortcuts prior to testing. The steps to access and disable system key shortcuts for various platforms are provided below.</p>
 <h4 id="macos">MacOS</h4>
 <p>Choose Apple menu; System Preferences, click Keyboard, then click Shortcuts; select or deselect desired shortcut.</p>
-<p>For example, test/jdk/javax/swing/TooltipManager/JMenuItemToolTipKeyBindingsTest/JMenuItemToolTipKeyBindingsTest.java fails on MacOS because it uses <code>CTRL + F1</code> key sequence to show or hide tooltip message but the key combination is reserved by the operating system. To run the test correctly the default global key shortcut should be disabled using the steps described above, and then deselect "Turn keyboard access on or off" option which is responsible for <code>CTRL + F1</code> combination.</p>
+<p>For example, test/jdk/javax/swing/TooltipManager/JMenuItemToolTipKeyBindingsTest/JMenuItemToolTipKeyBindingsTest.java fails on MacOS because it uses the <code>CTRL + F1</code> key sequence to show or hide the tooltip message, but that key combination is reserved by the operating system. To run the test correctly, disable the default global key shortcut using the steps described above and deselect the &quot;Turn keyboard access on or off&quot; option, which is responsible for the <code>CTRL + F1</code> combination.</p>
 <h4 id="linux">Linux</h4>
 <p>Open the Activities overview and start typing Settings; Choose Settings, click Devices, then click Keyboard; set or override desired shortcut.</p>
 <h4 id="windows">Windows</h4>
-<p>Type <code>gpedit</code> in the Search and then click Edit group policy; navigate to User Configuration -&gt; Administrative Templates -&gt; Windows Components -&gt; File Explorer; in the right-side pane look for "Turn off Windows key hotkeys" and double click on it; enable or disable hotkeys.</p>
+<p>Type <code>gpedit</code> in the Search and then click Edit group policy; navigate to User Configuration -&gt; Administrative Templates -&gt; Windows Components -&gt; File Explorer; in the right-side pane look for &quot;Turn off Windows key hotkeys&quot; and double click on it; enable or disable hotkeys.</p>
 <p>Note: a restart is required for the settings to take effect.</p>
 </body>
 </html>
--- a/doc/testing.md	Tue Feb 04 12:56:19 2020 -0800
+++ b/doc/testing.md	Fri Feb 07 11:09:59 2020 -0800
@@ -261,9 +261,9 @@
 Defaults to 4.
 
 #### TEST_MODE
-The test mode (`-agentvm`, `-samevm` or `-othervm`).
+The test mode (`agentvm` or `othervm`).
 
-Defaults to `-agentvm`.
+Defaults to `agentvm`.
 
 #### ASSERT
 Enable asserts (`-ea -esa`, or none).
--- a/make/GenerateLinkOptData.gmk	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/GenerateLinkOptData.gmk	Fri Feb 07 11:09:59 2020 -0800
@@ -66,6 +66,13 @@
 	$(call LogInfo, Generating $(patsubst $(OUTPUTDIR)/%, %, $@))
 	$(call LogInfo, Generating $(patsubst $(OUTPUTDIR)/%, %, $(JLI_TRACE_FILE)))
 	$(FIXPATH) $(INTERIM_IMAGE_DIR)/bin/java -XX:DumpLoadedClassList=$@.raw \
+	    -Duser.language=en -Duser.country=US \
+	    -cp $(SUPPORT_OUTPUTDIR)/classlist.jar \
+	    build.tools.classlist.HelloClasslist $(LOG_DEBUG)
+	$(GREP) -v HelloClasslist $@.raw > $(INTERIM_IMAGE_DIR)/lib/classlist
+	$(FIXPATH) $(INTERIM_IMAGE_DIR)/bin/java -Xshare:dump \
+	    -Xmx128M -Xms128M $(LOG_INFO)
+	$(FIXPATH) $(INTERIM_IMAGE_DIR)/bin/java -XX:DumpLoadedClassList=$@.raw \
 	    -Djava.lang.invoke.MethodHandle.TRACE_RESOLVE=true \
 	    -Duser.language=en -Duser.country=US \
 	    -cp $(SUPPORT_OUTPUTDIR)/classlist.jar \
--- a/make/RunTests.gmk	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/RunTests.gmk	Fri Feb 07 11:09:59 2020 -0800
@@ -1073,10 +1073,14 @@
 	$$(call LogWarn, Test report is stored in $$(strip \
 	    $$(subst $$(TOPDIR)/, , $$($1_TEST_RESULTS_DIR))))
 	$$(call LogWarn, Warning: Special test results are not properly parsed!)
-	$$(eval $1_PASSED := 0)
-	$$(eval $1_FAILED := 0)
+	$$(eval $1_PASSED := $$(shell \
+	  if [ `$(CAT) $$($1_EXITCODE)` = "0" ]; then $(ECHO) 1; else $(ECHO) 0; fi \
+	))
+	$$(eval $1_FAILED := $$(shell \
+	  if [ `$(CAT) $$($1_EXITCODE)` = "0" ]; then $(ECHO) 0; else $(ECHO) 1; fi \
+	))
 	$$(eval $1_ERROR := 0)
-	$$(eval $1_TOTAL := 0)
+	$$(eval $1_TOTAL := 1)
 
   $1: run-test-$1 parse-test-$1
 
--- a/make/autoconf/flags-cflags.m4	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/autoconf/flags-cflags.m4	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2011, 2019, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
 # DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 #
 # This code is free software; you can redistribute it and/or modify it
@@ -532,10 +532,13 @@
   if test "x$TOOLCHAIN_TYPE" = xgcc; then
     TOOLCHAIN_CFLAGS_JVM="$TOOLCHAIN_CFLAGS_JVM -fcheck-new -fstack-protector"
     TOOLCHAIN_CFLAGS_JDK="-pipe -fstack-protector"
-    # reduce lib size on s390x in link step, this needs also special compile flags
-    if test "x$OPENJDK_TARGET_CPU" = xs390x; then
-      TOOLCHAIN_CFLAGS_JVM="$TOOLCHAIN_CFLAGS_JVM -ffunction-sections -fdata-sections"
+    # Reduce lib size on linux in the link step; this also needs special compile flags.
+    # On s390x, also do this for libjvm (where the serviceability agent is not supported).
+    if test "x$ENABLE_LINKTIME_GC" = xtrue; then
       TOOLCHAIN_CFLAGS_JDK="$TOOLCHAIN_CFLAGS_JDK -ffunction-sections -fdata-sections"
+      if test "x$OPENJDK_TARGET_CPU" = xs390x; then
+        TOOLCHAIN_CFLAGS_JVM="$TOOLCHAIN_CFLAGS_JVM -ffunction-sections -fdata-sections"
+      fi
     fi
     # technically NOT for CXX (but since this gives *worse* performance, use
     # no-strict-aliasing everywhere!)
@@ -595,8 +598,7 @@
   # our toolchains are in a condition to support that. But what we loosely aim for is
   # C99 level.
   if test "x$TOOLCHAIN_TYPE" = xgcc || test "x$TOOLCHAIN_TYPE" = xclang || test "x$TOOLCHAIN_TYPE" = xxlc; then
-    # This raises the language level for older 4.8 gcc, while lowering it for later
-    # versions. clang and xlclang support the same flag.
+    # Explicitly set C99. clang and xlclang support the same flag.
     LANGSTD_CFLAGS="-std=c99"
   elif test "x$TOOLCHAIN_TYPE" = xsolstudio; then
     # We can't turn on -std=c99 without breaking compilation of the splashscreen/png
@@ -813,7 +815,7 @@
     fi
 
     $1_CXXSTD_CXXFLAG="-std=gnu++98"
-    FLAGS_CXX_COMPILER_CHECK_ARGUMENTS(ARGUMENT: [${$1_CXXSTD_CXXFLAG} -Werror],
+    FLAGS_CXX_COMPILER_CHECK_ARGUMENTS(ARGUMENT: [${$1_CXXSTD_CXXFLAG}],
         PREFIX: $3, IF_FALSE: [$1_CXXSTD_CXXFLAG=""])
     $1_TOOLCHAIN_CFLAGS_JDK_CXXONLY="${$1_CXXSTD_CXXFLAG}"
     $1_TOOLCHAIN_CFLAGS_JVM="${$1_TOOLCHAIN_CFLAGS_JVM} ${$1_CXXSTD_CXXFLAG}"
@@ -940,10 +942,10 @@
   # Notably, value range propagation now assumes that the this pointer of C++
   # member functions is non-null.
   NO_DELETE_NULL_POINTER_CHECKS_CFLAG="-fno-delete-null-pointer-checks"
-  FLAGS_COMPILER_CHECK_ARGUMENTS(ARGUMENT: [$NO_DELETE_NULL_POINTER_CHECKS_CFLAG -Werror],
+  FLAGS_COMPILER_CHECK_ARGUMENTS(ARGUMENT: [$NO_DELETE_NULL_POINTER_CHECKS_CFLAG],
       PREFIX: $2, IF_FALSE: [NO_DELETE_NULL_POINTER_CHECKS_CFLAG=""])
   NO_LIFETIME_DSE_CFLAG="-fno-lifetime-dse"
-  FLAGS_COMPILER_CHECK_ARGUMENTS(ARGUMENT: [$NO_LIFETIME_DSE_CFLAG -Werror],
+  FLAGS_COMPILER_CHECK_ARGUMENTS(ARGUMENT: [$NO_LIFETIME_DSE_CFLAG],
       PREFIX: $2, IF_FALSE: [NO_LIFETIME_DSE_CFLAG=""])
   $1_GCC6_CFLAGS="${NO_DELETE_NULL_POINTER_CHECKS_CFLAG} ${NO_LIFETIME_DSE_CFLAG}"
 ])
--- a/make/autoconf/flags-ldflags.m4	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/autoconf/flags-ldflags.m4	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2011, 2019, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
 # DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 #
 # This code is free software; you can redistribute it and/or modify it
@@ -72,9 +72,13 @@
     # Add -z defs, to forbid undefined symbols in object files.
     # add relro (mark relocations read only) for all libs
     BASIC_LDFLAGS="$BASIC_LDFLAGS -Wl,-z,defs -Wl,-z,relro"
-    # s390x : remove unused code+data in link step
-    if test "x$OPENJDK_TARGET_CPU" = xs390x; then
-      BASIC_LDFLAGS="$BASIC_LDFLAGS -Wl,--gc-sections -Wl,--print-gc-sections"
+    # Linux: remove unused code+data in the link step
+    if test "x$ENABLE_LINKTIME_GC" = xtrue; then
+      if test "x$OPENJDK_TARGET_CPU" = xs390x; then
+        BASIC_LDFLAGS="$BASIC_LDFLAGS -Wl,--gc-sections -Wl,--print-gc-sections"
+      else
+        BASIC_LDFLAGS_JDK_ONLY="$BASIC_LDFLAGS_JDK_ONLY -Wl,--gc-sections"
+      fi
     fi
 
     BASIC_LDFLAGS_JVM_ONLY="-Wl,-O1"
--- a/make/autoconf/flags.m4	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/autoconf/flags.m4	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2011, 2018, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
 # DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 #
 # This code is free software; you can redistribute it and/or modify it
@@ -49,7 +49,7 @@
     # --- Arm-sflt CFLAGS and ASFLAGS ---
     # Armv5te is required for assembler, because pld insn used in arm32 hotspot is only in v5E and above.
     # However, there is also a GCC bug which generates unaligned strd/ldrd instructions on armv5te:
-    # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82445, and it was fixed only quite recently.
+    # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82445, and it was fixed in gcc 7.1.
     # The resulting compromise is to enable v5TE for assembler and let GCC generate code for v5T.
     if test "x$OPENJDK_TARGET_ABI_PROFILE" = xarm-vfp-sflt; then
       ARM_FLOAT_TYPE=vfp-sflt
@@ -438,7 +438,7 @@
 
   saved_cflags="$CFLAGS"
   saved_cc="$CC"
-  CFLAGS="$CFLAGS ARG_ARGUMENT"
+  CFLAGS="$CFLAGS $CFLAGS_WARNINGS_ARE_ERRORS ARG_ARGUMENT"
   CC="$ARG_PREFIX[CC]"
   AC_LANG_PUSH([C])
   AC_COMPILE_IFELSE([AC_LANG_SOURCE([[int i;]])], [],
@@ -469,7 +469,7 @@
 
   saved_cxxflags="$CXXFLAGS"
   saved_cxx="$CXX"
-  CXXFLAGS="$CXXFLAG ARG_ARGUMENT"
+  CXXFLAGS="$CXXFLAG $CFLAGS_WARNINGS_ARE_ERRORS ARG_ARGUMENT"
   CXX="$ARG_PREFIX[CXX]"
   AC_LANG_PUSH([C++])
   AC_COMPILE_IFELSE([AC_LANG_SOURCE([[int i;]])], [],
--- a/make/autoconf/jdk-options.m4	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/autoconf/jdk-options.m4	Fri Feb 07 11:09:59 2020 -0800
@@ -139,6 +139,30 @@
 
   AC_SUBST(ENABLE_HEADLESS_ONLY)
 
+  # Should we use link-time gc to remove unused code sections in the JDK build?
+  AC_MSG_CHECKING([linktime gc])
+  AC_ARG_ENABLE([linktime-gc], [AS_HELP_STRING([--enable-linktime-gc],
+      [linktime gc unused code sections in the JDK build @<:@disabled@:>@])])
+
+  if test "x$enable_linktime_gc" = "xyes"; then
+    ENABLE_LINKTIME_GC="true"
+    AC_MSG_RESULT([yes])
+  elif test "x$enable_linktime_gc" = "xno"; then
+    ENABLE_LINKTIME_GC="false"
+    AC_MSG_RESULT([no])
+  elif test "x$OPENJDK_TARGET_OS" = "xlinux" && test "x$OPENJDK_TARGET_CPU" = xs390x; then
+    ENABLE_LINKTIME_GC="true"
+    AC_MSG_RESULT([yes])
+  elif test "x$enable_linktime_gc" = "x"; then
+    ENABLE_LINKTIME_GC="false"
+    AC_MSG_RESULT([no])
+  else
+    AC_MSG_ERROR([--enable-linktime-gc can only take yes or no])
+  fi
+
+  AC_SUBST(ENABLE_LINKTIME_GC)
+
+
   # Should we build the complete docs, or just a lightweight version?
   AC_ARG_ENABLE([full-docs], [AS_HELP_STRING([--enable-full-docs],
       [build complete documentation @<:@enabled if all tools found@:>@])])
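A minimal sketch of exercising the new option (per the logic above, on linux/s390x it defaults to enabled even when the flag is not given):

    $ bash configure --enable-linktime-gc      # force on
    $ bash configure --disable-linktime-gc     # force off (standard autoconf negation)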
--- a/make/autoconf/spec.gmk.in	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/autoconf/spec.gmk.in	Fri Feb 07 11:09:59 2020 -0800
@@ -301,6 +301,8 @@
 # Only build headless support or not
 ENABLE_HEADLESS_ONLY := @ENABLE_HEADLESS_ONLY@
 
+ENABLE_LINKTIME_GC := @ENABLE_LINKTIME_GC@
+
 ENABLE_FULL_DOCS := @ENABLE_FULL_DOCS@
 
 # JDK_OUTPUTDIR specifies where a working jvm is built.
--- a/make/autoconf/toolchain.m4	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/autoconf/toolchain.m4	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2011, 2019, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
 # DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 #
 # This code is free software; you can redistribute it and/or modify it
@@ -52,7 +52,7 @@
 
 # Minimum supported versions, empty means unspecified
 TOOLCHAIN_MINIMUM_VERSION_clang="3.2"
-TOOLCHAIN_MINIMUM_VERSION_gcc="4.8"
+TOOLCHAIN_MINIMUM_VERSION_gcc="5.0"
 TOOLCHAIN_MINIMUM_VERSION_microsoft="16.00.30319.01" # VS2010
 TOOLCHAIN_MINIMUM_VERSION_solstudio="5.13"
 TOOLCHAIN_MINIMUM_VERSION_xlc=""
@@ -64,10 +64,11 @@
 # Must have CC_VERSION_NUMBER and CXX_VERSION_NUMBER.
 # $1 - optional variable prefix for compiler and version variables (BUILD_)
 # $2 - optional variable prefix for comparable variable (OPENJDK_BUILD_)
+# $3 - optional human readable description for the type of compilers ("build " or "")
 AC_DEFUN([TOOLCHAIN_PREPARE_FOR_VERSION_COMPARISONS],
 [
   if test "x[$]$1CC_VERSION_NUMBER" != "x[$]$1CXX_VERSION_NUMBER"; then
-    AC_MSG_WARN([C and C++ compiler have different version numbers, [$]$1CC_VERSION_NUMBER vs [$]$1CXX_VERSION_NUMBER.])
+    AC_MSG_WARN([The $3C and C++ compilers have different version numbers, [$]$1CC_VERSION_NUMBER vs [$]$1CXX_VERSION_NUMBER.])
     AC_MSG_WARN([This typically indicates a broken setup, and is not supported])
   fi
 
@@ -450,9 +451,10 @@
     # There is no specific version flag, but all output starts with a version string.
     # First line typically looks something like:
     # Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 16.00.40219.01 for 80x86
+    # but the compiler name may vary depending on locale.
     COMPILER_VERSION_OUTPUT=`"$COMPILER" 2>&1 | $GREP -v 'ERROR.*UtilTranslatePathList' | $HEAD -n 1 | $TR -d '\r'`
     # Check that this is likely to be Microsoft CL.EXE.
-    $ECHO "$COMPILER_VERSION_OUTPUT" | $GREP "Microsoft.*Compiler" > /dev/null
+    $ECHO "$COMPILER_VERSION_OUTPUT" | $GREP "Microsoft" > /dev/null
     if test $? -ne 0; then
       AC_MSG_NOTICE([The $COMPILER_NAME compiler (located as $COMPILER) does not seem to be the required $TOOLCHAIN_TYPE compiler.])
       AC_MSG_NOTICE([The result from running it was: "$COMPILER_VERSION_OUTPUT"])
@@ -997,7 +999,7 @@
 
     TOOLCHAIN_EXTRACT_COMPILER_VERSION(BUILD_CC, [BuildC])
     TOOLCHAIN_EXTRACT_COMPILER_VERSION(BUILD_CXX, [BuildC++])
-    TOOLCHAIN_PREPARE_FOR_VERSION_COMPARISONS([BUILD_], [OPENJDK_BUILD_])
+    TOOLCHAIN_PREPARE_FOR_VERSION_COMPARISONS([BUILD_], [OPENJDK_BUILD_], [build ])
     TOOLCHAIN_EXTRACT_LD_VERSION(BUILD_LD, [build linker])
     TOOLCHAIN_PREPARE_FOR_LD_VERSION_COMPARISONS([BUILD_], [OPENJDK_BUILD_])
   else
@@ -1013,7 +1015,7 @@
     BUILD_STRIP="$STRIP"
     BUILD_AR="$AR"
 
-    TOOLCHAIN_PREPARE_FOR_VERSION_COMPARISONS([], [OPENJDK_BUILD_])
+    TOOLCHAIN_PREPARE_FOR_VERSION_COMPARISONS([], [OPENJDK_BUILD_], [build ])
     TOOLCHAIN_PREPARE_FOR_LD_VERSION_COMPARISONS([BUILD_], [OPENJDK_BUILD_])
   fi
 
--- a/make/common/MakeBase.gmk	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/common/MakeBase.gmk	Fri Feb 07 11:09:59 2020 -0800
@@ -525,15 +525,16 @@
 # Param 2 - (optional) name of file to store value in
 DependOnVariableHelper = \
     $(strip \
-        $(eval -include $(call DependOnVariableFileName, $1, $2)) \
+        $(eval $1_filename := $(call DependOnVariableFileName, $1, $2)) \
+        $(if $(wildcard $($1_filename)), $(eval include $($1_filename))) \
         $(if $(call equals, $(strip $($1)), $(strip $($1_old))),,\
-          $(call MakeDir, $(dir $(call DependOnVariableFileName, $1, $2))) \
+          $(call MakeDir, $(dir $($1_filename))) \
           $(if $(findstring $(LOG_LEVEL), trace), \
               $(info NewVariable $1: >$(strip $($1))<) \
               $(info OldVariable $1: >$(strip $($1_old))<)) \
           $(call WriteFile, $1_old:=$(call DoubleDollar,$(call EscapeHash,$($1))), \
-              $(call DependOnVariableFileName, $1, $2))) \
-        $(call DependOnVariableFileName, $1, $2) \
+              $($1_filename))) \
+        $($1_filename) \
     )
 
 # Main macro
--- a/make/gensrc/Gensrc-jdk.internal.vm.compiler.gmk	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/gensrc/Gensrc-jdk.internal.vm.compiler.gmk	Fri Feb 07 11:09:59 2020 -0800
@@ -36,6 +36,7 @@
 ################################################################################
 
 PROC_SRC_SUBDIRS := \
+    org.graalvm.compiler.asm.amd64 \
     org.graalvm.compiler.code \
     org.graalvm.compiler.core \
     org.graalvm.compiler.core.aarch64 \
--- a/make/hotspot/lib/CompileJvm.gmk	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/hotspot/lib/CompileJvm.gmk	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2013, 2019, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2013, 2020, Oracle and/or its affiliates. All rights reserved.
 # DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 #
 # This code is free software; you can redistribute it and/or modify it
@@ -82,7 +82,7 @@
     delete-non-virtual-dtor char-subscripts array-bounds int-in-bool-context \
     ignored-qualifiers  missing-field-initializers implicit-fallthrough \
     empty-body strict-overflow sequence-point maybe-uninitialized \
-    misleading-indentation cast-function-type
+    misleading-indentation cast-function-type invalid-offsetof
 
 ifeq ($(call check-jvm-feature, zero), true)
   DISABLED_WARNINGS_gcc += return-type switch clobbered
@@ -91,7 +91,8 @@
 DISABLED_WARNINGS_clang := tautological-compare \
     undefined-var-template sometimes-uninitialized unknown-pragmas \
     delete-non-virtual-dtor missing-braces char-subscripts \
-    ignored-qualifiers missing-field-initializers mismatched-tags
+    ignored-qualifiers missing-field-initializers mismatched-tags \
+    invalid-offsetof
 
 DISABLED_WARNINGS_solstudio := labelnotused hidef w_novirtualdescr inlafteruse \
     unknownpragma doubunder w_enumnotused w_toomanyenumnotused \
--- a/make/hotspot/lib/JvmFeatures.gmk	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/hotspot/lib/JvmFeatures.gmk	Fri Feb 07 11:09:59 2020 -0800
@@ -214,6 +214,10 @@
         cpCache.cpp \
         defNewGeneration.cpp \
         frame_arm.cpp \
+        frame_aarch64.cpp \
+        frame_ppc.cpp \
+        frame_s390.cpp \
+        frame_x86.cpp \
         genCollectedHeap.cpp \
         generation.cpp \
         genMarkSweep.cpp \
@@ -223,6 +227,10 @@
         heap.cpp \
         icache.cpp \
         icache_arm.cpp \
+        icache_aarch64.cpp \
+        icache_ppc.cpp \
+        icache_s390.cpp \
+        icache_x86.cpp \
         instanceKlass.cpp \
         invocationCounter.cpp \
         iterator.cpp \
--- a/make/jdk/src/classes/build/tools/classlist/HelloClasslist.java	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/jdk/src/classes/build/tools/classlist/HelloClasslist.java	Fri Feb 07 11:09:59 2020 -0800
@@ -99,7 +99,7 @@
                 DateFormat.getDateInstance(DateFormat.DEFAULT, Locale.ROOT)
                         .format(new Date()));
 
-        LOGGER.log(Level.INFO, "New Date: " + newDate + " - old: " + oldDate);
+        LOGGER.log(Level.FINE, "New Date: " + newDate + " - old: " + oldDate);
     }
 
 }
--- a/make/lib/CoreLibraries.gmk	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/lib/CoreLibraries.gmk	Fri Feb 07 11:09:59 2020 -0800
@@ -144,7 +144,7 @@
     CFLAGS := $(CFLAGS_JDKLIB) \
         $(LIBZ_CFLAGS), \
     CFLAGS_unix := $(BUILD_LIBZIP_MMAP) -UDEBUG, \
-    DISABLED_WARNINGS_gcc := unused-function, \
+    DISABLED_WARNINGS_gcc := unused-function implicit-fallthrough, \
     LDFLAGS := $(LDFLAGS_JDKLIB) \
         $(call SET_SHARED_LIBRARY_ORIGIN), \
     LIBS_unix := -ljvm -ljava $(LIBZ_LIBS), \
@@ -210,7 +210,7 @@
     EXTRA_FILES := $(LIBJLI_EXTRA_FILES), \
     OPTIMIZATION := HIGH, \
     CFLAGS := $(CFLAGS_JDKLIB) $(LIBJLI_CFLAGS), \
-    DISABLED_WARNINGS_gcc := unused-function, \
+    DISABLED_WARNINGS_gcc := unused-function implicit-fallthrough, \
     DISABLED_WARNINGS_clang := sometimes-uninitialized format-nonliteral, \
     LDFLAGS := $(LDFLAGS_JDKLIB) \
         $(call SET_SHARED_LIBRARY_ORIGIN), \
--- a/make/test/JtregGraalUnit.gmk	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/test/JtregGraalUnit.gmk	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2018, 2019, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2018, 2020, Oracle and/or its affiliates. All rights reserved.
 # DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 #
 # This code is free software; you can redistribute it and/or modify it
@@ -176,8 +176,6 @@
     ))
 
     TARGETS_IMAGE += $(COPY_HOTSPOT_JTREG_GRAAL)
-  else
-    $(info Skip building of Graal unit tests because 3rd party libraries directory is not specified)
   endif
 endif
 
--- a/make/test/JtregNativeHotspot.gmk	Tue Feb 04 12:56:19 2020 -0800
+++ b/make/test/JtregNativeHotspot.gmk	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2015, 2019, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2015, 2020, Oracle and/or its affiliates. All rights reserved.
 # DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 #
 # This code is free software; you can redistribute it and/or modify it
@@ -880,8 +880,10 @@
 ifeq ($(call isTargetOs, windows), true)
     BUILD_HOTSPOT_JTREG_EXECUTABLES_CFLAGS_exeFPRegs := -MT
     BUILD_HOTSPOT_JTREG_EXCLUDE += exesigtest.c libterminatedThread.c
+    BUILD_HOTSPOT_JTREG_EXECUTABLES_LIBS_exejvm-test-launcher := jvm.lib
 
 else
+    BUILD_HOTSPOT_JTREG_EXECUTABLES_LIBS_exejvm-test-launcher := -ljvm
     BUILD_HOTSPOT_JTREG_LIBRARIES_LIBS_libbootclssearch_agent += -lpthread
     BUILD_HOTSPOT_JTREG_LIBRARIES_LIBS_libsystemclssearch_agent += -lpthread
     BUILD_HOTSPOT_JTREG_LIBRARIES_LIBS_libgetsysprop001 += -lpthread
--- a/src/hotspot/cpu/aarch64/aarch64Test.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/aarch64/aarch64Test.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2014, Red Hat Inc. All rights reserved.
+ * Copyright (c) 2014, 2020, Red Hat Inc. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -32,10 +32,12 @@
 
 extern "C" void entry(CodeBuffer*);
 
+#ifdef ASSERT
 void aarch64TestHook()
 {
   BufferBlob* b = BufferBlob::create("aarch64Test", 500000);
   CodeBuffer code(b);
-  MacroAssembler _masm(&code);
   entry(&code);
+  BufferBlob::free(b);
 }
+#endif
--- a/src/hotspot/cpu/aarch64/assembler_aarch64.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/aarch64/assembler_aarch64.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,6 +1,6 @@
 /*
- * Copyright (c) 1997, 2012, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2014, Red Hat Inc. All rights reserved.
+ * Copyright (c) 1997, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2014, 2020 Red Hat Inc. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -73,7 +73,6 @@
     }
     assert(ok, "Assembler smoke test failed");
   }
-#endif // ASSERT
 
 void entry(CodeBuffer *cb) {
 
@@ -91,7 +90,6 @@
 
   // Smoke test for assembler
 
-#ifdef ASSERT
 // BEGIN  Generated code -- do not edit
 // Generated by aarch64-asmtest.py
     Label back, forth;
@@ -1459,9 +1457,8 @@
     asm_check((unsigned int *)PC, vector_insns,
               sizeof vector_insns / sizeof vector_insns[0]);
   }
-
+}
 #endif // ASSERT
-}
 
 #undef __
 
--- a/src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -138,18 +138,6 @@
   }
 }
 
-void LIR_Assembler::set_24bit_FPU() { Unimplemented(); }
-
-void LIR_Assembler::reset_FPU() { Unimplemented(); }
-
-void LIR_Assembler::fpop() { Unimplemented(); }
-
-void LIR_Assembler::fxch(int i) { Unimplemented(); }
-
-void LIR_Assembler::fld(int i) { Unimplemented(); }
-
-void LIR_Assembler::ffree(int i) { Unimplemented(); }
-
 void LIR_Assembler::breakpoint() { Unimplemented(); }
 
 void LIR_Assembler::push(LIR_Opr opr) { Unimplemented(); }
--- a/src/hotspot/cpu/aarch64/c1_LIRGenerator_aarch64.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/aarch64/c1_LIRGenerator_aarch64.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1127,7 +1127,6 @@
   // arguments of lir_convert
   LIR_Opr conv_input = input;
   LIR_Opr conv_result = result;
-  ConversionStub* stub = NULL;
 
   __ convert(x->op(), conv_input, conv_result);
 
--- a/src/hotspot/cpu/aarch64/globalDefinitions_aarch64.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/aarch64/globalDefinitions_aarch64.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -34,11 +34,10 @@
 
 #define SUPPORTS_NATIVE_CX8
 
-// Aarch64 was not originally defined as multi-copy-atomic, but now is.
-// See: "Simplifying ARM Concurrency: Multicopy-atomic Axiomatic and
-// Operational Models for ARMv8"
-// So we could #define CPU_MULTI_COPY_ATOMIC but historically we have
-// not done so.
+// Aarch64 was not originally defined to be multi-copy-atomic, but now
+// is.  See: "Simplifying ARM Concurrency: Multicopy-atomic Axiomatic
+// and Operational Models for ARMv8"
+#define CPU_MULTI_COPY_ATOMIC
 
 // According to the ARMv8 ARM, "Concurrent modification and execution
 // of instructions can lead to the resulting instruction performing
--- a/src/hotspot/cpu/aarch64/icache_aarch64.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/aarch64/icache_aarch64.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,6 +1,6 @@
 /*
- * Copyright (c) 1997, 2010, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2014, Red Hat Inc. All rights reserved.
+ * Copyright (c) 1997, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2014, 2020 Red Hat Inc. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -24,7 +24,6 @@
  */
 
 #include "precompiled.hpp"
-#include "asm/macroAssembler.hpp"
 #include "runtime/icache.hpp"
 
 extern void aarch64TestHook();
@@ -36,5 +35,7 @@
 }
 
 void ICache::initialize() {
+#ifdef ASSERT
   aarch64TestHook();
+#endif
 }
--- a/src/hotspot/cpu/aarch64/jvmciCodeInstaller_aarch64.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/aarch64/jvmciCodeInstaller_aarch64.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -61,7 +61,7 @@
 #endif // ASSERT
   Handle obj = jvmci_env()->asConstant(constant, JVMCI_CHECK);
   jobject value = JNIHandles::make_local(obj());
-  MacroAssembler::patch_oop(pc, (address)obj());
+  MacroAssembler::patch_oop(pc, cast_from_oop<address>(obj()));
   int oop_index = _oop_recorder->find_index(value);
   RelocationHolder rspec = oop_Relocation::spec(oop_index);
   _instructions->relocate(pc, rspec);
--- a/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,6 +1,6 @@
 /*
- * Copyright (c) 1997, 2019, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2014, 2019, Red Hat Inc. All rights reserved.
+ * Copyright (c) 1997, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2014, 2020, Red Hat Inc. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -4977,8 +4977,6 @@
       sub(cnt2, zr, cnt2, LSL, str2_chr_shift);
     } else if (isLU) {
       ldrs(vtmp, Address(str1));
-      cmp(str1, str2);
-      br(Assembler::EQ, DONE);
       ldr(tmp2, Address(str2));
       cmp(cnt2, stub_threshold);
       br(GE, STUB);
@@ -4993,8 +4991,6 @@
       fmovd(tmp1, vtmp);
     } else { // UL case
       ldr(tmp1, Address(str1));
-      cmp(str1, str2);
-      br(Assembler::EQ, DONE);
       ldrs(vtmp, Address(str2));
       cmp(cnt2, stub_threshold);
       br(GE, STUB);
--- a/src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2003, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2003, 2020, Oracle and/or its affiliates. All rights reserved.
  * Copyright (c) 2014, 2019, Red Hat Inc. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
@@ -1333,28 +1333,16 @@
         // Arrays are passed as int, elem* pair
         out_sig_bt[argc++] = T_INT;
         out_sig_bt[argc++] = T_ADDRESS;
-        Symbol* atype = ss.as_symbol();
-        const char* at = atype->as_C_string();
-        if (strlen(at) == 2) {
-          assert(at[0] == '[', "must be");
-          switch (at[1]) {
-            case 'B': in_elem_bt[i]  = T_BYTE; break;
-            case 'C': in_elem_bt[i]  = T_CHAR; break;
-            case 'D': in_elem_bt[i]  = T_DOUBLE; break;
-            case 'F': in_elem_bt[i]  = T_FLOAT; break;
-            case 'I': in_elem_bt[i]  = T_INT; break;
-            case 'J': in_elem_bt[i]  = T_LONG; break;
-            case 'S': in_elem_bt[i]  = T_SHORT; break;
-            case 'Z': in_elem_bt[i]  = T_BOOLEAN; break;
-            default: ShouldNotReachHere();
-          }
-        }
+        ss.skip_array_prefix(1);  // skip one '['
+        assert(ss.is_primitive(), "primitive type expected");
+        in_elem_bt[i] = ss.type();
       } else {
         out_sig_bt[argc++] = in_sig_bt[i];
         in_elem_bt[i] = T_VOID;
       }
       if (in_sig_bt[i] != T_VOID) {
-        assert(in_sig_bt[i] == ss.type(), "must match");
+        assert(in_sig_bt[i] == ss.type() ||
+               in_sig_bt[i] == T_ARRAY, "must match");
         ss.next();
       }
     }
--- a/src/hotspot/cpu/arm/arm.ad	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/arm/arm.ad	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 //
-// Copyright (c) 2008, 2018, Oracle and/or its affiliates. All rights reserved.
+// Copyright (c) 2008, 2020, Oracle and/or its affiliates. All rights reserved.
 // DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 //
 // This code is free software; you can redistribute it and/or modify it
@@ -351,10 +351,7 @@
 
   // If this does safepoint polling, then do it here
   if (do_polling() && ra_->C->is_method_compilation()) {
-    // mov_slow here is usually one or two instruction
-    __ mov_address(Rtemp, (address)os::get_polling_page());
-    __ relocate(relocInfo::poll_return_type);
-    __ ldr(Rtemp, Address(Rtemp));
+    __ read_polling_page(Rtemp, relocInfo::poll_return_type);
   }
 }
 
--- a/src/hotspot/cpu/arm/c1_LIRAssembler_arm.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/arm/c1_LIRAssembler_arm.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2008, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2008, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -86,30 +86,6 @@
 //--------------fpu register translations-----------------------
 
 
-void LIR_Assembler::set_24bit_FPU() {
-  ShouldNotReachHere();
-}
-
-void LIR_Assembler::reset_FPU() {
-  ShouldNotReachHere();
-}
-
-void LIR_Assembler::fpop() {
-  Unimplemented();
-}
-
-void LIR_Assembler::fxch(int i) {
-  Unimplemented();
-}
-
-void LIR_Assembler::fld(int i) {
-  Unimplemented();
-}
-
-void LIR_Assembler::ffree(int i) {
-  Unimplemented();
-}
-
 void LIR_Assembler::breakpoint() {
   __ breakpoint();
 }
@@ -309,23 +285,16 @@
 void LIR_Assembler::return_op(LIR_Opr result) {
   // Pop the frame before safepoint polling
   __ remove_frame(initial_frame_size_in_bytes());
-
-  // mov_slow here is usually one or two instruction
-  __ mov_address(Rtemp, os::get_polling_page());
-  __ relocate(relocInfo::poll_return_type);
-  __ ldr(Rtemp, Address(Rtemp));
+  __ read_polling_page(Rtemp, relocInfo::poll_return_type);
   __ ret();
 }
 
-
 int LIR_Assembler::safepoint_poll(LIR_Opr tmp, CodeEmitInfo* info) {
-  __ mov_address(Rtemp, os::get_polling_page());
   if (info != NULL) {
     add_debug_info_for_branch(info);
   }
   int offset = __ offset();
-  __ relocate(relocInfo::poll_type);
-  __ ldr(Rtemp, Address(Rtemp));
+  __ read_polling_page(Rtemp, relocInfo::poll_type);
   return offset;
 }
 
--- a/src/hotspot/cpu/arm/globalDefinitions_arm.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/arm/globalDefinitions_arm.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2008, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2008, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -62,4 +62,6 @@
 #endif
 #endif
 
+#define THREAD_LOCAL_POLL
+
 #endif // CPU_ARM_GLOBALDEFINITIONS_ARM_HPP
--- a/src/hotspot/cpu/arm/interp_masm_arm.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/arm/interp_masm_arm.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2008, 2018, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2008, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -42,6 +42,7 @@
 #include "runtime/basicLock.hpp"
 #include "runtime/biasedLocking.hpp"
 #include "runtime/frame.inline.hpp"
+#include "runtime/safepointMechanism.hpp"
 #include "runtime/sharedRuntime.hpp"
 
 //--------------------------------------------------------------------
@@ -556,7 +557,7 @@
 
 void InterpreterMacroAssembler::dispatch_base(TosState state,
                                               DispatchTableMode table_mode,
-                                              bool verifyoop) {
+                                              bool verifyoop, bool generate_poll) {
   if (VerifyActivationFrameSize) {
     Label L;
     sub(Rtemp, FP, SP);
@@ -571,6 +572,18 @@
     interp_verify_oop(R0_tos, state, __FILE__, __LINE__);
   }
 
+  Label safepoint;
+  address* const safepoint_table = Interpreter::safept_table(state);
+  address* const table           = Interpreter::dispatch_table(state);
+  bool needs_thread_local_poll = generate_poll &&
+    SafepointMechanism::uses_thread_local_poll() && table != safepoint_table;
+
+  if (needs_thread_local_poll) {
+    NOT_PRODUCT(block_comment("Thread-local Safepoint poll"));
+    ldr(Rtemp, Address(Rthread, Thread::polling_page_offset()));
+    tbnz(Rtemp, exact_log2(SafepointMechanism::poll_bit()), safepoint);
+  }
+
   if((state == itos) || (state == btos) || (state == ztos) || (state == ctos) || (state == stos)) {
     zap_high_non_significant_bits(R0_tos);
   }
@@ -600,12 +613,18 @@
     indirect_jump(Address::indexed_ptr(Rtemp, R3_bytecode), Rtemp);
   }
 
+  if (needs_thread_local_poll) {
+    bind(safepoint);
+    lea(Rtemp, ExternalAddress((address)safepoint_table));
+    indirect_jump(Address::indexed_ptr(Rtemp, R3_bytecode), Rtemp);
+  }
+
   nop(); // to avoid filling CPU pipeline with invalid instructions
   nop();
 }
 
-void InterpreterMacroAssembler::dispatch_only(TosState state) {
-  dispatch_base(state, DispatchDefault);
+void InterpreterMacroAssembler::dispatch_only(TosState state, bool generate_poll) {
+  dispatch_base(state, DispatchDefault, true, generate_poll);
 }
 
 
@@ -617,10 +636,10 @@
   dispatch_base(state, DispatchNormal, false);
 }
 
-void InterpreterMacroAssembler::dispatch_next(TosState state, int step) {
+void InterpreterMacroAssembler::dispatch_next(TosState state, int step, bool generate_poll) {
   // load next bytecode and advance Rbcp
   ldrb(R3_bytecode, Address(Rbcp, step, pre_indexed));
-  dispatch_base(state, DispatchDefault);
+  dispatch_base(state, DispatchDefault, true, generate_poll);
 }
 
 void InterpreterMacroAssembler::narrow(Register result) {
--- a/src/hotspot/cpu/arm/interp_masm_arm.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/arm/interp_masm_arm.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2008, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2008, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -54,7 +54,7 @@
 
   // base routine for all dispatches
   typedef enum { DispatchDefault, DispatchNormal } DispatchTableMode;
-  void dispatch_base(TosState state, DispatchTableMode table_mode, bool verifyoop = true);
+  void dispatch_base(TosState state, DispatchTableMode table_mode, bool verifyoop = true, bool generate_poll = false);
 
  public:
   InterpreterMacroAssembler(CodeBuffer* code);
@@ -160,10 +160,10 @@
   // Dispatching
   void dispatch_prolog(TosState state, int step = 0);
   void dispatch_epilog(TosState state, int step = 0);
-  void dispatch_only(TosState state);                      // dispatch by R3_bytecode
-  void dispatch_only_normal(TosState state);               // dispatch normal table by R3_bytecode
+  void dispatch_only(TosState state, bool generate_poll = false);  // dispatch by R3_bytecode
+  void dispatch_only_normal(TosState state);                       // dispatch normal table by R3_bytecode
   void dispatch_only_noverify(TosState state);
-  void dispatch_next(TosState state, int step = 0);        // load R3_bytecode from [Rbcp + step] and dispatch by R3_bytecode
+  void dispatch_next(TosState state, int step = 0, bool generate_poll = false); // load R3_bytecode from [Rbcp + step] and dispatch by R3_bytecode
 
   // jump to an invoked target
   void prepare_to_jump_from_interpreted();
--- a/src/hotspot/cpu/arm/macroAssembler_arm.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/arm/macroAssembler_arm.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2008, 2018, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2008, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -2053,4 +2053,32 @@
   bind(done);
 
 }
+
+void MacroAssembler::safepoint_poll(Register tmp1, Label& slow_path) {
+  if (SafepointMechanism::uses_thread_local_poll()) {
+    ldr_u32(tmp1, Address(Rthread, Thread::polling_page_offset()));
+    tst(tmp1, exact_log2(SafepointMechanism::poll_bit()));
+    b(slow_path, eq);
+  } else {
+    ldr_global_s32(tmp1, SafepointSynchronize::address_of_state());
+    cmp(tmp1, SafepointSynchronize::_not_synchronized);
+    b(slow_path, ne);
+  }
+}
+
+void MacroAssembler::get_polling_page(Register dest) {
+  if (SafepointMechanism::uses_thread_local_poll()) {
+    ldr(dest, Address(Rthread, Thread::polling_page_offset()));
+  } else {
+    mov_address(dest, os::get_polling_page());
+  }
+}
+
+void MacroAssembler::read_polling_page(Register dest, relocInfo::relocType rtype) {
+  get_polling_page(dest);
+  relocate(rtype);
+  ldr(dest, Address(dest));
+}
+
+
 #endif // COMPILER2
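Editor's note: safepoint_poll abstracts the two polling schemes this port must support: a per-thread polling word and the legacy global SafepointSynchronize state. A stand-alone sketch of the decision it compiles down to, assuming the poll bit lives in the low bit of the word (illustrative names, not the VM's):

    #include <atomic>
    #include <cstdint>

    static std::atomic<int> g_safepoint_state{0};    // 0 standing in for _not_synchronized

    struct JavaThread { uintptr_t polling_word = 0; };

    inline bool take_safepoint_slow_path(const JavaThread& t, bool thread_local_poll) {
      if (thread_local_poll) {
        return (t.polling_word & 1u) != 0;           // per-thread poll bit armed
      }
      return g_safepoint_state.load() != 0;          // global state: safepoint in progress
    }
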
--- a/src/hotspot/cpu/arm/macroAssembler_arm.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/arm/macroAssembler_arm.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2008, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2008, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -1074,7 +1074,9 @@
   void fast_unlock(Register obj, Register box, Register scratch, Register scratch2);
 #endif
 
-
+  void safepoint_poll(Register tmp1, Label& slow_path);
+  void get_polling_page(Register dest);
+  void read_polling_page(Register dest, relocInfo::relocType rtype);
 };
 
 
--- a/src/hotspot/cpu/arm/sharedRuntime_arm.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/arm/sharedRuntime_arm.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2008, 2018, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2008, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -34,6 +34,7 @@
 #include "oops/compiledICHolder.hpp"
 #include "oops/klass.inline.hpp"
 #include "runtime/sharedRuntime.hpp"
+#include "runtime/safepointMechanism.hpp"
 #include "runtime/vframeArray.hpp"
 #include "utilities/align.hpp"
 #include "vmreg_arm.inline.hpp"
@@ -1219,20 +1220,18 @@
   }
 
   // Do a safepoint check while thread is in transition state
-  InlinedAddress safepoint_state(SafepointSynchronize::address_of_state());
   Label call_safepoint_runtime, return_to_java;
   __ mov(Rtemp, _thread_in_native_trans);
-  __ ldr_literal(R2, safepoint_state);
   __ str_32(Rtemp, Address(Rthread, JavaThread::thread_state_offset()));
 
   // make sure the store is observed before reading the SafepointSynchronize state and further mem refs
   __ membar(MacroAssembler::Membar_mask_bits(MacroAssembler::StoreLoad | MacroAssembler::StoreStore), Rtemp);
 
-  __ ldr_s32(R2, Address(R2));
+  __ safepoint_poll(R2, call_safepoint_runtime);
   __ ldr_u32(R3, Address(Rthread, JavaThread::suspend_flags_offset()));
-  __ cmp(R2, SafepointSynchronize::_not_synchronized);
-  __ cond_cmp(R3, 0, eq);
+  __ cmp(R3, 0);
   __ b(call_safepoint_runtime, ne);
+
   __ bind(return_to_java);
 
   // Perform thread state transition and reguard stack yellow pages if needed
@@ -1303,8 +1302,6 @@
   pop_result_registers(masm, ret_type);
   __ b(return_to_java);
 
-  __ bind_literal(safepoint_state);
-
   // Reguard stack pages. Save native results around a call to C runtime.
   __ bind(reguard);
   push_result_registers(masm, ret_type);
@@ -1806,15 +1803,29 @@
   oop_maps->add_gc_map(pc_offset, map);
   __ reset_last_Java_frame(Rtemp); // Rtemp free since scratched by far call
 
-  // Check for pending exception
-  __ ldr(Rtemp, Address(Rthread, Thread::pending_exception_offset()));
-  __ cmp(Rtemp, 0);
+  if (!cause_return) {
+    if (SafepointMechanism::uses_thread_local_poll()) {
+      // If our stashed return pc was modified by the runtime we avoid touching it
+      __ ldr(R3_tmp, Address(Rthread, JavaThread::saved_exception_pc_offset()));
+      __ ldr(R2_tmp, Address(SP, RegisterSaver::LR_offset * wordSize));
+      __ cmp(R2_tmp, R3_tmp);
+      // Adjust return pc forward to step over the safepoint poll instruction
+      __ add(R2_tmp, R2_tmp, 4, eq);
+      __ str(R2_tmp, Address(SP, RegisterSaver::LR_offset * wordSize), eq);
+    }
 
-  if (!cause_return) {
+    // Check for pending exception
+    __ ldr(Rtemp, Address(Rthread, Thread::pending_exception_offset()));
+    __ cmp(Rtemp, 0);
+
     RegisterSaver::restore_live_registers(masm, false);
     __ pop(PC, eq);
     __ pop(Rexception_pc);
   } else {
+    // Check for pending exception
+    __ ldr(Rtemp, Address(Rthread, Thread::pending_exception_offset()));
+    __ cmp(Rtemp, 0);
+
     RegisterSaver::restore_live_registers(masm);
     __ bx(LR, eq);
     __ mov(Rexception_pc, LR);
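Editor's note: the new !cause_return block handles a consequence of thread-local polls: when a poll traps, the pc of the poll instruction is stashed in saved_exception_pc, and unless the runtime redirected the stashed return address, it must be advanced past the poll so the thread does not re-trap on resume. ARM instructions are a fixed 4 bytes, hence the add of 4. The same logic in plain C++ (hypothetical frame slot):

    #include <cstdint>

    struct Frame { uintptr_t saved_lr; };       // the stashed return pc slot on the stack

    inline void step_over_poll(Frame& f, uintptr_t saved_exception_pc) {
      if (f.saved_lr == saved_exception_pc) {   // runtime left the pc untouched
        f.saved_lr += 4;                        // resume after the 4-byte poll instruction
      }
    }
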
--- a/src/hotspot/cpu/arm/templateInterpreterGenerator_arm.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/arm/templateInterpreterGenerator_arm.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2008, 2018, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2008, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -959,8 +959,6 @@
     // Force this write out before the read below
   __ membar(MacroAssembler::StoreLoad, Rtemp);
 
-  __ ldr_global_s32(Rtemp, SafepointSynchronize::address_of_state());
-
   // Protect the return value in the interleaved code: save it to callee-save registers.
   __ mov(Rsaved_result_lo, R0);
   __ mov(Rsaved_result_hi, R1);
@@ -973,12 +971,16 @@
 #endif // __ABI_HARD__
 
   {
-    __ ldr_u32(R3, Address(Rthread, JavaThread::suspend_flags_offset()));
-    __ cmp(Rtemp, SafepointSynchronize::_not_synchronized);
-    __ cond_cmp(R3, 0, eq);
+    Label call, skip_call;
+    __ safepoint_poll(Rtemp, call);
+    __ ldr_u32(R3, Address(Rthread, JavaThread::suspend_flags_offset()));
+    __ cmp(R3, 0);
+    __ b(skip_call, eq);
+    __ bind(call);
+    __ mov(R0, Rthread);
+    __ call(CAST_FROM_FN_PTR(address, JavaThread::check_special_condition_for_native_trans), relocInfo::none);
+    __ bind(skip_call);
 
-  __ mov(R0, Rthread, ne);
-  __ call(CAST_FROM_FN_PTR(address, JavaThread::check_special_condition_for_native_trans), relocInfo::none, ne);
 #if R9_IS_SCRATCHED
   __ restore_method();
 #endif
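Editor's note: the rewritten native-transition check trades the old fused conditional compare for two explicit tests: branch to call when safepoint_poll takes its slow path, or when suspend_flags is non-zero; otherwise jump over the runtime call. Schematically (a sketch, with poll_armed standing in for whatever safepoint_poll tests):

    struct JavaThread { unsigned suspend_flags = 0; bool poll_armed = false; };

    // Stub for the real VM entry point of the same name.
    inline void check_special_condition_for_native_trans(JavaThread*) {}

    inline void native_transition_check(JavaThread* t) {
      if (t->poll_armed || t->suspend_flags != 0) {   // the 'call' label
        check_special_condition_for_native_trans(t);
      }
      // 'skip_call' falls through here
    }
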
--- a/src/hotspot/cpu/arm/templateTable_arm.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/arm/templateTable_arm.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2008, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2008, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -2168,7 +2168,7 @@
   }
 
   // continue with the bytecode @ target
-  __ dispatch_only(vtos);
+  __ dispatch_only(vtos, true);
 
   if (UseLoopCounter) {
     if (ProfileInterpreter) {
@@ -2362,7 +2362,7 @@
 
   // load the next bytecode to R3_bytecode and advance Rbcp
   __ ldrb(R3_bytecode, Address(Rbcp, Roffset, lsl, 0, pre_indexed));
-  __ dispatch_only(vtos);
+  __ dispatch_only(vtos, true);
 
 }
 
@@ -2439,7 +2439,7 @@
 
   // load the next bytecode to R3_bytecode and advance Rbcp
   __ ldrb(R3_bytecode, Address(Rbcp, Roffset, lsl, 0, pre_indexed));
-  __ dispatch_only(vtos);
+  __ dispatch_only(vtos, true);
 }
 
 
@@ -2533,7 +2533,7 @@
   __ profile_switch_case(R0, i, R1, i);
   __ byteswap_u32(offset, temp1, temp2);
   __ ldrb(R3_bytecode, Address(Rbcp, offset, lsl, 0, pre_indexed));
-  __ dispatch_only(vtos);
+  __ dispatch_only(vtos, true);
 
   // default case
   __ bind(default_case);
@@ -2541,7 +2541,7 @@
   __ ldr_s32(offset, Address(array, -2*BytesPerInt));
   __ byteswap_u32(offset, temp1, temp2);
   __ ldrb(R3_bytecode, Address(Rbcp, offset, lsl, 0, pre_indexed));
-  __ dispatch_only(vtos);
+  __ dispatch_only(vtos, true);
 }
 
 
--- a/src/hotspot/cpu/ppc/c1_LIRAssembler_ppc.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/ppc/c1_LIRAssembler_ppc.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1726,12 +1726,6 @@
 }
 
 
-void LIR_Assembler::fpop() {
-  Unimplemented();
-  // do nothing
-}
-
-
 void LIR_Assembler::intrinsic_op(LIR_Code code, LIR_Opr value, LIR_Opr thread, LIR_Opr dest, LIR_Op* op) {
   switch (code) {
     case lir_sqrt: {
@@ -2691,16 +2685,6 @@
   }
 }
 
-
-void LIR_Assembler::set_24bit_FPU() {
-  Unimplemented();
-}
-
-void LIR_Assembler::reset_FPU() {
-  Unimplemented();
-}
-
-
 void LIR_Assembler::breakpoint() {
   __ illtrap();
 }
@@ -2894,19 +2878,6 @@
 }
 
 
-void LIR_Assembler::fxch(int i) {
-  Unimplemented();
-}
-
-void LIR_Assembler::fld(int i) {
-  Unimplemented();
-}
-
-void LIR_Assembler::ffree(int i) {
-  Unimplemented();
-}
-
-
 void LIR_Assembler::rt_call(LIR_Opr result, address dest,
                             const LIR_OprList* args, LIR_Opr tmp, CodeEmitInfo* info) {
   // Stubs: Called via rt_call, but dest is a stub address (no function descriptor).
--- a/src/hotspot/cpu/ppc/sharedRuntime_ppc.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/ppc/sharedRuntime_ppc.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 1997, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2020, Oracle and/or its affiliates. All rights reserved.
  * Copyright (c) 2012, 2019 SAP SE. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
@@ -1934,27 +1934,15 @@
     for (int i = 0; i < total_in_args ; i++, o++) {
       if (in_sig_bt[i] == T_ARRAY) {
         // Arrays are passed as int, elem* pair
-        Symbol* atype = ss.as_symbol();
-        const char* at = atype->as_C_string();
-        if (strlen(at) == 2) {
-          assert(at[0] == '[', "must be");
-          switch (at[1]) {
-            case 'B': in_elem_bt[o] = T_BYTE; break;
-            case 'C': in_elem_bt[o] = T_CHAR; break;
-            case 'D': in_elem_bt[o] = T_DOUBLE; break;
-            case 'F': in_elem_bt[o] = T_FLOAT; break;
-            case 'I': in_elem_bt[o] = T_INT; break;
-            case 'J': in_elem_bt[o] = T_LONG; break;
-            case 'S': in_elem_bt[o] = T_SHORT; break;
-            case 'Z': in_elem_bt[o] = T_BOOLEAN; break;
-            default: ShouldNotReachHere();
-          }
-        }
+        ss.skip_array_prefix(1);  // skip one '['
+        assert(ss.is_primitive(), "primitive type expected");
+        in_elem_bt[o] = ss.type();
       } else {
         in_elem_bt[o] = T_VOID;
       }
       if (in_sig_bt[i] != T_VOID) {
-        assert(in_sig_bt[i] == ss.type(), "must match");
+        assert(in_sig_bt[i] == ss.type() ||
+               in_sig_bt[i] == T_ARRAY, "must match");
         ss.next();
       }
     }
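Editor's note: this hunk (and the matching ones in the s390, sparc, and x86 wrappers below) replaces hand-rolled descriptor parsing with SignatureStream helpers: skip_array_prefix(1) consumes one '[' and ss.type() then yields the element's BasicType directly. For a one-dimensional primitive array descriptor such as "[I", the deleted switch boils down to this stand-alone mapping (illustrative, not the SignatureStream implementation):

    enum BasicType { T_BOOLEAN, T_BYTE, T_CHAR, T_SHORT, T_INT,
                     T_LONG, T_FLOAT, T_DOUBLE, T_ILLEGAL };

    inline BasicType array_element_type(const char* descriptor) {
      if (descriptor[0] != '[') return T_ILLEGAL;    // what skip_array_prefix(1) consumes
      switch (descriptor[1]) {                       // what ss.type() reports
        case 'Z': return T_BOOLEAN;  case 'B': return T_BYTE;
        case 'C': return T_CHAR;     case 'S': return T_SHORT;
        case 'I': return T_INT;      case 'J': return T_LONG;
        case 'F': return T_FLOAT;    case 'D': return T_DOUBLE;
        default:  return T_ILLEGAL;
      }
    }

The relaxed assert (in_sig_bt[i] == ss.type() || in_sig_bt[i] == T_ARRAY) follows from this: for array slots the stream is now positioned on the element type, not T_ARRAY.
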
--- a/src/hotspot/cpu/s390/assembler_s390.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/s390/assembler_s390.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -351,14 +351,6 @@
     : _address((address) addr),
       _rspec(rspec_from_rtype(rtype, (address) addr)) {}
 
-  AddressLiteral(oop addr, relocInfo::relocType rtype = relocInfo::none)
-    : _address((address) addr),
-      _rspec(rspec_from_rtype(rtype, (address) addr)) {}
-
-  AddressLiteral(oop* addr, relocInfo::relocType rtype = relocInfo::none)
-    : _address((address) addr),
-      _rspec(rspec_from_rtype(rtype, (address) addr)) {}
-
   AddressLiteral(float* addr, relocInfo::relocType rtype = relocInfo::none)
     : _address((address) addr),
       _rspec(rspec_from_rtype(rtype, (address) addr)) {}
@@ -390,7 +382,6 @@
 
  public:
   ExternalAddress(address target) : AddressLiteral(target, reloc_for_target(          target)) {}
-  ExternalAddress(oop*    target) : AddressLiteral(target, reloc_for_target((address) target)) {}
 };
 
 // Argument is an abstraction used to represent an outgoing actual
--- a/src/hotspot/cpu/s390/c1_LIRAssembler_s390.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/s390/c1_LIRAssembler_s390.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1698,10 +1698,6 @@
   }
 }
 
-void LIR_Assembler::fpop() {
-  // do nothing
-}
-
 void LIR_Assembler::intrinsic_op(LIR_Code code, LIR_Opr value, LIR_Opr thread, LIR_Opr dest, LIR_Op* op) {
   switch (code) {
     case lir_sqrt: {
@@ -2739,14 +2735,6 @@
   }
 }
 
-void LIR_Assembler::set_24bit_FPU() {
-  ShouldNotCallThis(); // x86 only
-}
-
-void LIR_Assembler::reset_FPU() {
-  ShouldNotCallThis(); // x86 only
-}
-
 void LIR_Assembler::breakpoint() {
   Unimplemented();
   //  __ breakpoint_trap();
@@ -2887,18 +2875,6 @@
   }
 }
 
-void LIR_Assembler::fxch(int i) {
-  ShouldNotCallThis(); // x86 only
-}
-
-void LIR_Assembler::fld(int i) {
-  ShouldNotCallThis(); // x86 only
-}
-
-void LIR_Assembler::ffree(int i) {
-  ShouldNotCallThis(); // x86 only
-}
-
 void LIR_Assembler::rt_call(LIR_Opr result, address dest,
                             const LIR_OprList* args, LIR_Opr tmp, CodeEmitInfo* info) {
   assert(!tmp->is_valid(), "don't need temporary");
--- a/src/hotspot/cpu/s390/copy_s390.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/s390/copy_s390.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,6 +1,6 @@
 /*
- * Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2016 SAP SE. All rights reserved.
+ * Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2016, 2020 SAP SE. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -1095,12 +1095,6 @@
   pd_zero_to_bytes(tohw, count*HeapWordSize);
 }
 
-// Delegate to pd_zero_to_bytes. It also works HeapWord-atomic.
-static void pd_zero_to_words_large(HeapWord* tohw, size_t count) {
-  // JVM2008: generally frequent, some tests show very frequent calls.
-  pd_zero_to_bytes(tohw, count*HeapWordSize);
-}
-
 static void pd_zero_to_bytes(void* to, size_t count) {
   // JVM2008: some calls (generally), some tests frequent
 #ifdef USE_INLINE_ASM
--- a/src/hotspot/cpu/s390/sharedRuntime_s390.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/s390/sharedRuntime_s390.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
  * Copyright (c) 2016, 2019, SAP SE. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
@@ -1626,27 +1626,15 @@
     for (int i = 0; i < total_in_args; i++, o++) {
       if (in_sig_bt[i] == T_ARRAY) {
         // Arrays are passed as tuples (int, elem*).
-        Symbol* atype = ss.as_symbol();
-        const char* at = atype->as_C_string();
-        if (strlen(at) == 2) {
-          assert(at[0] == '[', "must be");
-          switch (at[1]) {
-            case 'B': in_elem_bt[o]  = T_BYTE; break;
-            case 'C': in_elem_bt[o]  = T_CHAR; break;
-            case 'D': in_elem_bt[o]  = T_DOUBLE; break;
-            case 'F': in_elem_bt[o]  = T_FLOAT; break;
-            case 'I': in_elem_bt[o]  = T_INT; break;
-            case 'J': in_elem_bt[o]  = T_LONG; break;
-            case 'S': in_elem_bt[o]  = T_SHORT; break;
-            case 'Z': in_elem_bt[o]  = T_BOOLEAN; break;
-            default: ShouldNotReachHere();
-          }
-        }
+        ss.skip_array_prefix(1);  // skip one '['
+        assert(ss.is_primitive(), "primitive type expected");
+        in_elem_bt[o] = ss.type();
       } else {
         in_elem_bt[o] = T_VOID;
       }
       if (in_sig_bt[i] != T_VOID) {
-        assert(in_sig_bt[i] == ss.type(), "must match");
+        assert(in_sig_bt[i] == ss.type() ||
+               in_sig_bt[i] == T_ARRAY, "must match");
         ss.next();
       }
     }
--- a/src/hotspot/cpu/sparc/c1_LIRAssembler_sparc.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/sparc/c1_LIRAssembler_sparc.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1732,12 +1732,6 @@
   }
 }
 
-
-void LIR_Assembler::fpop() {
-  // do nothing
-}
-
-
 void LIR_Assembler::intrinsic_op(LIR_Code code, LIR_Opr value, LIR_Opr thread, LIR_Opr dest, LIR_Op* op) {
   switch (code) {
     case lir_tan: {
@@ -2658,16 +2652,6 @@
   }
 }
 
-void LIR_Assembler::set_24bit_FPU() {
-  Unimplemented();
-}
-
-
-void LIR_Assembler::reset_FPU() {
-  Unimplemented();
-}
-
-
 void LIR_Assembler::breakpoint() {
   __ breakpoint_trap();
 }
@@ -3057,19 +3041,6 @@
   }
 }
 
-
-void LIR_Assembler::fxch(int i) {
-  Unimplemented();
-}
-
-void LIR_Assembler::fld(int i) {
-  Unimplemented();
-}
-
-void LIR_Assembler::ffree(int i) {
-  Unimplemented();
-}
-
 void LIR_Assembler::rt_call(LIR_Opr result, address dest,
                             const LIR_OprList* args, LIR_Opr tmp, CodeEmitInfo* info) {
 
--- a/src/hotspot/cpu/sparc/globalDefinitions_sparc.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/sparc/globalDefinitions_sparc.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 1999, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1999, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -42,12 +42,14 @@
 #if defined(TIERED)
   // tiered, 64-bit, large machine
   #define DEFAULT_CACHE_LINE_SIZE 128
+  #define OM_CACHE_LINE_SIZE 64
 #elif defined(COMPILER1)
   // pure C1, 32-bit, small machine
   #define DEFAULT_CACHE_LINE_SIZE 16
 #elif defined(COMPILER2)
   // pure C2, 64-bit, large machine
   #define DEFAULT_CACHE_LINE_SIZE 128
+  #define OM_CACHE_LINE_SIZE 64
 #endif
 
 #if defined(SOLARIS)
--- a/src/hotspot/cpu/sparc/sharedRuntime_sparc.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/sparc/sharedRuntime_sparc.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2003, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2003, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -1907,28 +1907,16 @@
         // Arrays are passed as int, elem* pair
         out_sig_bt[argc++] = T_INT;
         out_sig_bt[argc++] = T_ADDRESS;
-        Symbol* atype = ss.as_symbol();
-        const char* at = atype->as_C_string();
-        if (strlen(at) == 2) {
-          assert(at[0] == '[', "must be");
-          switch (at[1]) {
-            case 'B': in_elem_bt[i]  = T_BYTE; break;
-            case 'C': in_elem_bt[i]  = T_CHAR; break;
-            case 'D': in_elem_bt[i]  = T_DOUBLE; break;
-            case 'F': in_elem_bt[i]  = T_FLOAT; break;
-            case 'I': in_elem_bt[i]  = T_INT; break;
-            case 'J': in_elem_bt[i]  = T_LONG; break;
-            case 'S': in_elem_bt[i]  = T_SHORT; break;
-            case 'Z': in_elem_bt[i]  = T_BOOLEAN; break;
-            default: ShouldNotReachHere();
-          }
-        }
+        ss.skip_array_prefix(1);  // skip one '['
+        assert(ss.is_primitive(), "primitive type expected");
+        in_elem_bt[i] = ss.type();
       } else {
         out_sig_bt[argc++] = in_sig_bt[i];
         in_elem_bt[i] = T_VOID;
       }
       if (in_sig_bt[i] != T_VOID) {
-        assert(in_sig_bt[i] == ss.type(), "must match");
+        assert(in_sig_bt[i] == ss.type() ||
+               in_sig_bt[i] == T_ARRAY, "must match");
         ss.next();
       }
     }
--- a/src/hotspot/cpu/x86/assembler_x86.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/assembler_x86.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -7330,9 +7330,7 @@
  emit_int8(0x48 | dst->encoding());
 }
 
-#endif // _LP64
-
-// 64bit typically doesn't use the x87 but needs to for the trig funcs
+// 64bit doesn't use the x87
 
 void Assembler::fabs() {
   emit_int8((unsigned char)0xD9);
@@ -7767,6 +7765,7 @@
   emit_int8((unsigned char)0xD9);
   emit_int8((unsigned char)0xEA);
 }
+#endif // !_LP64
 
 // SSE SIMD prefix byte values corresponding to VexSimdPrefix encoding.
 static int simd_pre[4] = { 0, 0x66, 0xF3, 0xF2 };
@@ -8834,6 +8833,18 @@
   emit_operand(dst, src);
 }
 
+void Assembler::cvttsd2siq(Register dst, Address src) {
+  NOT_LP64(assert(VM_Version::supports_sse2(), ""));
+  // F2 REX.W 0F 2C /r
+  // CVTTSD2SI r64, xmm1/m64
+  InstructionMark im(this);
+  emit_int8((unsigned char)0xF2);
+  prefix(REX_W);
+  emit_int8(0x0F);
+  emit_int8(0x2C);
+  emit_operand(dst, src);
+}
+
 void Assembler::cvttsd2siq(Register dst, XMMRegister src) {
   NOT_LP64(assert(VM_Version::supports_sse2(), ""));
   InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
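Editor's note: the new memory-operand overload of cvttsd2siq spells out its own encoding, F2 REX.W 0F 2C /r. As a sketch, the fixed bytes it emits before handing the operand to emit_operand are:

    #include <cstdint>
    #include <vector>

    // CVTTSD2SI r64, m64: F2 prefix, REX.W (0x48 with no register-extension
    // bits), opcode 0F 2C; the ModRM/SIB/displacement for dst and src follow.
    inline void emit_cvttsd2siq_head(std::vector<uint8_t>& code) {
      code.insert(code.end(), {0xF2, 0x48, 0x0F, 0x2C});
    }
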
--- a/src/hotspot/cpu/x86/assembler_x86.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/assembler_x86.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1110,6 +1110,7 @@
   // Convert with Truncation Scalar Double-Precision Floating-Point Value to Doubleword Integer
   void cvttsd2sil(Register dst, Address src);
   void cvttsd2sil(Register dst, XMMRegister src);
+  void cvttsd2siq(Register dst, Address src);
   void cvttsd2siq(Register dst, XMMRegister src);
 
   // Convert with Truncation Scalar Single-Precision Floating-Point Value to Doubleword Integer
@@ -1137,6 +1138,7 @@
 
   void emms();
 
+#ifndef _LP64
   void fabs();
 
   void fadd(int i);
@@ -1270,16 +1272,17 @@
 
   void fxch(int i = 1);
 
-  void fxrstor(Address src);
-  void xrstor(Address src);
-
-  void fxsave(Address dst);
-  void xsave(Address dst);
-
   void fyl2x();
   void frndint();
   void f2xm1();
   void fldl2e();
+#endif // !_LP64
+
+  void fxrstor(Address src);
+  void xrstor(Address src);
+
+  void fxsave(Address dst);
+  void xsave(Address dst);
 
   void hlt();
 
--- a/src/hotspot/cpu/x86/c1_CodeStubs_x86.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/c1_CodeStubs_x86.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -37,6 +37,7 @@
 
 #define __ ce->masm()->
 
+#ifndef _LP64
 float ConversionStub::float_zero = 0.0;
 double ConversionStub::double_zero = 0.0;
 
@@ -52,7 +53,6 @@
     __ comisd(input()->as_xmm_double_reg(),
               ExternalAddress((address)&double_zero));
   } else {
-    LP64_ONLY(ShouldNotReachHere());
     __ push(rax);
     __ ftst();
     __ fnstsw_ax();
@@ -76,6 +76,7 @@
   __ bind(do_return);
   __ jmp(_continuation);
 }
+#endif // !_LP64
 
 void CounterOverflowStub::emit_code(LIR_Assembler* ce) {
   __ bind(_entry);
--- a/src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -158,15 +158,7 @@
   }
 }
 
-
-void LIR_Assembler::set_24bit_FPU() {
-  __ fldcw(ExternalAddress(StubRoutines::addr_fpu_cntrl_wrd_24()));
-}
-
-void LIR_Assembler::reset_FPU() {
-  __ fldcw(ExternalAddress(StubRoutines::addr_fpu_cntrl_wrd_std()));
-}
-
+#ifndef _LP64
 void LIR_Assembler::fpop() {
   __ fpop();
 }
@@ -182,6 +174,7 @@
 void LIR_Assembler::ffree(int i) {
   __ ffree(i);
 }
+#endif // !_LP64
 
 void LIR_Assembler::breakpoint() {
   __ int3();
@@ -670,6 +663,7 @@
                    InternalAddress(float_constant(c->as_jfloat())));
         }
       } else {
+#ifndef _LP64
         assert(dest->is_single_fpu(), "must be");
         assert(dest->fpu_regnr() == 0, "dest must be TOS");
         if (c->is_zero_float()) {
@@ -679,6 +673,9 @@
         } else {
           __ fld_s (InternalAddress(float_constant(c->as_jfloat())));
         }
+#else
+        ShouldNotReachHere();
+#endif // !_LP64
       }
       break;
     }
@@ -692,6 +689,7 @@
                     InternalAddress(double_constant(c->as_jdouble())));
         }
       } else {
+#ifndef _LP64
         assert(dest->is_double_fpu(), "must be");
         assert(dest->fpu_regnrLo() == 0, "dest must be TOS");
         if (c->is_zero_double()) {
@@ -701,6 +699,9 @@
         } else {
           __ fld_d (InternalAddress(double_constant(c->as_jdouble())));
         }
+#else
+        ShouldNotReachHere();
+#endif // !_LP64
       }
       break;
     }
@@ -892,6 +893,7 @@
     }
 #endif // LP64
 
+#ifndef _LP64
     // special moves from fpu-register to xmm-register
     // necessary for method results
   } else if (src->is_single_xmm() && !dest->is_single_xmm()) {
@@ -907,6 +909,12 @@
     __ fstp_d(Address(rsp, 0));
     __ movdbl(dest->as_xmm_double_reg(), Address(rsp, 0));
 
+  // move between fpu-registers (no instruction necessary because of fpu-stack)
+  } else if (dest->is_single_fpu() || dest->is_double_fpu()) {
+    assert(src->is_single_fpu() || src->is_double_fpu(), "must match");
+    assert(src->fpu() == dest->fpu(), "currently should be nothing to do");
+#endif // !_LP64
+
     // move between xmm-registers
   } else if (dest->is_single_xmm()) {
     assert(src->is_single_xmm(), "must match");
@@ -915,10 +923,6 @@
     assert(src->is_double_xmm(), "must match");
     __ movdbl(dest->as_xmm_double_reg(), src->as_xmm_double_reg());
 
-    // move between fpu-registers (no instruction necessary because of fpu-stack)
-  } else if (dest->is_single_fpu() || dest->is_double_fpu()) {
-    assert(src->is_single_fpu() || src->is_double_fpu(), "must match");
-    assert(src->fpu() == dest->fpu(), "currently should be nothing to do");
   } else {
     ShouldNotReachHere();
   }
@@ -953,6 +957,7 @@
     Address dst_addr = frame_map()->address_for_slot(dest->double_stack_ix());
     __ movdbl(dst_addr, src->as_xmm_double_reg());
 
+#ifndef _LP64
   } else if (src->is_single_fpu()) {
     assert(src->fpu_regnr() == 0, "argument must be on TOS");
     Address dst_addr = frame_map()->address_for_slot(dest->single_stack_ix());
@@ -964,6 +969,7 @@
     Address dst_addr = frame_map()->address_for_slot(dest->double_stack_ix());
     if (pop_fpu_stack)     __ fstp_d (dst_addr);
     else                   __ fst_d  (dst_addr);
+#endif // !_LP64
 
   } else {
     ShouldNotReachHere();
@@ -998,6 +1004,10 @@
   int null_check_here = code_offset();
   switch (type) {
     case T_FLOAT: {
+#ifdef _LP64
+      assert(src->is_single_xmm(), "not a float");
+      __ movflt(as_Address(to_addr), src->as_xmm_float_reg());
+#else
       if (src->is_single_xmm()) {
         __ movflt(as_Address(to_addr), src->as_xmm_float_reg());
       } else {
@@ -1006,10 +1016,15 @@
         if (pop_fpu_stack)      __ fstp_s(as_Address(to_addr));
         else                    __ fst_s (as_Address(to_addr));
       }
+#endif // _LP64
       break;
     }
 
     case T_DOUBLE: {
+#ifdef _LP64
+      assert(src->is_double_xmm(), "not a double");
+      __ movdbl(as_Address(to_addr), src->as_xmm_double_reg());
+#else
       if (src->is_double_xmm()) {
         __ movdbl(as_Address(to_addr), src->as_xmm_double_reg());
       } else {
@@ -1018,6 +1033,7 @@
         if (pop_fpu_stack)      __ fstp_d(as_Address(to_addr));
         else                    __ fst_d (as_Address(to_addr));
       }
+#endif // _LP64
       break;
     }
 
@@ -1134,6 +1150,7 @@
     Address src_addr = frame_map()->address_for_slot(src->double_stack_ix());
     __ movdbl(dest->as_xmm_double_reg(), src_addr);
 
+#ifndef _LP64
   } else if (dest->is_single_fpu()) {
     assert(dest->fpu_regnr() == 0, "dest must be TOS");
     Address src_addr = frame_map()->address_for_slot(src->single_stack_ix());
@@ -1143,6 +1160,7 @@
     assert(dest->fpu_regnrLo() == 0, "dest must be TOS");
     Address src_addr = frame_map()->address_for_slot(src->double_stack_ix());
     __ fld_d(src_addr);
+#endif // !_LP64
 
   } else {
     ShouldNotReachHere();
@@ -1226,9 +1244,13 @@
       if (dest->is_single_xmm()) {
         __ movflt(dest->as_xmm_float_reg(), from_addr);
       } else {
+#ifndef _LP64
         assert(dest->is_single_fpu(), "must be");
         assert(dest->fpu_regnr() == 0, "dest must be TOS");
         __ fld_s(from_addr);
+#else
+        ShouldNotReachHere();
+#endif // !LP64
       }
       break;
     }
@@ -1237,9 +1259,13 @@
       if (dest->is_double_xmm()) {
         __ movdbl(dest->as_xmm_double_reg(), from_addr);
       } else {
+#ifndef _LP64
         assert(dest->is_double_fpu(), "must be");
         assert(dest->fpu_regnrLo() == 0, "dest must be TOS");
         __ fld_d(from_addr);
+#else
+        ShouldNotReachHere();
+#endif // !LP64
       }
       break;
     }
@@ -1495,6 +1521,47 @@
       break;
 
 
+#ifdef _LP64
+    case Bytecodes::_f2d:
+      __ cvtss2sd(dest->as_xmm_double_reg(), src->as_xmm_float_reg());
+      break;
+
+    case Bytecodes::_d2f:
+      __ cvtsd2ss(dest->as_xmm_float_reg(), src->as_xmm_double_reg());
+      break;
+
+    case Bytecodes::_i2f:
+      __ cvtsi2ssl(dest->as_xmm_float_reg(), src->as_register());
+      break;
+
+    case Bytecodes::_i2d:
+      __ cvtsi2sdl(dest->as_xmm_double_reg(), src->as_register());
+      break;
+
+    case Bytecodes::_l2f:
+      __ cvtsi2ssq(dest->as_xmm_float_reg(), src->as_register_lo());
+      break;
+
+    case Bytecodes::_l2d:
+      __ cvtsi2sdq(dest->as_xmm_double_reg(), src->as_register_lo());
+      break;
+
+    case Bytecodes::_f2i:
+      __ convert_f2i(dest->as_register(), src->as_xmm_float_reg());
+      break;
+
+    case Bytecodes::_d2i:
+      __ convert_d2i(dest->as_register(), src->as_xmm_double_reg());
+      break;
+
+    case Bytecodes::_f2l:
+      __ convert_f2l(dest->as_register_lo(), src->as_xmm_float_reg());
+      break;
+
+    case Bytecodes::_d2l:
+      __ convert_d2l(dest->as_register_lo(), src->as_xmm_double_reg());
+      break;
+#else
     case Bytecodes::_f2d:
     case Bytecodes::_d2f:
       if (dest->is_single_xmm()) {
@@ -1520,6 +1587,16 @@
       }
       break;
 
+    case Bytecodes::_l2f:
+    case Bytecodes::_l2d:
+      assert(!dest->is_xmm_register(), "result in xmm register not supported (no SSE instruction present)");
+      assert(dest->fpu() == 0, "result must be on TOS");
+      __ movptr(Address(rsp, 0),          src->as_register_lo());
+      __ movl(Address(rsp, BytesPerWord), src->as_register_hi());
+      __ fild_d(Address(rsp, 0));
+      // float result is rounded later through spilling
+      break;
+
     case Bytecodes::_f2i:
     case Bytecodes::_d2i:
       if (src->is_single_xmm()) {
@@ -1533,7 +1609,6 @@
         __ movl(dest->as_register(), Address(rsp, 0));
         __ fldcw(ExternalAddress(StubRoutines::addr_fpu_cntrl_wrd_std()));
       }
-
       // IA32 conversion instructions do not match JLS for overflow, underflow and NaN -> fixup in stub
       assert(op->stub() != NULL, "stub required");
       __ cmpl(dest->as_register(), 0x80000000);
@@ -1541,17 +1616,6 @@
       __ bind(*op->stub()->continuation());
       break;
 
-    case Bytecodes::_l2f:
-    case Bytecodes::_l2d:
-      assert(!dest->is_xmm_register(), "result in xmm register not supported (no SSE instruction present)");
-      assert(dest->fpu() == 0, "result must be on TOS");
-
-      __ movptr(Address(rsp, 0),            src->as_register_lo());
-      NOT_LP64(__ movl(Address(rsp, BytesPerWord), src->as_register_hi()));
-      __ fild_d(Address(rsp, 0));
-      // float result is rounded later through spilling
-      break;
-
     case Bytecodes::_f2l:
     case Bytecodes::_d2l:
       assert(!src->is_xmm_register(), "input in xmm register not supported (no SSE instruction present)");
@@ -1563,6 +1627,7 @@
         __ call(RuntimeAddress(Runtime1::entry_for(Runtime1::fpu2long_stub_id)));
       }
       break;
+#endif // _LP64
 
     default: ShouldNotReachHere();
   }
@@ -2222,6 +2287,7 @@
       }
     }
 
+#ifndef _LP64
   } else if (left->is_single_fpu()) {
     assert(dest->is_single_fpu(),  "fpu stack allocation required");
 
@@ -2297,6 +2363,7 @@
       __ fld_x(ExternalAddress(StubRoutines::addr_fpu_subnormal_bias2()));
       __ fmulp(dest->fpu_regnrLo() + 1);
     }
+#endif // !_LP64
 
   } else if (left->is_single_stack() || left->is_address()) {
     assert(left == dest, "left and dest must be equal");
@@ -2339,6 +2406,7 @@
   }
 }
 
+#ifndef _LP64
 void LIR_Assembler::arith_fpu_implementation(LIR_Code code, int left_index, int right_index, int dest_index, bool pop_fpu_stack) {
   assert(pop_fpu_stack  || (left_index     == dest_index || right_index     == dest_index), "invalid LIR");
   assert(!pop_fpu_stack || (left_index - 1 == dest_index || right_index - 1 == dest_index), "invalid LIR");
@@ -2396,6 +2464,7 @@
       ShouldNotReachHere();
   }
 }
+#endif // !_LP64
 
 
 void LIR_Assembler::intrinsic_op(LIR_Code code, LIR_Opr value, LIR_Opr tmp, LIR_Opr dest, LIR_Op* op) {
@@ -2425,6 +2494,7 @@
       default      : ShouldNotReachHere();
     }
 
+#ifndef _LP64
   } else if (value->is_double_fpu()) {
     assert(value->fpu_regnrLo() == 0 && dest->fpu_regnrLo() == 0, "both must be on TOS");
     switch(code) {
@@ -2432,6 +2502,7 @@
       case lir_sqrt  : __ fsqrt(); break;
       default      : ShouldNotReachHere();
     }
+#endif // !_LP64
   } else {
     Unimplemented();
   }
@@ -2740,10 +2811,12 @@
       ShouldNotReachHere();
     }
 
+#ifndef _LP64
   } else if(opr1->is_single_fpu() || opr1->is_double_fpu()) {
     assert(opr1->is_fpu_register() && opr1->fpu() == 0, "currently left-hand side must be on TOS (relax this restriction)");
     assert(opr2->is_fpu_register(), "both must be registers");
     __ fcmp(noreg, opr2->fpu(), op->fpu_pop_count() > 0, op->fpu_pop_count() > 1);
+#endif // !_LP64
 
   } else if (opr1->is_address() && opr2->is_constant()) {
     LIR_Const* c = opr2->as_constant_ptr();
@@ -2787,12 +2860,16 @@
       __ cmpsd2int(left->as_xmm_double_reg(), right->as_xmm_double_reg(), dst->as_register(), code == lir_ucmp_fd2i);
 
     } else {
+#ifdef _LP64
+      ShouldNotReachHere();
+#else
       assert(left->is_single_fpu() || left->is_double_fpu(), "must be");
       assert(right->is_single_fpu() || right->is_double_fpu(), "must match");
 
       assert(left->fpu() == 0, "left must be on TOS");
       __ fcmp2int(dst->as_register(), code == lir_ucmp_fd2i, right->fpu(),
                   op->fpu_pop_count() > 0, op->fpu_pop_count() > 1);
+#endif // LP64
     }
   } else {
     assert(code == lir_cmp_l2i, "check");
@@ -3809,10 +3886,12 @@
       __ xorpd(dest->as_xmm_double_reg(),
                ExternalAddress((address)double_signflip_pool));
     }
+#ifndef _LP64
   } else if (left->is_single_fpu() || left->is_double_fpu()) {
     assert(left->fpu() == 0, "arg must be on TOS");
     assert(dest->fpu() == 0, "dest must be TOS");
     __ fchs();
+#endif // !_LP64
 
   } else {
     ShouldNotReachHere();
@@ -3882,6 +3961,7 @@
       ShouldNotReachHere();
     }
 
+#ifndef _LP64
   } else if (src->is_double_fpu()) {
     assert(src->fpu_regnrLo() == 0, "must be TOS");
     if (dest->is_double_stack()) {
@@ -3901,6 +3981,8 @@
     } else {
       ShouldNotReachHere();
     }
+#endif // !_LP64
+
   } else {
     ShouldNotReachHere();
   }
--- a/src/hotspot/cpu/x86/c1_LIRAssembler_x86.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/c1_LIRAssembler_x86.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -29,8 +29,6 @@
 
   Address::ScaleFactor array_element_size(BasicType type) const;
 
-  void arith_fpu_implementation(LIR_Code code, int left_index, int right_index, int dest_index, bool pop_fpu_stack);
-
   // helper functions which checks for overflow and sets bailout if it
   // occurs.  Always returns a valid embeddable pointer but in the
   // bailout case the pointer won't be to unique storage.
@@ -62,4 +60,13 @@
   void store_parameter(jobject c,   int offset_from_esp_in_words);
   void store_parameter(Metadata* c, int offset_from_esp_in_words);
 
+#ifndef _LP64
+  void arith_fpu_implementation(LIR_Code code, int left_index, int right_index, int dest_index, bool pop_fpu_stack);
+
+  void fpop();
+  void fxch(int i);
+  void fld(int i);
+  void ffree(int i);
+#endif // !_LP64
+
 #endif // CPU_X86_C1_LIRASSEMBLER_X86_HPP
--- a/src/hotspot/cpu/x86/c1_LIRGenerator_x86.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/c1_LIRGenerator_x86.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -386,6 +386,42 @@
     tmp = new_register(T_DOUBLE);
   }
 
+#ifdef _LP64
+  if (x->op() == Bytecodes::_frem || x->op() == Bytecodes::_drem) {
+    // frem and drem are implemented as a direct call into the runtime.
+    LIRItem left(x->x(), this);
+    LIRItem right(x->y(), this);
+
+    BasicType bt = as_BasicType(x->type());
+    BasicTypeList signature(2);
+    signature.append(bt);
+    signature.append(bt);
+    CallingConvention* cc = frame_map()->c_calling_convention(&signature);
+
+    const LIR_Opr result_reg = result_register_for(x->type());
+    left.load_item_force(cc->at(0));
+    right.load_item_force(cc->at(1));
+
+    address entry = NULL;
+    switch (x->op()) {
+      case Bytecodes::_frem:
+        entry = CAST_FROM_FN_PTR(address, SharedRuntime::frem);
+        break;
+      case Bytecodes::_drem:
+        entry = CAST_FROM_FN_PTR(address, SharedRuntime::drem);
+        break;
+      default:
+        ShouldNotReachHere();
+    }
+
+    LIR_Opr result = rlock_result(x);
+    __ call_runtime_leaf(entry, getThreadTemp(), result_reg, cc->args());
+    __ move(result_reg, result);
+  } else {
+    arithmetic_op_fpu(x->op(), reg, left.result(), right.result(), x->is_strictfp(), tmp);
+    set_result(x, round_item(reg));
+  }
+#else
   if ((UseSSE >= 1 && x->op() == Bytecodes::_frem) || (UseSSE >= 2 && x->op() == Bytecodes::_drem)) {
     // special handling for frem and drem: no SSE instruction, so must use FPU with temporary fpu stack slots
     LIR_Opr fpu0, fpu1;
@@ -404,8 +440,8 @@
   } else {
     arithmetic_op_fpu(x->op(), reg, left.result(), right.result(), x->is_strictfp(), tmp);
   }
-
   set_result(x, round_item(reg));
+#endif // _LP64
 }
 
 
@@ -444,9 +480,6 @@
     case Bytecodes::_ldiv:
       entry = CAST_FROM_FN_PTR(address, SharedRuntime::ldiv);
       break; // check if dividend is 0 is done elsewhere
-    case Bytecodes::_lmul:
-      entry = CAST_FROM_FN_PTR(address, SharedRuntime::lmul);
-      break;
     default:
       ShouldNotReachHere();
     }
@@ -1145,6 +1178,15 @@
 }
 
 void LIRGenerator::do_Convert(Convert* x) {
+#ifdef _LP64
+  LIRItem value(x->value(), this);
+  value.load_item();
+  LIR_Opr input = value.result();
+  LIR_Opr result = rlock(x);
+  __ convert(x->op(), input, result);
+  assert(result->is_virtual(), "result must be virtual register");
+  set_result(x, result);
+#else
   // flags that vary for the different operations and different SSE-settings
   bool fixed_input = false, fixed_result = false, round_result = false, needs_stub = false;
 
@@ -1203,6 +1245,7 @@
 
   assert(result->is_virtual(), "result must be virtual register");
   set_result(x, result);
+#endif // _LP64
 }
 
 
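Editor's note: on 64-bit, do_ArithmeticOp_FPU no longer routes frem/drem through x87 temporaries; both operands are forced into a C calling convention and the runtime is called directly. SharedRuntime::frem and SharedRuntime::drem have fmod semantics, which match the JLS definition of the floating-point % operator (result carries the dividend's sign, NaN propagates), so a behavioral stand-in is simply:

    #include <cmath>

    inline float  frem_rt(float  x, float  y) { return std::fmod(x, y); }   // SharedRuntime::frem stand-in
    inline double drem_rt(double x, double y) { return std::fmod(x, y); }   // SharedRuntime::drem stand-in
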
--- a/src/hotspot/cpu/x86/c1_LinearScan_x86.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/c1_LinearScan_x86.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -28,6 +28,11 @@
 #include "utilities/bitMap.inline.hpp"
 
 
+#ifdef _LP64
+void LinearScan::allocate_fpu_stack() {
+  // No FPU stack used on x86-64
+}
+#else
 //----------------------------------------------------------------------
 // Allocation of FPU stack slots (Intel x86 only)
 //----------------------------------------------------------------------
@@ -815,12 +820,6 @@
 #ifndef PRODUCT
 void FpuStackAllocator::check_invalid_lir_op(LIR_Op* op) {
   switch (op->code()) {
-    case lir_24bit_FPU:
-    case lir_reset_FPU:
-    case lir_ffree:
-      assert(false, "operations not allowed in lir. If one of these operations is needed, check if they have fpu operands");
-      break;
-
     case lir_fpop_raw:
     case lir_fxch:
     case lir_fld:
@@ -1139,3 +1138,4 @@
 
   return changed;
 }
+#endif // _LP64
--- a/src/hotspot/cpu/x86/c1_MacroAssembler_x86.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/c1_MacroAssembler_x86.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -325,12 +325,12 @@
   if (PreserveFramePointer) {
     mov(rbp, rsp);
   }
-#ifdef TIERED
-  // c2 leaves fpu stack dirty. Clean it on entry
+#if !defined(_LP64) && defined(TIERED)
   if (UseSSE < 2 ) {
+    // c2 leaves fpu stack dirty. Clean it on entry
     empty_FPU_stack();
   }
-#endif // TIERED
+#endif // !_LP64 && TIERED
   decrement(rsp, frame_size_in_bytes); // does not emit code for frame_size == 0
 
   BarrierSetAssembler* bs = BarrierSet::barrier_set()->barrier_set_assembler();
@@ -357,7 +357,7 @@
   }
   if (C1Breakpoint)int3();
   // build frame
-  verify_FPU(0, "method_entry");
+  IA32_ONLY( verify_FPU(0, "method_entry"); )
 }
 
 void C1_MacroAssembler::load_parameter(int offset_in_words, Register reg) {
--- a/src/hotspot/cpu/x86/c1_Runtime1_x86.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/c1_Runtime1_x86.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -427,6 +427,7 @@
 #endif
 
   if (save_fpu_registers) {
+#ifndef _LP64
     if (UseSSE < 2) {
       // save FPU stack
       __ fnsave(Address(rsp, fpu_state_off * VMRegImpl::stack_slot_size));
@@ -454,6 +455,7 @@
         offset += 8;
       }
     }
+#endif // !_LP64
 
     if (UseSSE >= 2) {
       // save XMM registers
@@ -473,6 +475,7 @@
         __ movdbl(Address(rsp, xmm_regs_as_doubles_off * VMRegImpl::stack_slot_size + offset), xmm_name);
         offset += 8;
       }
+#ifndef _LP64
     } else if (UseSSE == 1) {
       // save XMM registers as float because double not supported without SSE2(num MMX == num fpu)
       int offset = 0;
@@ -481,26 +484,37 @@
         __ movflt(Address(rsp, xmm_regs_as_doubles_off * VMRegImpl::stack_slot_size + offset), xmm_name);
         offset += 8;
       }
+#endif // !_LP64
     }
   }
 
   // FPU stack must be empty now
-  __ verify_FPU(0, "save_live_registers");
+  NOT_LP64( __ verify_FPU(0, "save_live_registers"); )
 }
 
 #undef __
 #define __ sasm->
 
 static void restore_fpu(C1_MacroAssembler* sasm, bool restore_fpu_registers) {
+#ifdef _LP64
+  if (restore_fpu_registers) {
+    // restore XMM registers
+    int xmm_bypass_limit = FrameMap::nof_xmm_regs;
+    if (UseAVX < 3) {
+      xmm_bypass_limit = xmm_bypass_limit / 2;
+    }
+    int offset = 0;
+    for (int n = 0; n < xmm_bypass_limit; n++) {
+      XMMRegister xmm_name = as_XMMRegister(n);
+      __ movdbl(xmm_name, Address(rsp, xmm_regs_as_doubles_off * VMRegImpl::stack_slot_size + offset));
+      offset += 8;
+    }
+  }
+#else
   if (restore_fpu_registers) {
     if (UseSSE >= 2) {
       // restore XMM registers
       int xmm_bypass_limit = FrameMap::nof_xmm_regs;
-#ifdef _LP64
-      if (UseAVX < 3) {
-        xmm_bypass_limit = xmm_bypass_limit / 2;
-      }
-#endif
       int offset = 0;
       for (int n = 0; n < xmm_bypass_limit; n++) {
         XMMRegister xmm_name = as_XMMRegister(n);
@@ -523,11 +537,11 @@
       // check that FPU stack is really empty
       __ verify_FPU(0, "restore_live_registers");
     }
-
   } else {
     // check that FPU stack is really empty
     __ verify_FPU(0, "restore_live_registers");
   }
+#endif // _LP64
 
 #ifdef ASSERT
   {
@@ -699,12 +713,12 @@
   default:  ShouldNotReachHere();
   }
 
-#ifdef TIERED
-  // C2 can leave the fpu stack dirty
+#if !defined(_LP64) && defined(TIERED)
   if (UseSSE < 2) {
+    // C2 can leave the fpu stack dirty
     __ empty_FPU_stack();
   }
-#endif // TIERED
+#endif // !_LP64 && TIERED
 
   // verify that only rax, and rdx is valid at this time
   __ invalidate_registers(false, true, true, false, true, true);
@@ -806,7 +820,7 @@
 #endif
 
   // clear the FPU stack in case any FPU results are left behind
-  __ empty_FPU_stack();
+  NOT_LP64( __ empty_FPU_stack(); )
 
   // save exception_oop in callee-saved register to preserve it during runtime calls
   __ verify_not_null_oop(exception_oop);
@@ -1477,11 +1491,23 @@
 
     case fpu2long_stub_id:
       {
+#ifdef _LP64
+        Label done;
+        __ cvttsd2siq(rax, Address(rsp, wordSize));
+        __ cmp64(rax, ExternalAddress((address) StubRoutines::x86::double_sign_flip()));
+        __ jccb(Assembler::notEqual, done);
+        __ movq(rax, Address(rsp, wordSize));
+        __ subptr(rsp, 8);
+        __ movq(Address(rsp, 0), rax);
+        __ call(RuntimeAddress(CAST_FROM_FN_PTR(address, StubRoutines::x86::d2l_fixup())));
+        __ pop(rax);
+        __ bind(done);
+        __ ret(0);
+#else
         // rax, and rdx are destroyed, but should be free since the result is returned there
         // preserve rsi,ecx
         __ push(rsi);
         __ push(rcx);
-        LP64_ONLY(__ push(rdx);)
 
         // check for NaN
         Label return0, do_return, return_min_jlong, do_convert;
@@ -1526,46 +1552,29 @@
         __ fldz();
         __ fcomp_d(value_low_word);
         __ fnstsw_ax();
-#ifdef _LP64
-        __ testl(rax, 0x4100);  // ZF & CF == 0
-        __ jcc(Assembler::equal, return_min_jlong);
-#else
         __ sahf();
         __ jcc(Assembler::above, return_min_jlong);
-#endif // _LP64
         // return max_jlong
-#ifndef _LP64
         __ movl(rdx, 0x7fffffff);
         __ movl(rax, 0xffffffff);
-#else
-        __ mov64(rax, CONST64(0x7fffffffffffffff));
-#endif // _LP64
         __ jmp(do_return);
 
         __ bind(return_min_jlong);
-#ifndef _LP64
         __ movl(rdx, 0x80000000);
         __ xorl(rax, rax);
-#else
-        __ mov64(rax, UCONST64(0x8000000000000000));
-#endif // _LP64
         __ jmp(do_return);
 
         __ bind(return0);
         __ fpop();
-#ifndef _LP64
         __ xorptr(rdx,rdx);
         __ xorptr(rax,rax);
-#else
-        __ xorptr(rax, rax);
-#endif // _LP64
 
         __ bind(do_return);
         __ addptr(rsp, 32);
-        LP64_ONLY(__ pop(rdx);)
         __ pop(rcx);
         __ pop(rsi);
         __ ret(0);
+#endif // _LP64
       }
       break;
 
--- a/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -513,6 +513,19 @@
   // 3: apply keep-alive barrier if needed
   if (ShenandoahBarrierSet::need_keep_alive_barrier(decorators, type)) {
     __ push_IU_state();
+    // That path can be reached from the c2i adapter with live fp
+    // arguments in registers.
+    LP64_ONLY(assert(Argument::n_float_register_parameters_j == 8, "8 fp registers to save at java call"));
+    __ subptr(rsp, 64);
+    __ movdbl(Address(rsp, 0), xmm0);
+    __ movdbl(Address(rsp, 8), xmm1);
+    __ movdbl(Address(rsp, 16), xmm2);
+    __ movdbl(Address(rsp, 24), xmm3);
+    __ movdbl(Address(rsp, 32), xmm4);
+    __ movdbl(Address(rsp, 40), xmm5);
+    __ movdbl(Address(rsp, 48), xmm6);
+    __ movdbl(Address(rsp, 56), xmm7);
+
     Register thread = NOT_LP64(tmp_thread) LP64_ONLY(r15_thread);
     assert_different_registers(dst, tmp1, tmp_thread);
     if (!thread->is_valid()) {
@@ -528,6 +541,15 @@
                                  tmp1 /* tmp */,
                                  true /* tosca_live */,
                                  true /* expand_call */);
+    __ movdbl(xmm0, Address(rsp, 0));
+    __ movdbl(xmm1, Address(rsp, 8));
+    __ movdbl(xmm2, Address(rsp, 16));
+    __ movdbl(xmm3, Address(rsp, 24));
+    __ movdbl(xmm4, Address(rsp, 32));
+    __ movdbl(xmm5, Address(rsp, 40));
+    __ movdbl(xmm6, Address(rsp, 48));
+    __ movdbl(xmm7, Address(rsp, 56));
+    __ addptr(rsp, 64);
     __ pop_IU_state();
   }
 }
--- a/src/hotspot/cpu/x86/globalDefinitions_x86.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/globalDefinitions_x86.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 1999, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1999, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -40,6 +40,7 @@
   #ifdef _LP64
     // tiered, 64-bit, large machine
     #define DEFAULT_CACHE_LINE_SIZE 128
+    #define OM_CACHE_LINE_SIZE 64
   #else
     // tiered, 32-bit, medium machine
     #define DEFAULT_CACHE_LINE_SIZE 64
@@ -52,6 +53,7 @@
   #ifdef _LP64
     // pure C2, 64-bit, large machine
     #define DEFAULT_CACHE_LINE_SIZE 128
+    #define OM_CACHE_LINE_SIZE 64
   #else
     // pure C2, 32-bit, medium machine
     #define DEFAULT_CACHE_LINE_SIZE 64
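Editor's note: the new OM_CACHE_LINE_SIZE (also defined for sparc above) pins ObjectMonitor alignment at 64 bytes even where DEFAULT_CACHE_LINE_SIZE is a conservative 128. The usual way such a constant is consumed is to align a contended structure and pad its hot fields onto separate lines; a generic sketch, not the real ObjectMonitor layout:

    #define OM_CACHE_LINE_SIZE 64

    struct alignas(OM_CACHE_LINE_SIZE) PaddedMonitor {
      void* volatile owner;                           // hot: CAS'ed on every contended lock
      char pad[OM_CACHE_LINE_SIZE - sizeof(void*)];   // push the next field to its own line
      void* volatile entry_list;
    };
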
--- a/src/hotspot/cpu/x86/macroAssembler_x86.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/macroAssembler_x86.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -349,11 +349,6 @@
   pop(rsi);
 }
 
-void MacroAssembler::pop_fTOS() {
-  fld_d(Address(rsp, 0));
-  addl(rsp, 2 * wordSize);
-}
-
 void MacroAssembler::push_callee_saved_registers() {
   push(rsi);
   push(rdi);
@@ -361,12 +356,6 @@
   push(rcx);
 }
 
-void MacroAssembler::push_fTOS() {
-  subl(rsp, 2 * wordSize);
-  fstp_d(Address(rsp, 0));
-}
-
-
 void MacroAssembler::pushoop(jobject obj) {
   push_literal32((int32_t)obj, oop_Relocation::spec_for_immediate());
 }
@@ -2735,8 +2724,7 @@
   }
 }
 
-// !defined(COMPILER2) is because of stupid core builds
-#if !defined(_LP64) || defined(COMPILER1) || !defined(COMPILER2) || INCLUDE_JVMCI
+#ifndef _LP64
 void MacroAssembler::empty_FPU_stack() {
   if (VM_Version::supports_mmx()) {
     emms();
@@ -2744,7 +2732,7 @@
     for (int i = 8; i-- > 0; ) ffree(i);
   }
 }
-#endif // !LP64 || C1 || !C2 || INCLUDE_JVMCI
+#endif // !LP64
 
 
 void MacroAssembler::enter() {
@@ -2765,6 +2753,7 @@
   }
 }
 
+#if !defined(_LP64)
 void MacroAssembler::fcmp(Register tmp) {
   fcmp(tmp, 1, true, true);
 }
@@ -2846,84 +2835,19 @@
   Assembler::fldcw(as_Address(src));
 }
 
-void MacroAssembler::mulpd(XMMRegister dst, AddressLiteral src) {
-  if (reachable(src)) {
-    Assembler::mulpd(dst, as_Address(src));
-  } else {
-    lea(rscratch1, src);
-    Assembler::mulpd(dst, Address(rscratch1, 0));
-  }
-}
-
-void MacroAssembler::increase_precision() {
-  subptr(rsp, BytesPerWord);
-  fnstcw(Address(rsp, 0));
-  movl(rax, Address(rsp, 0));
-  orl(rax, 0x300);
-  push(rax);
-  fldcw(Address(rsp, 0));
-  pop(rax);
-}
-
-void MacroAssembler::restore_precision() {
-  fldcw(Address(rsp, 0));
-  addptr(rsp, BytesPerWord);
-}
-
 void MacroAssembler::fpop() {
   ffree();
   fincstp();
 }
 
-void MacroAssembler::load_float(Address src) {
-  if (UseSSE >= 1) {
-    movflt(xmm0, src);
-  } else {
-    LP64_ONLY(ShouldNotReachHere());
-    NOT_LP64(fld_s(src));
-  }
-}
-
-void MacroAssembler::store_float(Address dst) {
-  if (UseSSE >= 1) {
-    movflt(dst, xmm0);
-  } else {
-    LP64_ONLY(ShouldNotReachHere());
-    NOT_LP64(fstp_s(dst));
-  }
-}
-
-void MacroAssembler::load_double(Address src) {
-  if (UseSSE >= 2) {
-    movdbl(xmm0, src);
-  } else {
-    LP64_ONLY(ShouldNotReachHere());
-    NOT_LP64(fld_d(src));
-  }
-}
-
-void MacroAssembler::store_double(Address dst) {
-  if (UseSSE >= 2) {
-    movdbl(dst, xmm0);
-  } else {
-    LP64_ONLY(ShouldNotReachHere());
-    NOT_LP64(fstp_d(dst));
-  }
-}
-
 void MacroAssembler::fremr(Register tmp) {
   save_rax(tmp);
   { Label L;
     bind(L);
     fprem();
     fwait(); fnstsw_ax();
-#ifdef _LP64
-    testl(rax, 0x400);
-    jcc(Assembler::notEqual, L);
-#else
     sahf();
     jcc(Assembler::parity, L);
-#endif // _LP64
   }
   restore_rax(tmp);
   // Result is in ST0.
@@ -2932,6 +2856,52 @@
   fxch(1);
   fpop();
 }
+#endif // !LP64
+
+void MacroAssembler::mulpd(XMMRegister dst, AddressLiteral src) {
+  if (reachable(src)) {
+    Assembler::mulpd(dst, as_Address(src));
+  } else {
+    lea(rscratch1, src);
+    Assembler::mulpd(dst, Address(rscratch1, 0));
+  }
+}
+
+void MacroAssembler::load_float(Address src) {
+  if (UseSSE >= 1) {
+    movflt(xmm0, src);
+  } else {
+    LP64_ONLY(ShouldNotReachHere());
+    NOT_LP64(fld_s(src));
+  }
+}
+
+void MacroAssembler::store_float(Address dst) {
+  if (UseSSE >= 1) {
+    movflt(dst, xmm0);
+  } else {
+    LP64_ONLY(ShouldNotReachHere());
+    NOT_LP64(fstp_s(dst));
+  }
+}
+
+void MacroAssembler::load_double(Address src) {
+  if (UseSSE >= 2) {
+    movdbl(xmm0, src);
+  } else {
+    LP64_ONLY(ShouldNotReachHere());
+    NOT_LP64(fld_d(src));
+  }
+}
+
+void MacroAssembler::store_double(Address dst) {
+  if (UseSSE >= 2) {
+    movdbl(dst, xmm0);
+  } else {
+    LP64_ONLY(ShouldNotReachHere());
+    NOT_LP64(fstp_d(dst));
+  }
+}
 
 // dst = c = a * b + c
 void MacroAssembler::fmad(XMMRegister dst, XMMRegister a, XMMRegister b, XMMRegister c) {
@@ -5098,6 +5068,7 @@
 }
 
 
+#ifndef _LP64
 static bool _verify_FPU(int stack_depth, char* s, CPU_State* state) {
   static int counter = 0;
   FPU_State* fs = &state->_fpu_state;
@@ -5154,7 +5125,6 @@
   return true;
 }
 
-
 void MacroAssembler::verify_FPU(int stack_depth, const char* s) {
   if (!VerifyFPU) return;
   push_CPU_state();
@@ -5174,6 +5144,7 @@
   }
   pop_CPU_state();
 }
+#endif // !_LP64
 
 void MacroAssembler::restore_cpu_control_state_after_jni() {
   // Either restore the MXCSR register after returning from the JNI Call
@@ -9888,6 +9859,56 @@
 }
 
 #ifdef _LP64
+void MacroAssembler::convert_f2i(Register dst, XMMRegister src) {
+  Label done;
+  cvttss2sil(dst, src);
+  // Conversion instructions do not match JLS for overflow, underflow and NaN -> fixup in stub
+  cmpl(dst, 0x80000000); // float_sign_flip
+  jccb(Assembler::notEqual, done);
+  subptr(rsp, 8);
+  movflt(Address(rsp, 0), src);
+  call(RuntimeAddress(CAST_FROM_FN_PTR(address, StubRoutines::x86::f2i_fixup())));
+  pop(dst);
+  bind(done);
+}
+
+void MacroAssembler::convert_d2i(Register dst, XMMRegister src) {
+  Label done;
+  cvttsd2sil(dst, src);
+  // Conversion instructions do not match JLS for overflow, underflow and NaN -> fixup in stub
+  cmpl(dst, 0x80000000); // float_sign_flip
+  jccb(Assembler::notEqual, done);
+  subptr(rsp, 8);
+  movdbl(Address(rsp, 0), src);
+  call(RuntimeAddress(CAST_FROM_FN_PTR(address, StubRoutines::x86::d2i_fixup())));
+  pop(dst);
+  bind(done);
+}
+
+void MacroAssembler::convert_f2l(Register dst, XMMRegister src) {
+  Label done;
+  cvttss2siq(dst, src);
+  cmp64(dst, ExternalAddress((address) StubRoutines::x86::double_sign_flip()));
+  jccb(Assembler::notEqual, done);
+  subptr(rsp, 8);
+  movflt(Address(rsp, 0), src);
+  call(RuntimeAddress(CAST_FROM_FN_PTR(address, StubRoutines::x86::f2l_fixup())));
+  pop(dst);
+  bind(done);
+}
+
+void MacroAssembler::convert_d2l(Register dst, XMMRegister src) {
+  Label done;
+  cvttsd2siq(dst, src);
+  cmp64(dst, ExternalAddress((address) StubRoutines::x86::double_sign_flip()));
+  jccb(Assembler::notEqual, done);
+  subptr(rsp, 8);
+  movdbl(Address(rsp, 0), src);
+  call(RuntimeAddress(CAST_FROM_FN_PTR(address, StubRoutines::x86::d2l_fixup())));
+  pop(dst);
+  bind(done);
+}
+
 void MacroAssembler::cache_wb(Address line)
 {
   // 64 bit cpus always support clflush
@@ -10000,4 +10021,4 @@
   }
 }
 
-#endif
+#endif // !WIN32 || _LP64
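Editor's note: the four convert_* helpers above share one pattern: the cvtt* truncating conversions write the x86 "integer indefinite" value (0x80000000, or 0x8000000000000000 for 64-bit destinations) when the input is NaN or out of range, so each helper compares the result against that bit pattern and only then calls the matching fixup stub, which applies Java's narrowing rules. What the d2l fixup must compute, per JLS 5.1.3 (a stand-alone model, not the stub itself):

    #include <cmath>
    #include <cstdint>
    #include <limits>

    inline int64_t d2l_jls(double x) {
      if (std::isnan(x)) return 0;                        // NaN narrows to 0
      if (x >= 9223372036854775808.0)                     // >= 2^63: saturate high
        return std::numeric_limits<int64_t>::max();
      if (x < -9223372036854775808.0)                     // < -2^63: saturate low
        return std::numeric_limits<int64_t>::min();
      return static_cast<int64_t>(x);                     // in range: truncate toward zero
    }
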
--- a/src/hotspot/cpu/x86/macroAssembler_x86.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/macroAssembler_x86.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -426,6 +426,7 @@
   // Division by power of 2, rounding towards 0
   void division_with_shift(Register reg, int shift_value);
 
+#ifndef _LP64
   // Compares the top-most stack entries on the FPU stack and sets the eflags as follows:
   //
   // CF (corresponds to C0) if x < y
@@ -454,6 +455,10 @@
   // tmp is a temporary register, if none is available use noreg
   void fremr(Register tmp);
 
+  // only if +VerifyFPU
+  void verify_FPU(int stack_depth, const char* s = "illegal FPU state");
+#endif // !LP64
+
   // dst = c = a * b + c
   void fmad(XMMRegister dst, XMMRegister a, XMMRegister b, XMMRegister c);
   void fmaf(XMMRegister dst, XMMRegister a, XMMRegister b, XMMRegister c);
@@ -473,9 +478,6 @@
   void jC2 (Register tmp, Label& L);
   void jnC2(Register tmp, Label& L);
 
-  // Pop ST (ffree & fincstp combined)
-  void fpop();
-
   // Load float value from 'address'. If UseSSE >= 1, the value is loaded into
   // register xmm0. Otherwise, the value is loaded onto the FPU stack.
   void load_float(Address src);
@@ -492,13 +494,12 @@
   // from register xmm0. Otherwise, the value is stored from the FPU stack.
   void store_double(Address dst);
 
-  // pushes double TOS element of FPU stack on CPU stack; pops from FPU stack
-  void push_fTOS();
-
-  // pops double TOS element from CPU stack and pushes on FPU stack
-  void pop_fTOS();
+#ifndef _LP64
+  // Pop ST (ffree & fincstp combined)
+  void fpop();
 
   void empty_FPU_stack();
+#endif // !_LP64
 
   void push_IU_state();
   void pop_IU_state();
@@ -609,9 +610,6 @@
 #define verify_method_ptr(reg) _verify_method_ptr(reg, "broken method " #reg, __FILE__, __LINE__)
 #define verify_klass_ptr(reg) _verify_klass_ptr(reg, "broken klass " #reg, __FILE__, __LINE__)
 
-  // only if +VerifyFPU
-  void verify_FPU(int stack_depth, const char* s = "illegal FPU state");
-
   // Verify or restore cpu control state after JNI call
   void restore_cpu_control_state_after_jni();
 
@@ -902,6 +900,7 @@
   void comisd(XMMRegister dst, Address src) { Assembler::comisd(dst, src); }
   void comisd(XMMRegister dst, AddressLiteral src);
 
+#ifndef _LP64
   void fadd_s(Address src)        { Assembler::fadd_s(src); }
   void fadd_s(AddressLiteral src) { Assembler::fadd_s(as_Address(src)); }
 
@@ -920,6 +919,7 @@
 
   void fmul_s(Address src)        { Assembler::fmul_s(src); }
   void fmul_s(AddressLiteral src) { Assembler::fmul_s(as_Address(src)); }
+#endif // !_LP64
 
   void ldmxcsr(Address src) { Assembler::ldmxcsr(src); }
   void ldmxcsr(AddressLiteral src);
@@ -1082,9 +1082,6 @@
                 Register rax, Register rcx, Register rdx, Register tmp);
 #endif
 
-  void increase_precision();
-  void restore_precision();
-
 private:
 
   // these are private because users should be doing movflt/movdbl
@@ -1813,6 +1810,11 @@
                           XMMRegister tmp1, Register tmp2);
 
 #ifdef _LP64
+  void convert_f2i(Register dst, XMMRegister src);
+  void convert_d2i(Register dst, XMMRegister src);
+  void convert_f2l(Register dst, XMMRegister src);
+  void convert_d2l(Register dst, XMMRegister src);
+
   void cache_wb(Address line);
   void cache_wbsync(bool is_pre);
 #endif // _LP64
--- a/src/hotspot/cpu/x86/methodHandles_x86.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/methodHandles_x86.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -604,7 +604,10 @@
   // robust stack walking implemented in trace_method_handle_stub.
 
   // save FP result, valid at some call sites (adapter_opt_return_float, ...)
-  __ increment(rsp, -2 * wordSize);
+  __ decrement(rsp, 2 * wordSize);
+#ifdef _LP64
+  __ movdbl(Address(rsp, 0), xmm0);
+#else
   if  (UseSSE >= 2) {
     __ movdbl(Address(rsp, 0), xmm0);
   } else if (UseSSE == 1) {
@@ -612,6 +615,7 @@
   } else {
     __ fst_d(Address(rsp, 0));
   }
+#endif // _LP64
 
   // Incoming state:
   // rcx: method handle
@@ -626,6 +630,9 @@
   __ super_call_VM_leaf(CAST_FROM_FN_PTR(address, trace_method_handle_stub_wrapper), rsp);
   __ increment(rsp, sizeof(MethodHandleStubArguments));
 
+#ifdef _LP64
+  __ movdbl(xmm0, Address(rsp, 0));
+#else
   if  (UseSSE >= 2) {
     __ movdbl(xmm0, Address(rsp, 0));
   } else if (UseSSE == 1) {
@@ -633,6 +640,7 @@
   } else {
     __ fld_d(Address(rsp, 0));
   }
+#endif // _LP64
   __ increment(rsp, 2 * wordSize);
 
   __ popa();
--- a/src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2003, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2003, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -1697,28 +1697,16 @@
         // Arrays are passed as int, elem* pair
         out_sig_bt[argc++] = T_INT;
         out_sig_bt[argc++] = T_ADDRESS;
-        Symbol* atype = ss.as_symbol();
-        const char* at = atype->as_C_string();
-        if (strlen(at) == 2) {
-          assert(at[0] == '[', "must be");
-          switch (at[1]) {
-            case 'B': in_elem_bt[i]  = T_BYTE; break;
-            case 'C': in_elem_bt[i]  = T_CHAR; break;
-            case 'D': in_elem_bt[i]  = T_DOUBLE; break;
-            case 'F': in_elem_bt[i]  = T_FLOAT; break;
-            case 'I': in_elem_bt[i]  = T_INT; break;
-            case 'J': in_elem_bt[i]  = T_LONG; break;
-            case 'S': in_elem_bt[i]  = T_SHORT; break;
-            case 'Z': in_elem_bt[i]  = T_BOOLEAN; break;
-            default: ShouldNotReachHere();
-          }
-        }
+        ss.skip_array_prefix(1);  // skip one '['
+        assert(ss.is_primitive(), "primitive type expected");
+        in_elem_bt[i] = ss.type();
       } else {
         out_sig_bt[argc++] = in_sig_bt[i];
         in_elem_bt[i] = T_VOID;
       }
       if (in_sig_bt[i] != T_VOID) {
-        assert(in_sig_bt[i] == ss.type(), "must match");
+        assert(in_sig_bt[i] == ss.type() ||
+               in_sig_bt[i] == T_ARRAY, "must match");
         ss.next();
       }
     }
--- a/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2003, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2003, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -2002,28 +2002,16 @@
         // Arrays are passed as int, elem* pair
         out_sig_bt[argc++] = T_INT;
         out_sig_bt[argc++] = T_ADDRESS;
-        Symbol* atype = ss.as_symbol();
-        const char* at = atype->as_C_string();
-        if (strlen(at) == 2) {
-          assert(at[0] == '[', "must be");
-          switch (at[1]) {
-            case 'B': in_elem_bt[i]  = T_BYTE; break;
-            case 'C': in_elem_bt[i]  = T_CHAR; break;
-            case 'D': in_elem_bt[i]  = T_DOUBLE; break;
-            case 'F': in_elem_bt[i]  = T_FLOAT; break;
-            case 'I': in_elem_bt[i]  = T_INT; break;
-            case 'J': in_elem_bt[i]  = T_LONG; break;
-            case 'S': in_elem_bt[i]  = T_SHORT; break;
-            case 'Z': in_elem_bt[i]  = T_BOOLEAN; break;
-            default: ShouldNotReachHere();
-          }
-        }
+        ss.skip_array_prefix(1);  // skip one '['
+        assert(ss.is_primitive(), "primitive type expected");
+        in_elem_bt[i] = ss.type();
       } else {
         out_sig_bt[argc++] = in_sig_bt[i];
         in_elem_bt[i] = T_VOID;
       }
       if (in_sig_bt[i] != T_VOID) {
-        assert(in_sig_bt[i] == ss.type(), "must match");
+        assert(in_sig_bt[i] == ss.type() ||
+               in_sig_bt[i] == T_ARRAY, "must match");
         ss.next();
       }
     }
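Both the 32- and 64-bit wrappers now delegate descriptor parsing to
SignatureStream: skip_array_prefix() consumes the leading '[' and type() yields
the element's BasicType. The deleted switch encoded the standard
descriptor-character mapping; a sketch of that mapping using HotSpot's
BasicType names (illustrative only, SignatureStream now does this internally):

    BasicType element_type(const char* desc) {
      assert(desc[0] == '[', "array descriptor expected");
      switch (desc[1]) {
        case 'B': return T_BYTE;    case 'C': return T_CHAR;
        case 'D': return T_DOUBLE;  case 'F': return T_FLOAT;
        case 'I': return T_INT;     case 'J': return T_LONG;
        case 'S': return T_SHORT;   case 'Z': return T_BOOLEAN;
        default:  ShouldNotReachHere(); return T_VOID;
      }
    }
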
--- a/src/hotspot/cpu/x86/stubGenerator_x86_64.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/stubGenerator_x86_64.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -6341,6 +6341,16 @@
 
     StubRoutines::x86::_verify_mxcsr_entry    = generate_verify_mxcsr();
 
+    StubRoutines::x86::_f2i_fixup             = generate_f2i_fixup();
+    StubRoutines::x86::_f2l_fixup             = generate_f2l_fixup();
+    StubRoutines::x86::_d2i_fixup             = generate_d2i_fixup();
+    StubRoutines::x86::_d2l_fixup             = generate_d2l_fixup();
+
+    StubRoutines::x86::_float_sign_mask       = generate_fp_mask("float_sign_mask",  0x7FFFFFFF7FFFFFFF);
+    StubRoutines::x86::_float_sign_flip       = generate_fp_mask("float_sign_flip",  0x8000000080000000);
+    StubRoutines::x86::_double_sign_mask      = generate_fp_mask("double_sign_mask", 0x7FFFFFFFFFFFFFFF);
+    StubRoutines::x86::_double_sign_flip      = generate_fp_mask("double_sign_flip", 0x8000000000000000);
+
     // Build this early so it's available for the interpreter.
     StubRoutines::_throw_StackOverflowError_entry =
       generate_throw_exception("StackOverflowError throw_exception",
@@ -6364,7 +6374,7 @@
       StubRoutines::_crc32c_table_addr = (address)StubRoutines::x86::_crc32c_table;
       StubRoutines::_updateBytesCRC32C = generate_updateBytesCRC32C(supports_clmul);
     }
-    if (VM_Version::supports_sse2() && UseLibmIntrinsic && InlineIntrinsics) {
+    if (UseLibmIntrinsic && InlineIntrinsics) {
       if (vmIntrinsics::is_intrinsic_available(vmIntrinsics::_dsin) ||
           vmIntrinsics::is_intrinsic_available(vmIntrinsics::_dcos) ||
           vmIntrinsics::is_intrinsic_available(vmIntrinsics::_dtan)) {
@@ -6432,15 +6442,6 @@
                                                 throw_NullPointerException_at_call));
 
     // entry points that are platform specific
-    StubRoutines::x86::_f2i_fixup = generate_f2i_fixup();
-    StubRoutines::x86::_f2l_fixup = generate_f2l_fixup();
-    StubRoutines::x86::_d2i_fixup = generate_d2i_fixup();
-    StubRoutines::x86::_d2l_fixup = generate_d2l_fixup();
-
-    StubRoutines::x86::_float_sign_mask  = generate_fp_mask("float_sign_mask",  0x7FFFFFFF7FFFFFFF);
-    StubRoutines::x86::_float_sign_flip  = generate_fp_mask("float_sign_flip",  0x8000000080000000);
-    StubRoutines::x86::_double_sign_mask = generate_fp_mask("double_sign_mask", 0x7FFFFFFFFFFFFFFF);
-    StubRoutines::x86::_double_sign_flip = generate_fp_mask("double_sign_flip", 0x8000000000000000);
     StubRoutines::x86::_vector_float_sign_mask = generate_vector_mask("vector_float_sign_mask", 0x7FFFFFFF7FFFFFFF);
     StubRoutines::x86::_vector_float_sign_flip = generate_vector_mask("vector_float_sign_flip", 0x8000000080000000);
     StubRoutines::x86::_vector_double_sign_mask = generate_vector_mask("vector_double_sign_mask", 0x7FFFFFFFFFFFFFFF);
--- a/src/hotspot/cpu/x86/templateInterpreterGenerator_x86_64.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/templateInterpreterGenerator_x86_64.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -432,25 +432,14 @@
     } else {
       __ call_VM_leaf0(CAST_FROM_FN_PTR(address, SharedRuntime::dtan));
     }
+  } else if (kind == Interpreter::java_lang_math_abs) {
+    assert(StubRoutines::x86::double_sign_mask() != NULL, "not initialized");
+    __ movdbl(xmm0, Address(rsp, wordSize));
+    __ andpd(xmm0, ExternalAddress(StubRoutines::x86::double_sign_mask()));
   } else {
-    __ fld_d(Address(rsp, wordSize));
-    switch (kind) {
-    case Interpreter::java_lang_math_abs:
-      __ fabs();
-      break;
-    default:
-      ShouldNotReachHere();
-    }
-
-    // return double result in xmm0 for interpreter and compilers.
-    __ subptr(rsp, 2*wordSize);
-    // Round to 64bit precision
-    __ fstp_d(Address(rsp, 0));
-    __ movdbl(xmm0, Address(rsp, 0));
-    __ addptr(rsp, 2*wordSize);
+    ShouldNotReachHere();
   }
 
-
   __ pop(rax);
   __ mov(rsp, r13);
   __ jmp(rax);
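The interpreter's Math.abs entry no longer touches x87: andpd against
double_sign_mask clears the IEEE-754 sign bit, which is all abs needs. A
self-contained sketch of the same bit trick:

    #include <cstdint>
    #include <cstring>

    // abs(double) by masking off the sign bit, as
    // andpd xmm0, [double_sign_mask] does above.
    double abs_via_mask(double x) {
      uint64_t bits;
      std::memcpy(&bits, &x, sizeof bits);
      bits &= 0x7FFFFFFFFFFFFFFFULL;  // double_sign_mask
      std::memcpy(&x, &bits, sizeof bits);
      return x;
    }
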
--- a/src/hotspot/cpu/x86/x86_64.ad	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/cpu/x86/x86_64.ad	Fri Feb 07 11:09:59 2020 -0800
@@ -10588,25 +10588,9 @@
 %{
   match(Set dst (ConvF2I src));
   effect(KILL cr);
-
-  format %{ "cvttss2sil $dst, $src\t# f2i\n\t"
-            "cmpl    $dst, #0x80000000\n\t"
-            "jne,s   done\n\t"
-            "subq    rsp, #8\n\t"
-            "movss   [rsp], $src\n\t"
-            "call    f2i_fixup\n\t"
-            "popq    $dst\n"
-    "done:   "%}
-  ins_encode %{
-    Label done;
-    __ cvttss2sil($dst$$Register, $src$$XMMRegister);
-    __ cmpl($dst$$Register, 0x80000000);
-    __ jccb(Assembler::notEqual, done);
-    __ subptr(rsp, 8);
-    __ movflt(Address(rsp, 0), $src$$XMMRegister);
-    __ call(RuntimeAddress(CAST_FROM_FN_PTR(address, StubRoutines::x86::f2i_fixup())));
-    __ pop($dst$$Register);
-    __ bind(done);
+  format %{ "convert_f2i $dst,$src" %}
+  ins_encode %{
+    __ convert_f2i($dst$$Register, $src$$XMMRegister);
   %}
   ins_pipe(pipe_slow);
 %}
@@ -10615,26 +10599,9 @@
 %{
   match(Set dst (ConvF2L src));
   effect(KILL cr);
-
-  format %{ "cvttss2siq $dst, $src\t# f2l\n\t"
-            "cmpq    $dst, [0x8000000000000000]\n\t"
-            "jne,s   done\n\t"
-            "subq    rsp, #8\n\t"
-            "movss   [rsp], $src\n\t"
-            "call    f2l_fixup\n\t"
-            "popq    $dst\n"
-    "done:   "%}
-  ins_encode %{
-    Label done;
-    __ cvttss2siq($dst$$Register, $src$$XMMRegister);
-    __ cmp64($dst$$Register,
-             ExternalAddress((address) StubRoutines::x86::double_sign_flip()));
-    __ jccb(Assembler::notEqual, done);
-    __ subptr(rsp, 8);
-    __ movflt(Address(rsp, 0), $src$$XMMRegister);
-    __ call(RuntimeAddress(CAST_FROM_FN_PTR(address, StubRoutines::x86::f2l_fixup())));
-    __ pop($dst$$Register);
-    __ bind(done);
+  format %{ "convert_f2l $dst,$src"%}
+  ins_encode %{
+    __ convert_f2l($dst$$Register, $src$$XMMRegister);
   %}
   ins_pipe(pipe_slow);
 %}
@@ -10643,25 +10610,9 @@
 %{
   match(Set dst (ConvD2I src));
   effect(KILL cr);
-
-  format %{ "cvttsd2sil $dst, $src\t# d2i\n\t"
-            "cmpl    $dst, #0x80000000\n\t"
-            "jne,s   done\n\t"
-            "subq    rsp, #8\n\t"
-            "movsd   [rsp], $src\n\t"
-            "call    d2i_fixup\n\t"
-            "popq    $dst\n"
-    "done:   "%}
-  ins_encode %{
-    Label done;
-    __ cvttsd2sil($dst$$Register, $src$$XMMRegister);
-    __ cmpl($dst$$Register, 0x80000000);
-    __ jccb(Assembler::notEqual, done);
-    __ subptr(rsp, 8);
-    __ movdbl(Address(rsp, 0), $src$$XMMRegister);
-    __ call(RuntimeAddress(CAST_FROM_FN_PTR(address, StubRoutines::x86::d2i_fixup())));
-    __ pop($dst$$Register);
-    __ bind(done);
+  format %{ "convert_d2i $dst,$src"%}
+  ins_encode %{
+    __ convert_d2i($dst$$Register, $src$$XMMRegister);
   %}
   ins_pipe(pipe_slow);
 %}
@@ -10670,26 +10621,9 @@
 %{
   match(Set dst (ConvD2L src));
   effect(KILL cr);
-
-  format %{ "cvttsd2siq $dst, $src\t# d2l\n\t"
-            "cmpq    $dst, [0x8000000000000000]\n\t"
-            "jne,s   done\n\t"
-            "subq    rsp, #8\n\t"
-            "movsd   [rsp], $src\n\t"
-            "call    d2l_fixup\n\t"
-            "popq    $dst\n"
-    "done:   "%}
-  ins_encode %{
-    Label done;
-    __ cvttsd2siq($dst$$Register, $src$$XMMRegister);
-    __ cmp64($dst$$Register,
-             ExternalAddress((address) StubRoutines::x86::double_sign_flip()));
-    __ jccb(Assembler::notEqual, done);
-    __ subptr(rsp, 8);
-    __ movdbl(Address(rsp, 0), $src$$XMMRegister);
-    __ call(RuntimeAddress(CAST_FROM_FN_PTR(address, StubRoutines::x86::d2l_fixup())));
-    __ pop($dst$$Register);
-    __ bind(done);
+  format %{ "convert_d2l $dst,$src"%}
+  ins_encode %{
+    __ convert_d2l($dst$$Register, $src$$XMMRegister);
   %}
   ins_pipe(pipe_slow);
 %}
--- a/src/hotspot/os/aix/os_perf_aix.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os/aix/os_perf_aix.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2012, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2012, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -371,19 +371,21 @@
 
 static int perf_context_switch_rate(double* rate) {
   static pthread_mutex_t contextSwitchLock = PTHREAD_MUTEX_INITIALIZER;
-  static uint64_t      lastTime;
+  static uint64_t      bootTime;
+  static uint64_t      lastTimeNanos;
   static uint64_t      lastSwitches;
   static double        lastRate;
 
-  uint64_t lt = 0;
+  uint64_t bt = 0;
   int res = 0;
 
-  if (lastTime == 0) {
+  // First time through bootTime will be zero.
+  if (bootTime == 0) {
     uint64_t tmp;
     if (get_boot_time(&tmp) < 0) {
       return OS_ERR;
     }
-    lt = tmp * 1000;
+    bt = tmp * 1000;
   }
 
   res = OS_OK;
@@ -394,20 +396,29 @@
     uint64_t sw;
     s8 t, d;
 
-    if (lastTime == 0) {
-      lastTime = lt;
+    if (bootTime == 0) {
+      // First interval is measured from boot time which is
+      // seconds since the epoch. Thereafter we measure the
+      // elapsed time using javaTimeNanos, as it is monotonically
+      // non-decreasing.
+      lastTimeNanos = os::javaTimeNanos();
+      t = os::javaTimeMillis();
+      d = t - bt;
+      // keep bootTime zero for now to use as a first-time-through flag
+    } else {
+      t = os::javaTimeNanos();
+      d = nanos_to_millis(t - lastTimeNanos);
     }
 
-    t = os::javaTimeMillis();
-    d = t - lastTime;
-
     if (d == 0) {
       *rate = lastRate;
-    } else if (!get_noof_context_switches(&sw)) {
+    } else if (get_noof_context_switches(&sw) == 0) {
       *rate      = ( (double)(sw - lastSwitches) / d ) * 1000;
       lastRate     = *rate;
       lastSwitches = sw;
-      lastTime     = t;
+      if (bootTime != 0) {
+        lastTimeNanos = t;
+      }
     } else {
       *rate = 0;
       res   = OS_ERR;
@@ -416,6 +427,10 @@
       *rate = 0;
       lastRate = 0;
     }
+
+    if (bootTime == 0) {
+      bootTime = bt;
+    }
   }
   pthread_mutex_unlock(&contextSwitchLock);
 
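The reworked sampling anchors only the first interval at boot time (seconds
since the epoch) and measures every later interval on the monotonic
javaTimeNanos clock. The rate itself is simple arithmetic; a sketch with
illustrative names:

    #include <cstdint>

    // Context switches per second over an interval measured in milliseconds.
    double switch_rate(uint64_t switches_now, uint64_t switches_before,
                       int64_t interval_millis) {
      return ((double)(switches_now - switches_before) / interval_millis) * 1000.0;
    }
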
--- a/src/hotspot/os/aix/perfMemory_aix.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os/aix/perfMemory_aix.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,6 +1,6 @@
 /*
- * Copyright (c) 2001, 2019, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2012, 2018 SAP SE. All rights reserved.
+ * Copyright (c) 2001, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2012, 2020 SAP SE. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -98,8 +98,8 @@
 
   int result;
 
-  RESTARTABLE(::open(destfile, O_CREAT|O_WRONLY|O_TRUNC, S_IREAD|S_IWRITE),
-              result);;
+  RESTARTABLE(os::open(destfile, O_CREAT|O_WRONLY|O_TRUNC, S_IREAD|S_IWRITE),
+              result);
   if (result == OS_ERR) {
     if (PrintMiscellaneous && Verbose) {
       warning("Could not create Perfdata save file: %s: %s\n",
@@ -248,7 +248,6 @@
   return is_statbuf_secure(&statbuf);
 }
 
-// (Taken over from Solaris to support the O_NOFOLLOW case on AIX.)
 // Check if the given directory file descriptor is considered a secure
 // directory for the backing store files. Returns true if the directory
 // exists and is considered a secure location. Returns false if the path
@@ -290,89 +289,6 @@
   }
 }
 
-// Helper functions for open without O_NOFOLLOW which is not present on AIX 5.3/6.1.
-// We use the jdk6 implementation here.
-#ifndef O_NOFOLLOW
-// The O_NOFOLLOW oflag doesn't exist before solaris 5.10, this is to simulate that behaviour
-// was done in jdk 5/6 hotspot by Oracle this way
-static int open_o_nofollow_impl(const char* path, int oflag, mode_t mode, bool use_mode) {
-  struct stat orig_st;
-  struct stat new_st;
-  bool create;
-  int error;
-  int fd;
-  int result;
-
-  create = false;
-
-  RESTARTABLE(::lstat(path, &orig_st), result);
-
-  if (result == OS_ERR) {
-    if (errno == ENOENT && (oflag & O_CREAT) != 0) {
-      // File doesn't exist, but_we want to create it, add O_EXCL flag
-      // to make sure no-one creates it (or a symlink) before us
-      // This works as we expect with symlinks, from posix man page:
-      // 'If O_EXCL  and  O_CREAT  are set, and path names a symbolic
-      // link, open() shall fail and set errno to [EEXIST]'.
-      oflag |= O_EXCL;
-      create = true;
-    } else {
-      // File doesn't exist, and we are not creating it.
-      return OS_ERR;
-    }
-  } else {
-    // lstat success, check if existing file is a link.
-    if ((orig_st.st_mode & S_IFMT) == S_IFLNK)  {
-      // File is a symlink.
-      errno = ELOOP;
-      return OS_ERR;
-    }
-  }
-
-  if (use_mode == true) {
-    RESTARTABLE(::open(path, oflag, mode), fd);
-  } else {
-    RESTARTABLE(::open(path, oflag), fd);
-  }
-
-  if (fd == OS_ERR) {
-    return fd;
-  }
-
-  // Can't do inode checks on before/after if we created the file.
-  if (create == false) {
-    RESTARTABLE(::fstat(fd, &new_st), result);
-    if (result == OS_ERR) {
-      // Keep errno from fstat, in case close also fails.
-      error = errno;
-      ::close(fd);
-      errno = error;
-      return OS_ERR;
-    }
-
-    if (orig_st.st_dev != new_st.st_dev || orig_st.st_ino != new_st.st_ino) {
-      // File was tampered with during race window.
-      ::close(fd);
-      errno = EEXIST;
-      if (PrintMiscellaneous && Verbose) {
-        warning("possible file tampering attempt detected when opening %s", path);
-      }
-      return OS_ERR;
-    }
-  }
-
-  return fd;
-}
-
-static int open_o_nofollow(const char* path, int oflag, mode_t mode) {
-  return open_o_nofollow_impl(path, oflag, mode, true);
-}
-
-static int open_o_nofollow(const char* path, int oflag) {
-  return open_o_nofollow_impl(path, oflag, 0, false);
-}
-#endif
-
 // Open the directory of the given path and validate it.
 // Return a DIR * of the open directory.
 static DIR *open_directory_secure(const char* dirname) {
@@ -383,15 +299,7 @@
   // calling opendir() and is_directory_secure() does.
   int result;
   DIR *dirp = NULL;
-
-  // No O_NOFOLLOW defined at buildtime, and it is not documented for open;
-  // so provide a workaround in this case.
-#ifdef O_NOFOLLOW
   RESTARTABLE(::open(dirname, O_RDONLY|O_NOFOLLOW), result);
-#else
-  // workaround (jdk6 coding)
-  result = open_o_nofollow(dirname, O_RDONLY);
-#endif
 
   if (result == OS_ERR) {
     // Directory doesn't exist or is a symlink, so there is nothing to cleanup.
@@ -879,15 +787,7 @@
   // Cannot use O_TRUNC here; truncation of an existing file has to happen
   // after the is_file_secure() check below.
   int result;
-
-  // No O_NOFOLLOW defined at buildtime, and it is not documented for open;
-  // so provide a workaround in this case.
-#ifdef O_NOFOLLOW
-  RESTARTABLE(::open(filename, O_RDWR|O_CREAT|O_NOFOLLOW, S_IREAD|S_IWRITE), result);
-#else
-  // workaround function (jdk6 code)
-  result = open_o_nofollow(filename, O_RDWR|O_CREAT, S_IREAD|S_IWRITE);
-#endif
+  RESTARTABLE(os::open(filename, O_RDWR|O_CREAT|O_NOFOLLOW, S_IREAD|S_IWRITE), result);
 
   if (result == OS_ERR) {
     if (PrintMiscellaneous && Verbose) {
@@ -944,12 +844,8 @@
 
   // open the file
   int result;
-  // provide a workaround in case no O_NOFOLLOW is defined at buildtime
-#ifdef O_NOFOLLOW
-  RESTARTABLE(::open(filename, oflags), result);
-#else
-  result = open_o_nofollow(filename, oflags);
-#endif
+  RESTARTABLE(os::open(filename, oflags, 0), result);
+
   if (result == OS_ERR) {
     if (errno == ENOENT) {
       THROW_MSG_(vmSymbols::java_lang_IllegalArgumentException(),
@@ -1137,12 +1033,7 @@
   // constructs for the file and the shared memory mapping.
   if (mode == PerfMemory::PERF_MODE_RO) {
     mmap_prot = PROT_READ;
-  // No O_NOFOLLOW defined at buildtime, and it is not documented for open.
-#ifdef O_NOFOLLOW
     file_flags = O_RDONLY | O_NOFOLLOW;
-#else
-    file_flags = O_RDONLY;
-#endif
   }
   else if (mode == PerfMemory::PERF_MODE_RW) {
 #ifdef LATER
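With the jdk6-era open_o_nofollow() emulation removed, every perfdata file open
follows one pattern: pass O_NOFOLLOW (now assumed available on all supported
AIX levels) and let the kernel refuse symlinks. A hedged sketch of the
resulting idiom:

    // Symlinks now fail the open with ELOOP instead of being caught by the
    // old lstat/open/fstat inode comparison.
    int fd;
    RESTARTABLE(os::open(filename, O_RDWR|O_CREAT|O_NOFOLLOW, S_IREAD|S_IWRITE), fd);
    if (fd == OS_ERR && errno == ELOOP) {
      // filename was a symbolic link
    }
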
--- a/src/hotspot/os/bsd/semaphore_bsd.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os/bsd/semaphore_bsd.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2018, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2018, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -76,17 +76,17 @@
   // kernel semaphores take a relative timeout
   mach_timespec_t waitspec;
   int secs = millis / MILLIUNITS;
-  int nsecs = (millis % MILLIUNITS) * NANOSECS_PER_MILLISEC;
+  int nsecs = millis_to_nanos(millis % MILLIUNITS);
   waitspec.tv_sec = secs;
   waitspec.tv_nsec = nsecs;
 
-  int64_t starttime = os::javaTimeMillis() * NANOSECS_PER_MILLISEC;
+  int64_t starttime = os::javaTimeNanos();
 
   kr = semaphore_timedwait(_semaphore, waitspec);
   while (kr == KERN_ABORTED) {
     // reduce the timeout and try again
-    int64_t totalwait = millis * NANOSECS_PER_MILLISEC;
-    int64_t current = os::javaTimeMillis() * NANOSECS_PER_MILLISEC;
+    int64_t totalwait = millis_to_nanos(millis);
+    int64_t current = os::javaTimeNanos();
     int64_t passedtime = current - starttime;
 
     if (passedtime >= totalwait) {
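The retry loop above now measures elapsed time with the monotonic javaTimeNanos
instead of scaling javaTimeMillis. The bookkeeping after a KERN_ABORTED wakeup
reduces to subtracting elapsed from total; a sketch with illustrative names:

    #include <cstdint>

    // Remaining wait after an interrupted kernel wait, all in nanoseconds.
    int64_t remaining_nanos(int64_t total_wait, int64_t start, int64_t now) {
      const int64_t passed = now - start;
      return (passed >= total_wait) ? 0 : (total_wait - passed);
    }
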
--- a/src/hotspot/os/linux/gc/z/zMountPoint_linux.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os/linux/gc/z/zMountPoint_linux.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -25,6 +25,7 @@
 #include "gc/z/zArray.inline.hpp"
 #include "gc/z/zErrno.hpp"
 #include "gc/z/zMountPoint_linux.hpp"
+#include "runtime/globals.hpp"
 #include "logging/log.hpp"
 
 #include <stdio.h>
@@ -34,9 +35,9 @@
 #define PROC_SELF_MOUNTINFO        "/proc/self/mountinfo"
 
 ZMountPoint::ZMountPoint(const char* filesystem, const char** preferred_mountpoints) {
-  if (ZPath != NULL) {
+  if (AllocateHeapAt != NULL) {
     // Use specified path
-    _path = strdup(ZPath);
+    _path = strdup(AllocateHeapAt);
   } else {
     // Find suitable path
     _path = find_mountpoint(filesystem, preferred_mountpoints);
--- a/src/hotspot/os/linux/gc/z/zNUMA_linux.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os/linux/gc/z/zNUMA_linux.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -21,28 +21,14 @@
  * questions.
  */
 
+#include "gc/z/zCPU.inline.hpp"
 #include "gc/z/zErrno.hpp"
-#include "gc/z/zCPU.inline.hpp"
 #include "gc/z/zNUMA.hpp"
+#include "gc/z/zSyscall_linux.hpp"
 #include "runtime/globals.hpp"
 #include "runtime/os.hpp"
 #include "utilities/debug.hpp"
 
-#include <unistd.h>
-#include <sys/syscall.h>
-
-#ifndef MPOL_F_NODE
-#define MPOL_F_NODE     (1<<0)  // Return next IL mode instead of node mask
-#endif
-
-#ifndef MPOL_F_ADDR
-#define MPOL_F_ADDR     (1<<1)  // Look up VMA using address
-#endif
-
-static int z_get_mempolicy(uint32_t* mode, const unsigned long *nmask, unsigned long maxnode, uintptr_t addr, int flags) {
-  return syscall(SYS_get_mempolicy, mode, nmask, maxnode, addr, flags);
-}
-
 void ZNUMA::initialize_platform() {
   _enabled = UseNUMA;
 }
@@ -73,7 +59,7 @@
 
   uint32_t id = (uint32_t)-1;
 
-  if (z_get_mempolicy(&id, NULL, 0, addr, MPOL_F_NODE | MPOL_F_ADDR) == -1) {
+  if (ZSyscall::get_mempolicy((int*)&id, NULL, 0, (void*)addr, MPOL_F_NODE | MPOL_F_ADDR) == -1) {
     ZErrno err;
     fatal("Failed to get NUMA id for memory at " PTR_FORMAT " (%s)", addr, err.to_string());
   }
--- a/src/hotspot/os/linux/gc/z/zPhysicalMemoryBacking_linux.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os/linux/gc/z/zPhysicalMemoryBacking_linux.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -27,6 +27,7 @@
 #include "gc/z/zGlobals.hpp"
 #include "gc/z/zLargePages.inline.hpp"
 #include "gc/z/zMountPoint_linux.hpp"
+#include "gc/z/zNUMA.inline.hpp"
 #include "gc/z/zPhysicalMemoryBacking_linux.hpp"
 #include "gc/z/zSyscall_linux.hpp"
 #include "logging/log.hpp"
@@ -34,6 +35,7 @@
 #include "runtime/os.hpp"
 #include "utilities/align.hpp"
 #include "utilities/debug.hpp"
+#include "utilities/growableArray.hpp"
 
 #include <fcntl.h>
 #include <stdio.h>
@@ -209,7 +211,7 @@
   // Find mountpoint
   ZMountPoint mountpoint(filesystem, preferred_mountpoints);
   if (mountpoint.get() == NULL) {
-    log_error(gc)("Use -XX:ZPath to specify the path to a %s filesystem", filesystem);
+    log_error(gc)("Use -XX:AllocateHeapAt to specify the path to a %s filesystem", filesystem);
     return -1;
   }
 
@@ -261,7 +263,7 @@
 }
 
 int ZPhysicalMemoryBacking::create_fd(const char* name) const {
-  if (ZPath == NULL) {
+  if (AllocateHeapAt == NULL) {
     // If the path is not explicitly specified, then we first try to create a memfd file
     // instead of looking for a tmpfs/hugetlbfs mount point. Note that memfd_create() might
     // not be supported at all (requires kernel >= 3.17), or it might not support large
@@ -596,7 +598,38 @@
   return true;
 }
 
-size_t ZPhysicalMemoryBacking::commit(size_t offset, size_t length) {
+static int offset_to_node(size_t offset) {
+  const GrowableArray<int>* mapping = os::Linux::numa_nindex_to_node();
+  const size_t nindex = (offset >> ZGranuleSizeShift) % mapping->length();
+  return mapping->at((int)nindex);
+}
+
+size_t ZPhysicalMemoryBacking::commit_numa_interleaved(size_t offset, size_t length) {
+  size_t committed = 0;
+
+  // Commit one granule at a time, so that each granule
+  // can be allocated from a different preferred node.
+  while (committed < length) {
+    const size_t granule_offset = offset + committed;
+
+    // Setup NUMA policy to allocate memory from a preferred node
+    os::Linux::numa_set_preferred(offset_to_node(granule_offset));
+
+    if (!commit_inner(granule_offset, ZGranuleSize)) {
+      // Failed
+      break;
+    }
+
+    committed += ZGranuleSize;
+  }
+
+  // Restore NUMA policy
+  os::Linux::numa_set_preferred(-1);
+
+  return committed;
+}
+
+size_t ZPhysicalMemoryBacking::commit_default(size_t offset, size_t length) {
   // Try to commit the whole region
   if (commit_inner(offset, length)) {
     // Success
@@ -624,6 +657,16 @@
   }
 }
 
+size_t ZPhysicalMemoryBacking::commit(size_t offset, size_t length) {
+  if (ZNUMA::is_enabled() && !ZLargePages::is_explicit()) {
+    // To get granule-level NUMA interleaving when using non-large pages,
+    // we must explicitly interleave the memory at commit/fallocate time.
+    return commit_numa_interleaved(offset, length);
+  }
+
+  return commit_default(offset, length);
+}
+
 size_t ZPhysicalMemoryBacking::uncommit(size_t offset, size_t length) {
   log_trace(gc, heap)("Uncommitting memory: " SIZE_FORMAT "M-" SIZE_FORMAT "M (" SIZE_FORMAT "M)",
                       offset / M, (offset + length) / M, length / M);
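Granule-level interleaving derives a preferred node from the granule index:
shift the offset down by ZGranuleSizeShift and take it modulo the number of
configured nodes, exactly as offset_to_node() does. A standalone sketch of the
mapping (parameter names are illustrative):

    #include <cstddef>

    // Round-robin assignment of heap granules to NUMA nodes.
    int node_for(size_t offset, const int* nindex_to_node, size_t num_nodes,
                 int granule_shift) {
      return nindex_to_node[(offset >> granule_shift) % num_nodes];
    }
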
--- a/src/hotspot/os/linux/gc/z/zPhysicalMemoryBacking_linux.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os/linux/gc/z/zPhysicalMemoryBacking_linux.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -57,6 +57,8 @@
   ZErrno fallocate(bool punch_hole, size_t offset, size_t length);
 
   bool commit_inner(size_t offset, size_t length);
+  size_t commit_numa_interleaved(size_t offset, size_t length);
+  size_t commit_default(size_t offset, size_t length);
 
 public:
   ZPhysicalMemoryBacking();
--- a/src/hotspot/os/linux/gc/z/zSyscall_linux.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os/linux/gc/z/zSyscall_linux.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2019, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -34,3 +34,7 @@
 int ZSyscall::fallocate(int fd, int mode, size_t offset, size_t length) {
   return syscall(SYS_fallocate, fd, mode, offset, length);
 }
+
+long ZSyscall::get_mempolicy(int* mode, unsigned long* nodemask, unsigned long maxnode, void* addr, unsigned long flags) {
+  return syscall(SYS_get_mempolicy, mode, nodemask, maxnode, addr, flags);
+}
--- a/src/hotspot/os/linux/gc/z/zSyscall_linux.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os/linux/gc/z/zSyscall_linux.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2019, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -26,10 +26,19 @@
 
 #include "memory/allocation.hpp"
 
+// Flags for get_mempolicy()
+#ifndef MPOL_F_NODE
+#define MPOL_F_NODE        (1<<0)
+#endif
+#ifndef MPOL_F_ADDR
+#define MPOL_F_ADDR        (1<<1)
+#endif
+
 class ZSyscall : public AllStatic {
 public:
-  static int memfd_create(const char *name, unsigned int flags);
+  static int memfd_create(const char* name, unsigned int flags);
   static int fallocate(int fd, int mode, size_t offset, size_t length);
+  static long get_mempolicy(int* mode, unsigned long* nodemask, unsigned long maxnode, void* addr, unsigned long flags);
 };
 
 #endif // OS_LINUX_GC_Z_ZSYSCALL_LINUX_HPP
--- a/src/hotspot/os/linux/os_linux.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os/linux/os_linux.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -3163,6 +3163,8 @@
                                                   libnuma_v2_dlsym(handle, "numa_get_interleave_mask")));
       set_numa_move_pages(CAST_TO_FN_PTR(numa_move_pages_func_t,
                                          libnuma_dlsym(handle, "numa_move_pages")));
+      set_numa_set_preferred(CAST_TO_FN_PTR(numa_set_preferred_func_t,
+                                            libnuma_dlsym(handle, "numa_set_preferred")));
 
       if (numa_available() != -1) {
         set_numa_all_nodes((unsigned long*)libnuma_dlsym(handle, "numa_all_nodes"));
@@ -3298,6 +3300,7 @@
 os::Linux::numa_get_membind_func_t os::Linux::_numa_get_membind;
 os::Linux::numa_get_interleave_mask_func_t os::Linux::_numa_get_interleave_mask;
 os::Linux::numa_move_pages_func_t os::Linux::_numa_move_pages;
+os::Linux::numa_set_preferred_func_t os::Linux::_numa_set_preferred;
 os::Linux::NumaAllocationPolicy os::Linux::_current_numa_policy;
 unsigned long* os::Linux::_numa_all_nodes;
 struct bitmask* os::Linux::_numa_all_nodes_ptr;
--- a/src/hotspot/os/linux/os_linux.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os/linux/os_linux.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -219,7 +219,7 @@
   typedef struct bitmask* (*numa_get_membind_func_t)(void);
   typedef struct bitmask* (*numa_get_interleave_mask_func_t)(void);
   typedef long (*numa_move_pages_func_t)(int pid, unsigned long count, void **pages, const int *nodes, int *status, int flags);
-
+  typedef void (*numa_set_preferred_func_t)(int node);
   typedef void (*numa_set_bind_policy_func_t)(int policy);
   typedef int (*numa_bitmask_isbitset_func_t)(struct bitmask *bmp, unsigned int n);
   typedef int (*numa_distance_func_t)(int node1, int node2);
@@ -238,6 +238,7 @@
   static numa_get_membind_func_t _numa_get_membind;
   static numa_get_interleave_mask_func_t _numa_get_interleave_mask;
   static numa_move_pages_func_t _numa_move_pages;
+  static numa_set_preferred_func_t _numa_set_preferred;
   static unsigned long* _numa_all_nodes;
   static struct bitmask* _numa_all_nodes_ptr;
   static struct bitmask* _numa_nodes_ptr;
@@ -258,6 +259,7 @@
   static void set_numa_get_membind(numa_get_membind_func_t func) { _numa_get_membind = func; }
   static void set_numa_get_interleave_mask(numa_get_interleave_mask_func_t func) { _numa_get_interleave_mask = func; }
   static void set_numa_move_pages(numa_move_pages_func_t func) { _numa_move_pages = func; }
+  static void set_numa_set_preferred(numa_set_preferred_func_t func) { _numa_set_preferred = func; }
   static void set_numa_all_nodes(unsigned long* ptr) { _numa_all_nodes = ptr; }
   static void set_numa_all_nodes_ptr(struct bitmask **ptr) { _numa_all_nodes_ptr = (ptr == NULL ? NULL : *ptr); }
   static void set_numa_nodes_ptr(struct bitmask **ptr) { _numa_nodes_ptr = (ptr == NULL ? NULL : *ptr); }
@@ -315,6 +317,11 @@
       _numa_interleave_memory(start, size, _numa_all_nodes);
     }
   }
+  static void numa_set_preferred(int node) {
+    if (_numa_set_preferred != NULL) {
+      _numa_set_preferred(node);
+    }
+  }
   static void numa_set_bind_policy(int policy) {
     if (_numa_set_bind_policy != NULL) {
       _numa_set_bind_policy(policy);
@@ -392,6 +399,10 @@
       return false;
     }
   }
+
+  static const GrowableArray<int>* numa_nindex_to_node() {
+    return _nindex_to_node;
+  }
 };
 
 #endif // OS_LINUX_OS_LINUX_HPP
--- a/src/hotspot/os/linux/os_perf_linux.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os/linux/os_perf_linux.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2012, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2012, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -431,19 +431,21 @@
 
 static int perf_context_switch_rate(double* rate) {
   static pthread_mutex_t contextSwitchLock = PTHREAD_MUTEX_INITIALIZER;
-  static uint64_t      lastTime;
+  static uint64_t      bootTime;
+  static uint64_t      lastTimeNanos;
   static uint64_t      lastSwitches;
   static double        lastRate;
 
-  uint64_t lt = 0;
+  uint64_t bt = 0;
   int res = 0;
 
-  if (lastTime == 0) {
+  // First time through bootTime will be zero.
+  if (bootTime == 0) {
     uint64_t tmp;
     if (get_boot_time(&tmp) < 0) {
       return OS_ERR;
     }
-    lt = tmp * 1000;
+    bt = tmp * 1000;
   }
 
   res = OS_OK;
@@ -454,20 +456,29 @@
     uint64_t sw;
     s8 t, d;
 
-    if (lastTime == 0) {
-      lastTime = lt;
+    if (bootTime == 0) {
+      // First interval is measured from boot time which is
+      // seconds since the epoch. Thereafter we measure the
+      // elapsed time using javaTimeNanos, as it is monotonically
+      // non-decreasing.
+      lastTimeNanos = os::javaTimeNanos();
+      t = os::javaTimeMillis();
+      d = t - bt;
+      // keep bootTime zero for now to use as a first-time-through flag
+    } else {
+      t = os::javaTimeNanos();
+      d = nanos_to_millis(t - lastTimeNanos);
     }
 
-    t = os::javaTimeMillis();
-    d = t - lastTime;
-
     if (d == 0) {
       *rate = lastRate;
-    } else if (!get_noof_context_switches(&sw)) {
+    } else if (get_noof_context_switches(&sw) == 0) {
       *rate      = ( (double)(sw - lastSwitches) / d ) * 1000;
       lastRate     = *rate;
       lastSwitches = sw;
-      lastTime     = t;
+      if (bootTime != 0) {
+        lastTimeNanos = t;
+      }
     } else {
       *rate = 0;
       res   = OS_ERR;
@@ -476,6 +487,10 @@
       *rate = 0;
       lastRate = 0;
     }
+
+    if (bootTime == 0) {
+      bootTime = bt;
+    }
   }
   pthread_mutex_unlock(&contextSwitchLock);
 
--- a/src/hotspot/os/posix/os_posix.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os/posix/os_posix.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 1999, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1999, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -681,7 +681,7 @@
 
 void os::naked_short_sleep(jlong ms) {
   assert(ms < MILLIUNITS, "Un-interruptable sleep, short time use only");
-  os::naked_short_nanosleep(ms * (NANOUNITS / MILLIUNITS));
+  os::naked_short_nanosleep(millis_to_nanos(ms));
   return;
 }
 
@@ -1833,18 +1833,18 @@
     abstime->tv_nsec = 0;
   } else {
     abstime->tv_sec = seconds;
-    abstime->tv_nsec = millis * (NANOUNITS / MILLIUNITS);
+    abstime->tv_nsec = millis_to_nanos(millis);
   }
 }
 
-static jlong millis_to_nanos(jlong millis) {
+static jlong millis_to_nanos_bounded(jlong millis) {
   // We have to watch for overflow when converting millis to nanos,
   // but if millis is that large then we will end up limiting to
   // MAX_SECS anyway, so just do that here.
   if (millis / MILLIUNITS > MAX_SECS) {
     millis = jlong(MAX_SECS) * MILLIUNITS;
   }
-  return millis * (NANOUNITS / MILLIUNITS);
+  return millis_to_nanos(millis);
 }
 
 static void to_abstime(timespec* abstime, jlong timeout,
@@ -1897,7 +1897,7 @@
 // Create an absolute time 'millis' milliseconds in the future, using the
 // real-time (time-of-day) clock. Used by PosixSemaphore.
 void os::Posix::to_RTC_abstime(timespec* abstime, int64_t millis) {
-  to_abstime(abstime, millis_to_nanos(millis),
+  to_abstime(abstime, millis_to_nanos_bounded(millis),
              false /* not absolute */,
              true  /* use real-time clock */);
 }
@@ -1992,7 +1992,7 @@
 
   if (v == 0) { // Do this the hard way by blocking ...
     struct timespec abst;
-    to_abstime(&abst, millis_to_nanos(millis), false, false);
+    to_abstime(&abst, millis_to_nanos_bounded(millis), false, false);
 
     int ret = OS_TIMEOUT;
     int status = pthread_mutex_lock(_mutex);
@@ -2318,7 +2318,7 @@
     if (millis / MILLIUNITS > MAX_SECS) {
       millis = jlong(MAX_SECS) * MILLIUNITS;
     }
-    to_abstime(&abst, millis * (NANOUNITS / MILLIUNITS), false, false);
+    to_abstime(&abst, millis_to_nanos(millis), false, false);
 
     int ret = OS_TIMEOUT;
     int status = pthread_cond_timedwait(cond(), mutex(), &abst);
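The rename separates the plain conversion (millis_to_nanos(), now a shared
helper) from the clamped variant used when building absolute deadlines. The
clamp exists because the multiply can overflow: a jlong holds roughly 9.2e18
nanoseconds, so anything above about 9.2e12 milliseconds must be limited first.
A sketch, with MAX_SECS and the unit constants assumed from this file:

    // Clamp before scaling so millis * (NANOUNITS / MILLIUNITS) cannot
    // overflow jlong.
    jlong safe_millis = MIN2(millis, jlong(MAX_SECS) * MILLIUNITS);
    jlong nanos = safe_millis * (NANOUNITS / MILLIUNITS);
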
--- a/src/hotspot/os/windows/os_perf_windows.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os/windows/os_perf_windows.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2012, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2012, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -97,7 +97,7 @@
 */
 typedef struct {
   HQUERY query;
-  s8     lastUpdate; // Last time query was updated (current millis).
+  s8     lastUpdate; // Last time query was updated.
 } UpdateQueryS, *UpdateQueryP;
 
 
@@ -287,8 +287,8 @@
 
 static int collect_query_data(UpdateQueryP update_query) {
   assert(update_query != NULL, "invariant");
-  const s8 now = os::javaTimeMillis();
-  if (now - update_query->lastUpdate > min_update_interval_millis) {
+  const s8 now = os::javaTimeNanos();
+  if (nanos_to_millis(now - update_query->lastUpdate) > min_update_interval_millis) {
     if (PdhDll::PdhCollectQueryData(update_query->query) != ERROR_SUCCESS) {
       return OS_ERR;
     }
--- a/src/hotspot/os_cpu/aix_ppc/atomic_aix_ppc.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/aix_ppc/atomic_aix_ppc.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -93,11 +93,14 @@
 
 
 template<size_t byte_size>
-struct Atomic::PlatformAdd
-  : Atomic::AddAndFetch<Atomic::PlatformAdd<byte_size> >
-{
+struct Atomic::PlatformAdd {
   template<typename D, typename I>
   D add_and_fetch(D volatile* dest, I add_value, atomic_memory_order order) const;
+
+  template<typename D, typename I>
+  D fetch_and_add(D volatile* dest, I add_value, atomic_memory_order order) const {
+    return add_and_fetch(dest, add_value, order) - add_value;
+  }
 };
 
 template<>
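The same reshaping repeats for every platform below: each port supplies
whichever primitive its hardware or compiler gives directly (add_and_fetch or
fetch_and_add) and derives the other by adding or subtracting the addend. A
self-contained demonstration of the identity (standard C++, not HotSpot code):

    #include <atomic>
    #include <cassert>

    template<typename T>
    T add_and_fetch(std::atomic<T>& dest, T v) { return dest.fetch_add(v) + v; }

    template<typename T>
    T fetch_and_add(std::atomic<T>& dest, T v) { return add_and_fetch(dest, v) - v; }

    int main() {
      std::atomic<int> x{10};
      assert(fetch_and_add(x, 5) == 10);  // returns the old value
      assert(add_and_fetch(x, 5) == 20);  // returns the new value
      assert(x.load() == 20);
      return 0;
    }
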
--- a/src/hotspot/os_cpu/bsd_x86/atomic_bsd_x86.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/bsd_x86/atomic_bsd_x86.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -28,11 +28,14 @@
 // Implementation of class atomic
 
 template<size_t byte_size>
-struct Atomic::PlatformAdd
-  : Atomic::FetchAndAdd<Atomic::PlatformAdd<byte_size> >
-{
+struct Atomic::PlatformAdd {
   template<typename D, typename I>
   D fetch_and_add(D volatile* dest, I add_value, atomic_memory_order /* order */) const;
+
+  template<typename D, typename I>
+  D add_and_fetch(D volatile* dest, I add_value, atomic_memory_order order) const {
+    return fetch_and_add(dest, add_value, order) + add_value;
+  }
 };
 
 template<>
--- a/src/hotspot/os_cpu/bsd_zero/atomic_bsd_zero.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/bsd_zero/atomic_bsd_zero.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -160,11 +160,14 @@
 #endif // ARM
 
 template<size_t byte_size>
-struct Atomic::PlatformAdd
-  : Atomic::AddAndFetch<Atomic::PlatformAdd<byte_size> >
-{
+struct Atomic::PlatformAdd {
   template<typename D, typename I>
   D add_and_fetch(D volatile* dest, I add_value, atomic_memory_order order) const;
+
+  template<typename D, typename I>
+  D fetch_and_add(D volatile* dest, I add_value, atomic_memory_order order) const {
+    return add_and_fetch(dest, add_value, order) - add_value;
+  }
 };
 
 template<>
@@ -186,7 +189,7 @@
 }
 
 template<>
-template<typename D, typename I>
+template<typename D, typename I>
 inline D Atomic::PlatformAdd<8>::add_and_fetch(D volatile* dest, I add_value,
                                                atomic_memory_order order) const {
   STATIC_ASSERT(8 == sizeof(I));
--- a/src/hotspot/os_cpu/bsd_zero/os_bsd_zero.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/bsd_zero/os_bsd_zero.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -366,42 +366,42 @@
     return 1;
   }
 
-  void _Copy_conjoint_jshorts_atomic(jshort* from, jshort* to, size_t count) {
+  void _Copy_conjoint_jshorts_atomic(const jshort* from, jshort* to, size_t count) {
     if (from > to) {
-      jshort *end = from + count;
+      const jshort *end = from + count;
       while (from < end)
         *(to++) = *(from++);
     }
     else if (from < to) {
-      jshort *end = from;
+      const jshort *end = from;
       from += count - 1;
       to   += count - 1;
       while (from >= end)
         *(to--) = *(from--);
     }
   }
-  void _Copy_conjoint_jints_atomic(jint* from, jint* to, size_t count) {
+  void _Copy_conjoint_jints_atomic(const jint* from, jint* to, size_t count) {
     if (from > to) {
-      jint *end = from + count;
+      const jint *end = from + count;
       while (from < end)
         *(to++) = *(from++);
     }
     else if (from < to) {
-      jint *end = from;
+      const jint *end = from;
       from += count - 1;
       to   += count - 1;
       while (from >= end)
         *(to--) = *(from--);
     }
   }
-  void _Copy_conjoint_jlongs_atomic(jlong* from, jlong* to, size_t count) {
+  void _Copy_conjoint_jlongs_atomic(const jlong* from, jlong* to, size_t count) {
     if (from > to) {
-      jlong *end = from + count;
+      const jlong *end = from + count;
       while (from < end)
         os::atomic_copy64(from++, to++);
     }
     else if (from < to) {
-      jlong *end = from;
+      const jlong *end = from;
       from += count - 1;
       to   += count - 1;
       while (from >= end)
@@ -409,22 +409,22 @@
     }
   }
 
-  void _Copy_arrayof_conjoint_bytes(HeapWord* from,
+  void _Copy_arrayof_conjoint_bytes(const HeapWord* from,
                                     HeapWord* to,
                                     size_t    count) {
     memmove(to, from, count);
   }
-  void _Copy_arrayof_conjoint_jshorts(HeapWord* from,
+  void _Copy_arrayof_conjoint_jshorts(const HeapWord* from,
                                       HeapWord* to,
                                       size_t    count) {
     memmove(to, from, count * 2);
   }
-  void _Copy_arrayof_conjoint_jints(HeapWord* from,
+  void _Copy_arrayof_conjoint_jints(const HeapWord* from,
                                     HeapWord* to,
                                     size_t    count) {
     memmove(to, from, count * 4);
   }
-  void _Copy_arrayof_conjoint_jlongs(HeapWord* from,
+  void _Copy_arrayof_conjoint_jlongs(const HeapWord* from,
                                      HeapWord* to,
                                      size_t    count) {
     memmove(to, from, count * 8);
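The const-qualified copy helpers keep the overlap-safe direction rule: copy
forward when the source lies above the destination, backward when it lies
below, so overlapping ranges never clobber unread elements. A minimal sketch of
the rule (element type chosen for illustration):

    #include <cstddef>

    void conjoint_copy(const int* from, int* to, size_t count) {
      if (from > to) {
        for (size_t i = 0; i < count; i++) to[i] = from[i];   // forward
      } else if (from < to) {
        for (size_t i = count; i-- > 0; ) to[i] = from[i];    // backward
      }
    }
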
--- a/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -33,15 +33,18 @@
 // See https://patchwork.kernel.org/patch/3575821/
 
 template<size_t byte_size>
-struct Atomic::PlatformAdd
-  : Atomic::AddAndFetch<Atomic::PlatformAdd<byte_size> >
-{
+struct Atomic::PlatformAdd {
   template<typename D, typename I>
   D add_and_fetch(D volatile* dest, I add_value, atomic_memory_order order) const {
     D res = __atomic_add_fetch(dest, add_value, __ATOMIC_RELEASE);
     FULL_MEM_BARRIER;
     return res;
   }
+
+  template<typename D, typename I>
+  D fetch_and_add(D volatile* dest, I add_value, atomic_memory_order order) const {
+    return add_and_fetch(dest, add_value, order) - add_value;
+  }
 };
 
 template<size_t byte_size>
--- a/src/hotspot/os_cpu/linux_arm/atomic_linux_arm.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/linux_arm/atomic_linux_arm.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -67,11 +67,14 @@
 // For ARMv7 we add explicit barriers in the stubs.
 
 template<size_t byte_size>
-struct Atomic::PlatformAdd
-  : Atomic::AddAndFetch<Atomic::PlatformAdd<byte_size> >
-{
+struct Atomic::PlatformAdd {
   template<typename D, typename I>
   D add_and_fetch(D volatile* dest, I add_value, atomic_memory_order order) const;
+
+  template<typename D, typename I>
+  D fetch_and_add(D volatile* dest, I add_value, atomic_memory_order order) const {
+    return add_and_fetch(dest, add_value, order) - add_value;
+  }
 };
 
 template<>
--- a/src/hotspot/os_cpu/linux_ppc/atomic_linux_ppc.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/linux_ppc/atomic_linux_ppc.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -93,11 +93,14 @@
 
 
 template<size_t byte_size>
-struct Atomic::PlatformAdd
-  : Atomic::AddAndFetch<Atomic::PlatformAdd<byte_size> >
-{
+struct Atomic::PlatformAdd {
   template<typename D, typename I>
   D add_and_fetch(D volatile* dest, I add_value, atomic_memory_order order) const;
+
+  template<typename D, typename I>
+  D fetch_and_add(D volatile* dest, I add_value, atomic_memory_order order) const {
+    return add_and_fetch(dest, add_value, order) - add_value;
+  }
 };
 
 template<>
--- a/src/hotspot/os_cpu/linux_s390/atomic_linux_s390.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/linux_s390/atomic_linux_s390.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -75,11 +75,14 @@
 }
 
 template<size_t byte_size>
-struct Atomic::PlatformAdd
-  : Atomic::AddAndFetch<Atomic::PlatformAdd<byte_size> >
-{
+struct Atomic::PlatformAdd {
   template<typename D, typename I>
   D add_and_fetch(D volatile* dest, I add_value, atomic_memory_order order) const;
+
+  template<typename D, typename I>
+  D fetch_and_add(D volatile* dest, I add_value, atomic_memory_order order) const {
+    return add_and_fetch(dest, add_value, order) - add_value;
+  }
 };
 
 template<>
--- a/src/hotspot/os_cpu/linux_sparc/atomic_linux_sparc.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/linux_sparc/atomic_linux_sparc.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -28,11 +28,14 @@
 // Implementation of class atomic
 
 template<size_t byte_size>
-struct Atomic::PlatformAdd
-  : Atomic::AddAndFetch<Atomic::PlatformAdd<byte_size> >
-{
+struct Atomic::PlatformAdd {
   template<typename D, typename I>
   D add_and_fetch(D volatile* dest, I add_value, atomic_memory_order order) const;
+
+  template<typename D, typename I>
+  D fetch_and_add(D volatile* dest, I add_value, atomic_memory_order order) const {
+    return add_and_fetch(dest, add_value, order) - add_value;
+  }
 };
 
 template<>
--- a/src/hotspot/os_cpu/linux_x86/atomic_linux_x86.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/linux_x86/atomic_linux_x86.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -28,11 +28,14 @@
 // Implementation of class atomic
 
 template<size_t byte_size>
-struct Atomic::PlatformAdd
-  : Atomic::FetchAndAdd<Atomic::PlatformAdd<byte_size> >
-{
+struct Atomic::PlatformAdd {
   template<typename D, typename I>
   D fetch_and_add(D volatile* dest, I add_value, atomic_memory_order order) const;
+
+  template<typename D, typename I>
+  D add_and_fetch(D volatile* dest, I add_value, atomic_memory_order order) const {
+    return fetch_and_add(dest, add_value, order) + add_value;
+  }
 };
 
 template<>
--- a/src/hotspot/os_cpu/linux_zero/atomic_linux_zero.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/linux_zero/atomic_linux_zero.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -31,11 +31,14 @@
 // Implementation of class atomic
 
 template<size_t byte_size>
-struct Atomic::PlatformAdd
-  : Atomic::AddAndFetch<Atomic::PlatformAdd<byte_size> >
-{
+struct Atomic::PlatformAdd {
   template<typename D, typename I>
   D add_and_fetch(D volatile* dest, I add_value, atomic_memory_order order) const;
+
+  template<typename D, typename I>
+  D fetch_and_add(D volatile* dest, I add_value, atomic_memory_order order) const {
+    return add_and_fetch(dest, add_value, order) - add_value;
+  }
 };
 
 template<>
--- a/src/hotspot/os_cpu/linux_zero/os_linux_zero.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/linux_zero/os_linux_zero.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -410,42 +410,42 @@
   }
 
 
-  void _Copy_conjoint_jshorts_atomic(jshort* from, jshort* to, size_t count) {
+  void _Copy_conjoint_jshorts_atomic(const jshort* from, jshort* to, size_t count) {
     if (from > to) {
-      jshort *end = from + count;
+      const jshort *end = from + count;
       while (from < end)
         *(to++) = *(from++);
     }
     else if (from < to) {
-      jshort *end = from;
+      const jshort *end = from;
       from += count - 1;
       to   += count - 1;
       while (from >= end)
         *(to--) = *(from--);
     }
   }
-  void _Copy_conjoint_jints_atomic(jint* from, jint* to, size_t count) {
+  void _Copy_conjoint_jints_atomic(const jint* from, jint* to, size_t count) {
     if (from > to) {
-      jint *end = from + count;
+      const jint *end = from + count;
       while (from < end)
         *(to++) = *(from++);
     }
     else if (from < to) {
-      jint *end = from;
+      const jint *end = from;
       from += count - 1;
       to   += count - 1;
       while (from >= end)
         *(to--) = *(from--);
     }
   }
-  void _Copy_conjoint_jlongs_atomic(jlong* from, jlong* to, size_t count) {
+  void _Copy_conjoint_jlongs_atomic(const jlong* from, jlong* to, size_t count) {
     if (from > to) {
-      jlong *end = from + count;
+      const jlong *end = from + count;
       while (from < end)
         os::atomic_copy64(from++, to++);
     }
     else if (from < to) {
-      jlong *end = from;
+      const jlong *end = from;
       from += count - 1;
       to   += count - 1;
       while (from >= end)
@@ -453,22 +453,22 @@
     }
   }
 
-  void _Copy_arrayof_conjoint_bytes(HeapWord* from,
+  void _Copy_arrayof_conjoint_bytes(const HeapWord* from,
                                     HeapWord* to,
                                     size_t    count) {
     memmove(to, from, count);
   }
-  void _Copy_arrayof_conjoint_jshorts(HeapWord* from,
+  void _Copy_arrayof_conjoint_jshorts(const HeapWord* from,
                                       HeapWord* to,
                                       size_t    count) {
     memmove(to, from, count * 2);
   }
-  void _Copy_arrayof_conjoint_jints(HeapWord* from,
+  void _Copy_arrayof_conjoint_jints(const HeapWord* from,
                                     HeapWord* to,
                                     size_t    count) {
     memmove(to, from, count * 4);
   }
-  void _Copy_arrayof_conjoint_jlongs(HeapWord* from,
+  void _Copy_arrayof_conjoint_jlongs(const HeapWord* from,
                                      HeapWord* to,
                                      size_t    count) {
     memmove(to, from, count * 8);
--- a/src/hotspot/os_cpu/solaris_sparc/atomic_solaris_sparc.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/solaris_sparc/atomic_solaris_sparc.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -31,7 +31,7 @@
 template<size_t byte_size>
 struct Atomic::PlatformAdd {
   template<typename D, typename I>
-  inline D operator()(D volatile* dest, I add_value, atomic_memory_order order) const {
+  inline D add_and_fetch(D volatile* dest, I add_value, atomic_memory_order order) const {
     D old_value = *dest;
     while (true) {
       D new_value = old_value + add_value;
@@ -41,6 +41,11 @@
     }
     return old_value + add_value;
   }
+
+  template<typename D, typename I>
+  inline D fetch_and_add(D volatile* dest, I add_value, atomic_memory_order order) const {
+    return add_and_fetch(dest, add_value, order) - add_value;
+  }
 };
 
 template<>
--- a/src/hotspot/os_cpu/solaris_x86/atomic_solaris_x86.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/solaris_x86/atomic_solaris_x86.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -41,11 +41,14 @@
 }
 
 template<size_t byte_size>
-struct Atomic::PlatformAdd
-  : Atomic::AddAndFetch<Atomic::PlatformAdd<byte_size> >
-{
+struct Atomic::PlatformAdd {
   template<typename D, typename I>
   D add_and_fetch(D volatile* dest, I add_value, atomic_memory_order order) const;
+
+  template<typename D, typename I>
+  D fetch_and_add(D volatile* dest, I add_value, atomic_memory_order order) const {
+    return add_and_fetch(dest, add_value, order) - add_value;
+  }
 };
 
 // Not using add_using_helper; see comment for cmpxchg.
--- a/src/hotspot/os_cpu/windows_x86/atomic_windows_x86.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/os_cpu/windows_x86/atomic_windows_x86.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -54,11 +54,14 @@
 #pragma warning(disable: 4035) // Disables warnings reporting missing return statement
 
 template<size_t byte_size>
-struct Atomic::PlatformAdd
-  : Atomic::AddAndFetch<Atomic::PlatformAdd<byte_size> >
-{
+struct Atomic::PlatformAdd {
   template<typename D, typename I>
   D add_and_fetch(D volatile* dest, I add_value, atomic_memory_order order) const;
+
+  template<typename D, typename I>
+  D fetch_and_add(D volatile* dest, I add_value, atomic_memory_order order) const {
+    return add_and_fetch(dest, add_value, order) - add_value;
+  }
 };
 
 #ifdef AMD64
--- a/src/hotspot/share/aot/aotCodeHeap.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/aot/aotCodeHeap.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -463,6 +463,7 @@
     SET_AOT_GLOBAL_SYMBOL_VALUE("_resolve_virtual_entry", address, SharedRuntime::get_resolve_virtual_call_stub());
     SET_AOT_GLOBAL_SYMBOL_VALUE("_resolve_opt_virtual_entry", address, SharedRuntime::get_resolve_opt_virtual_call_stub());
     SET_AOT_GLOBAL_SYMBOL_VALUE("_aot_deopt_blob_unpack", address, SharedRuntime::deopt_blob()->unpack());
+    SET_AOT_GLOBAL_SYMBOL_VALUE("_aot_deopt_blob_unpack_with_exception_in_tls", address, SharedRuntime::deopt_blob()->unpack_with_exception_in_tls());
     SET_AOT_GLOBAL_SYMBOL_VALUE("_aot_deopt_blob_uncommon_trap", address, SharedRuntime::deopt_blob()->uncommon_trap());
     SET_AOT_GLOBAL_SYMBOL_VALUE("_aot_ic_miss_stub", address, SharedRuntime::get_ic_miss_stub());
     SET_AOT_GLOBAL_SYMBOL_VALUE("_aot_handle_wrong_method_stub", address, SharedRuntime::get_handle_wrong_method_stub());
--- a/src/hotspot/share/c1/c1_CodeStubs.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/c1/c1_CodeStubs.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -123,6 +123,7 @@
  public:
   ConversionStub(Bytecodes::Code bytecode, LIR_Opr input, LIR_Opr result)
     : _bytecode(bytecode), _input(input), _result(result) {
+    NOT_IA32( ShouldNotReachHere(); ) // used only on x86-32
   }
 
   Bytecodes::Code bytecode() { return _bytecode; }
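
[Note] The NOT_IA32 guard turns construction of a ConversionStub into a hard failure on every platform except 32-bit x86, where these stubs are actually emitted. A simplified sketch of the macro convention (the real definitions live in utilities/macros.hpp):

    // Simplified sketch of the HotSpot platform-macro convention:
    #ifdef IA32
    #define IA32_ONLY(code) code
    #define NOT_IA32(code)
    #else
    #define IA32_ONLY(code)
    #define NOT_IA32(code) code
    #endif

    // On non-IA32 builds the added line therefore expands to
    //   ShouldNotReachHere();
    // and on IA32 builds it expands to nothing.
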
--- a/src/hotspot/share/c1/c1_LIR.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/c1/c1_LIR.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -424,8 +424,6 @@
     case lir_backwardbranch_target:    // result and info always invalid
     case lir_build_frame:              // result and info always invalid
     case lir_fpop_raw:                 // result and info always invalid
-    case lir_24bit_FPU:                // result and info always invalid
-    case lir_reset_FPU:                // result and info always invalid
     case lir_breakpoint:               // result and info always invalid
     case lir_membar:                   // result and info always invalid
     case lir_membar_acquire:           // result and info always invalid
@@ -467,7 +465,6 @@
 // LIR_Op1
     case lir_fxch:           // input always valid, result and info always invalid
     case lir_fld:            // input always valid, result and info always invalid
-    case lir_ffree:          // input always valid, result and info always invalid
     case lir_push:           // input always valid, result and info always invalid
     case lir_pop:            // input always valid, result and info always invalid
     case lir_return:         // input always valid, result and info always invalid
@@ -1649,14 +1646,11 @@
      case lir_osr_entry:             s = "osr_entry";     break;
      case lir_build_frame:           s = "build_frm";     break;
      case lir_fpop_raw:              s = "fpop_raw";      break;
-     case lir_24bit_FPU:             s = "24bit_FPU";     break;
-     case lir_reset_FPU:             s = "reset_FPU";     break;
      case lir_breakpoint:            s = "breakpoint";    break;
      case lir_get_thread:            s = "get_thread";    break;
      // LIR_Op1
      case lir_fxch:                  s = "fxch";          break;
      case lir_fld:                   s = "fld";           break;
-     case lir_ffree:                 s = "ffree";         break;
      case lir_push:                  s = "push";          break;
      case lir_pop:                   s = "pop";           break;
      case lir_null_check:            s = "null_check";    break;
--- a/src/hotspot/share/c1/c1_LIR.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/c1/c1_LIR.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -888,8 +888,6 @@
       , lir_osr_entry
       , lir_build_frame
       , lir_fpop_raw
-      , lir_24bit_FPU
-      , lir_reset_FPU
       , lir_breakpoint
       , lir_rtcall
       , lir_membar
@@ -905,7 +903,6 @@
   , begin_op1
       , lir_fxch
       , lir_fld
-      , lir_ffree
       , lir_push
       , lir_pop
       , lir_null_check
@@ -2232,8 +2229,6 @@
   void unlock_object(LIR_Opr hdr, LIR_Opr obj, LIR_Opr lock, LIR_Opr scratch, CodeStub* stub);
   void lock_object(LIR_Opr hdr, LIR_Opr obj, LIR_Opr lock, LIR_Opr scratch, CodeStub* stub, CodeEmitInfo* info);
 
-  void set_24bit_fpu()                                               { append(new LIR_Op0(lir_24bit_FPU )); }
-  void restore_fpu()                                                 { append(new LIR_Op0(lir_reset_FPU )); }
   void breakpoint()                                                  { append(new LIR_Op0(lir_breakpoint)); }
 
   void arraycopy(LIR_Opr src, LIR_Opr src_pos, LIR_Opr dst, LIR_Opr dst_pos, LIR_Opr length, LIR_Opr tmp, ciArrayKlass* expected_type, int flags, CodeEmitInfo* info) { append(new LIR_OpArrayCopy(src, src_pos, dst, dst_pos, length, tmp, expected_type, flags, info)); }
--- a/src/hotspot/share/c1/c1_LIRAssembler.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/c1/c1_LIRAssembler.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -481,7 +481,7 @@
     compilation()->set_has_method_handle_invokes(true);
   }
 
-#if defined(X86) && defined(TIERED)
+#if defined(IA32) && defined(TIERED)
   // C2 leave fpu stack dirty clean it
   if (UseSSE < 2) {
     int i;
@@ -532,6 +532,7 @@
       safepoint_poll(op->in_opr(), op->info());
       break;
 
+#ifdef IA32
     case lir_fxch:
       fxch(op->in_opr()->as_jint());
       break;
@@ -539,10 +540,7 @@
     case lir_fld:
       fld(op->in_opr()->as_jint());
       break;
-
-    case lir_ffree:
-      ffree(op->in_opr()->as_jint());
-      break;
+#endif // IA32
 
     case lir_branch:
       break;
@@ -636,22 +634,16 @@
       osr_entry();
       break;
 
-    case lir_24bit_FPU:
-      set_24bit_FPU();
+#ifdef IA32
+    case lir_fpop_raw:
+      fpop();
       break;
-
-    case lir_reset_FPU:
-      reset_FPU();
-      break;
+#endif // IA32
 
     case lir_breakpoint:
       breakpoint();
       break;
 
-    case lir_fpop_raw:
-      fpop();
-      break;
-
     case lir_membar:
       membar();
       break;
--- a/src/hotspot/share/c1/c1_LIRAssembler.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/c1/c1_LIRAssembler.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -105,13 +105,6 @@
   ImplicitNullCheckStub* add_debug_info_for_null_check(int pc_offset, CodeEmitInfo* cinfo);
   ImplicitNullCheckStub* add_debug_info_for_null_check_here(CodeEmitInfo* info);
 
-  void set_24bit_FPU();
-  void reset_FPU();
-  void fpop();
-  void fxch(int i);
-  void fld(int i);
-  void ffree(int i);
-
   void breakpoint();
   void push(LIR_Opr opr);
   void pop(LIR_Opr opr);
--- a/src/hotspot/share/c1/c1_LinearScan.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/c1/c1_LinearScan.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -90,7 +90,7 @@
  , _has_call(0)
  , _interval_in_loop(0)  // initialized later with correct length
  , _scope_value_cache(0) // initialized later with correct length
-#ifdef X86
+#ifdef IA32
  , _fpu_stack_allocator(NULL)
 #endif
 {
@@ -2653,13 +2653,15 @@
 #endif
 
   } else if (opr->is_single_fpu()) {
-#ifdef X86
+#ifdef IA32
     // the exact location of fpu stack values is only known
     // during fpu stack allocation, so the stack allocator object
     // must be present
     assert(use_fpu_stack_allocation(), "should not have float stack values without fpu stack allocation (all floats must be SSE2)");
     assert(_fpu_stack_allocator != NULL, "must be present");
     opr = _fpu_stack_allocator->to_fpu_stack(opr);
+#elif defined(AMD64)
+    assert(false, "FPU not used on x86-64");
 #endif
 
     Location::Type loc_type = float_saved_as_double ? Location::float_in_dbl : Location::normal;
@@ -2764,7 +2766,7 @@
       // name for the other half.  *first and *second must represent the
       // least and most significant words, respectively.
 
-#ifdef X86
+#ifdef IA32
       // the exact location of fpu stack values is only known
       // during fpu stack allocation, so the stack allocator object
       // must be present
@@ -2774,6 +2776,9 @@
 
       assert(opr->fpu_regnrLo() == opr->fpu_regnrHi(), "assumed in calculation (only fpu_regnrLo is used)");
 #endif
+#ifdef AMD64
+      assert(false, "FPU not used on x86-64");
+#endif
 #ifdef SPARC
       assert(opr->fpu_regnrLo() == opr->fpu_regnrHi() + 1, "assumed in calculation (only fpu_regnrHi is used)");
 #endif
--- a/src/hotspot/share/c1/c1_LinearScan.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/c1/c1_LinearScan.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -177,7 +177,7 @@
   bool          is_interval_in_loop(int interval, int loop) const { return _interval_in_loop.at(interval, loop); }
 
   // handling of fpu stack allocation (platform dependent, needed for debug information generation)
-#ifdef X86
+#ifdef IA32
   FpuStackAllocator* _fpu_stack_allocator;
   bool use_fpu_stack_allocation() const          { return UseSSE < 2 && has_fpu_registers(); }
 #else
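
[Note] The repeated X86 to IA32 narrowing in the C1 hunks above hinges on the platform-macro taxonomy: X86 covers both word sizes, so guarding x87 FPU-stack code with it wrongly compiled that code into 64-bit builds, where SSE is always available. The macro layout assumed by these hunks:

    // Platform macros as assumed here:
    //   X86   - defined for all x86 builds (32-bit and 64-bit)
    //   IA32  - defined only for 32-bit x86
    //   AMD64 - defined only for 64-bit x86
    //
    // The x87 FPU stack allocator is needed only when UseSSE < 2, which can
    // happen only on IA32; AMD64 paths now assert that it is never reached.
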
--- a/src/hotspot/share/c1/c1_ValueMap.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/c1/c1_ValueMap.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 1999, 2017, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1999, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -265,8 +265,8 @@
   GlobalValueNumbering* _gvn;
   BlockList             _loop_blocks;
   bool                  _too_complicated_loop;
-  bool                  _has_field_store[T_ARRAY + 1];
-  bool                  _has_indexed_store[T_ARRAY + 1];
+  bool                  _has_field_store[T_VOID];
+  bool                  _has_indexed_store[T_VOID];
 
   // simplified access to methods of GlobalValueNumbering
   ValueMap* current_map()                        { return _gvn->current_map(); }
@@ -276,12 +276,12 @@
   void      kill_memory()                                 { _too_complicated_loop = true; }
   void      kill_field(ciField* field, bool all_offsets)  {
     current_map()->kill_field(field, all_offsets);
-    assert(field->type()->basic_type() >= 0 && field->type()->basic_type() <= T_ARRAY, "Invalid type");
+    assert(field->type()->basic_type() >= 0 && field->type()->basic_type() < T_VOID, "Invalid type");
     _has_field_store[field->type()->basic_type()] = true;
   }
   void      kill_array(ValueType* type)                   {
     current_map()->kill_array(type);
-    BasicType basic_type = as_BasicType(type); assert(basic_type >= 0 && basic_type <= T_ARRAY, "Invalid type");
+    BasicType basic_type = as_BasicType(type); assert(basic_type >= 0 && basic_type < T_VOID, "Invalid type");
     _has_indexed_store[basic_type] = true;
   }
 
@@ -291,19 +291,19 @@
     , _loop_blocks(ValueMapMaxLoopSize)
     , _too_complicated_loop(false)
   {
-    for (int i=0; i<= T_ARRAY; i++){
+    for (int i = 0; i < T_VOID; i++) {
       _has_field_store[i] = false;
       _has_indexed_store[i] = false;
     }
   }
 
   bool has_field_store(BasicType type) {
-    assert(type >= 0 && type <= T_ARRAY, "Invalid type");
+    assert(type >= 0 && type < T_VOID, "Invalid type");
     return _has_field_store[type];
   }
 
   bool has_indexed_store(BasicType type) {
-    assert(type >= 0 && type <= T_ARRAY, "Invalid type");
+    assert(type >= 0 && type < T_VOID, "Invalid type");
     return _has_indexed_store[type];
   }
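
[Note] Sizing the store-tracking arrays with T_VOID instead of T_ARRAY + 1 leans on the BasicType enum order, in which every storable value type precedes T_VOID. A sketch of the assumed layout (abbreviated from utilities/globalDefinitions.hpp; the numeric codes match the JVMS newarray encoding):

    // Assumed BasicType ordering (abbreviated):
    enum BasicType {
      T_BOOLEAN = 4, T_CHAR  = 5, T_FLOAT = 6,  T_DOUBLE = 7,
      T_BYTE    = 8, T_SHORT = 9, T_INT   = 10, T_LONG   = 11,
      T_OBJECT  = 12,
      T_ARRAY   = 13,
      T_VOID    = 14   // first tag that can never be stored to
      // ...
    };

    bool has_field_store[T_VOID];   // valid indices 0 .. T_VOID-1,
                                    // so T_OBJECT and T_ARRAY are covered
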
 
--- a/src/hotspot/share/ci/ciEnv.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/ci/ciEnv.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -413,12 +413,10 @@
 
   // Now we need to check the SystemDictionary
   Symbol* sym = name->get_symbol();
-  if (sym->char_at(0) == JVM_SIGNATURE_CLASS &&
-      sym->char_at(sym->utf8_length()-1) == JVM_SIGNATURE_ENDCLASS) {
+  if (Signature::has_envelope(sym)) {
     // This is a name from a signature.  Strip off the trimmings.
     // Call recursive to keep scope of strippedsym.
-    TempNewSymbol strippedsym = SymbolTable::new_symbol(sym->as_utf8()+1,
-                                                        sym->utf8_length()-2);
+    TempNewSymbol strippedsym = Signature::strip_envelope(sym);
     ciSymbol* strippedname = get_symbol(strippedsym);
     return get_klass_by_name_impl(accessing_klass, cpool, strippedname, require_local);
   }
@@ -466,18 +464,17 @@
   // we must build an array type around it.  The CI requires array klasses
   // to be loaded if their element klasses are loaded, except when memory
   // is exhausted.
-  if (sym->char_at(0) == JVM_SIGNATURE_ARRAY &&
+  if (Signature::is_array(sym) &&
       (sym->char_at(1) == JVM_SIGNATURE_ARRAY || sym->char_at(1) == JVM_SIGNATURE_CLASS)) {
     // We have an unloaded array.
     // Build it on the fly if the element class exists.
-    TempNewSymbol elem_sym = SymbolTable::new_symbol(sym->as_utf8()+1,
-                                                     sym->utf8_length()-1);
-
+    SignatureStream ss(sym, false);
+    ss.skip_array_prefix(1);
     // Get element ciKlass recursively.
     ciKlass* elem_klass =
       get_klass_by_name_impl(accessing_klass,
                              cpool,
-                             get_symbol(elem_sym),
+                             get_symbol(ss.as_symbol()),
                              require_local);
     if (elem_klass != NULL && elem_klass->is_loaded()) {
       // Now make an array for it
@@ -609,7 +606,7 @@
       }
       BasicType bt = T_OBJECT;
       if (cpool->tag_at(index).is_dynamic_constant())
-        bt = FieldType::basic_type(cpool->uncached_signature_ref_at(index));
+        bt = Signature::basic_type(cpool->uncached_signature_ref_at(index));
       if (is_reference_type(bt)) {
       } else {
         // we have to unbox the primitive value
@@ -791,6 +788,8 @@
 ciMethod* ciEnv::get_method_by_index_impl(const constantPoolHandle& cpool,
                                           int index, Bytecodes::Code bc,
                                           ciInstanceKlass* accessor) {
+  assert(cpool.not_null(), "need constant pool");
+  assert(accessor != NULL, "need origin of access");
   if (bc == Bytecodes::_invokedynamic) {
     ConstantPoolCacheEntry* cpce = cpool->invokedynamic_cp_cache_entry_at(index);
     bool is_resolved = !cpce->is_f1_null();
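
[Note] Signature::has_envelope and Signature::strip_envelope replace the hand-rolled char_at checks for the L...; wrapping of a class descriptor. Their assumed semantics, matching the code they replace:

    //   Signature::has_envelope("Ljava/lang/String;")   -> true
    //   Signature::has_envelope("java/lang/String")     -> false
    //   Signature::strip_envelope("Ljava/lang/String;") -> "java/lang/String"
    //
    // strip_envelope drops exactly one leading 'L' and one trailing ';',
    // i.e. it packages the old pattern
    //   SymbolTable::new_symbol(sym->as_utf8() + 1, sym->utf8_length() - 2);
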
--- a/src/hotspot/share/ci/ciField.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/ci/ciField.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 1999, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1999, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -86,7 +86,7 @@
   Symbol* signature = cpool->symbol_at(sig_index);
   _signature = ciEnv::current(THREAD)->get_symbol(signature);
 
-  BasicType field_type = FieldType::basic_type(signature);
+  BasicType field_type = Signature::basic_type(signature);
 
   // If the field is a pointer type, get the klass of the
   // field.
--- a/src/hotspot/share/ci/ciInstanceKlass.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/ci/ciInstanceKlass.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -315,7 +315,7 @@
 // Implementation of the print method.
 void ciInstanceKlass::print_impl(outputStream* st) {
   ciKlass::print_impl(st);
-  GUARDED_VM_ENTRY(st->print(" loader=" INTPTR_FORMAT, p2i((address)loader()));)
+  GUARDED_VM_ENTRY(st->print(" loader=" INTPTR_FORMAT, p2i(loader()));)
   if (is_loaded()) {
     st->print(" loaded=true initialized=%s finalized=%s subklass=%s size=%d flags=",
               bool_to_str(is_initialized()),
--- a/src/hotspot/share/ci/ciKlass.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/ci/ciKlass.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 1999, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1999, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -44,6 +44,7 @@
   friend class ciMethod;
   friend class ciMethodData;
   friend class ciObjArrayKlass;
+  friend class ciSignature;
   friend class ciReceiverTypeData;
 
 private:
--- a/src/hotspot/share/ci/ciObjArrayKlass.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/ci/ciObjArrayKlass.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 1999, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1999, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -108,37 +108,23 @@
                                                 int dimension) {
   EXCEPTION_CONTEXT;
   int element_len = element_name->utf8_length();
+  int buflen = dimension + element_len + 3;  // '['+ + 'L'? + (element) + ';'? + '\0'
+  char* name = CURRENT_THREAD_ENV->name_buffer(buflen);
+  int pos = 0;
+  for ( ; pos < dimension; pos++) {
+    name[pos] = JVM_SIGNATURE_ARRAY;
+  }
+  Symbol* base_name_sym = element_name->get_symbol();
 
-  Symbol* base_name_sym = element_name->get_symbol();
-  char* name;
-
-  if (base_name_sym->char_at(0) == JVM_SIGNATURE_ARRAY ||
-      (base_name_sym->char_at(0) == JVM_SIGNATURE_CLASS &&  // watch package name 'Lxx'
-       base_name_sym->char_at(element_len-1) == JVM_SIGNATURE_ENDCLASS)) {
-
-    int new_len = element_len + dimension + 1; // for the ['s and '\0'
-    name = CURRENT_THREAD_ENV->name_buffer(new_len);
-
-    int pos = 0;
-    for ( ; pos < dimension; pos++) {
-      name[pos] = JVM_SIGNATURE_ARRAY;
-    }
-    strncpy(name+pos, (char*)element_name->base(), element_len);
-    name[new_len-1] = '\0';
+  if (Signature::is_array(base_name_sym) ||
+      Signature::has_envelope(base_name_sym)) {
+    strncpy(&name[pos], (char*)element_name->base(), element_len);
+    name[pos + element_len] = '\0';
   } else {
-    int new_len =   3                       // for L, ;, and '\0'
-                  + dimension               // for ['s
-                  + element_len;
-
-    name = CURRENT_THREAD_ENV->name_buffer(new_len);
-    int pos = 0;
-    for ( ; pos < dimension; pos++) {
-      name[pos] = JVM_SIGNATURE_ARRAY;
-    }
     name[pos++] = JVM_SIGNATURE_CLASS;
-    strncpy(name+pos, (char*)element_name->base(), element_len);
-    name[new_len-2] = JVM_SIGNATURE_ENDCLASS;
-    name[new_len-1] = '\0';
+    strncpy(&name[pos], (char*)element_name->base(), element_len);
+    name[pos + element_len] = JVM_SIGNATURE_ENDCLASS;
+    name[pos + element_len + 1] = '\0';
   }
   return ciSymbol::make(name);
 }
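
[Note] The rewritten construct_array_name always writes the '[' prefix first and appends the L...; envelope only for plain class names. Worked examples for dimension = 2:

    //   element "java/lang/String"   -> "[[Ljava/lang/String;"  (envelope added)
    //   element "Ljava/lang/String;" -> "[[Ljava/lang/String;"  (copied verbatim)
    //   element "[I"                 -> "[[[I"                  (copied verbatim)
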
--- a/src/hotspot/share/ci/ciObjectFactory.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/ci/ciObjectFactory.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 1999, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1999, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -46,7 +46,6 @@
 #include "memory/allocation.inline.hpp"
 #include "memory/universe.hpp"
 #include "oops/oop.inline.hpp"
-#include "runtime/fieldType.hpp"
 #include "runtime/handles.inline.hpp"
 #include "utilities/macros.hpp"
 
@@ -418,6 +417,7 @@
                                                ciSymbol*        name,
                                                ciSymbol*        signature,
                                                ciInstanceKlass* accessor) {
+  assert(accessor != NULL, "need origin of access");
   ciSignature* that = NULL;
   for (int i = 0; i < _unloaded_methods->length(); i++) {
     ciMethod* entry = _unloaded_methods->at(i);
@@ -488,20 +488,14 @@
   // unloaded InstanceKlass.  Deal with both.
   if (name->char_at(0) == JVM_SIGNATURE_ARRAY) {
     // Decompose the name.'
-    FieldArrayInfo fd;
-    BasicType element_type = FieldType::get_array_info(name->get_symbol(),
-                                                       fd, THREAD);
-    if (HAS_PENDING_EXCEPTION) {
-      CLEAR_PENDING_EXCEPTION;
-      CURRENT_THREAD_ENV->record_out_of_memory_failure();
-      return ciEnv::_unloaded_ciobjarrayklass;
-    }
-    int dimension = fd.dimension();
+    SignatureStream ss(name->get_symbol(), false);
+    int dimension = ss.skip_array_prefix();  // skip all '['s
+    BasicType element_type = ss.type();
     assert(element_type != T_ARRAY, "unsuccessful decomposition");
     ciKlass* element_klass = NULL;
     if (element_type == T_OBJECT) {
       ciEnv *env = CURRENT_THREAD_ENV;
-      ciSymbol* ci_name = env->get_symbol(fd.object_key());
+      ciSymbol* ci_name = env->get_symbol(ss.as_symbol());
       element_klass =
         env->get_klass_by_name(accessing_klass, ci_name, false)->as_instance_klass();
     } else {
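
[Note] FieldArrayInfo/FieldType::get_array_info gives way to a SignatureStream walk: skip_array_prefix consumes the leading '['s and returns their count, leaving the stream positioned on the element type. A sketch of the decomposition (that as_symbol yields the bare element name for class entries is an assumption consistent with how the result is consumed above):

    // For name = "[[Ljava/lang/String;":
    SignatureStream ss(name, false);         // false: field, not method, signature
    int dimension = ss.skip_array_prefix();  // consumes "[[", returns 2
    BasicType element_type = ss.type();      // T_OBJECT
    Symbol* elem = ss.as_symbol();           // "java/lang/String"
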
--- a/src/hotspot/share/ci/ciSignature.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/ci/ciSignature.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 1999, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1999, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -40,6 +40,7 @@
 ciSignature::ciSignature(ciKlass* accessing_klass, const constantPoolHandle& cpool, ciSymbol* symbol) {
   ASSERT_IN_VM;
   EXCEPTION_CONTEXT;
+  assert(accessing_klass != NULL, "need origin of access");
   _accessing_klass = accessing_klass;
   _symbol = symbol;
 
@@ -55,11 +56,10 @@
   for (; ; ss.next()) {
     // Process one element of the signature
     ciType* type;
-    if (!ss.is_object()) {
+    if (!ss.is_reference()) {
       type = ciType::make(ss.type());
     } else {
-      Symbol* name = ss.as_symbol();
-      ciSymbol* klass_name = env->get_symbol(name);
+      ciSymbol* klass_name = env->get_symbol(ss.as_symbol());
       type = env->get_klass_by_name_impl(_accessing_klass, cpool, klass_name, false);
     }
     _types->append(type);
--- a/src/hotspot/share/classfile/classFileParser.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/classfile/classFileParser.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -665,7 +665,7 @@
             "Illegal zero length constant pool entry at %d in class %s",
             name_index, CHECK);
 
-          if (sig->char_at(0) == JVM_SIGNATURE_FUNC) {
+          if (Signature::is_method(sig)) {
             // Format check method name and signature
             verify_legal_method_name(name, CHECK);
             verify_legal_method_signature(name, sig, CHECK);
@@ -690,9 +690,8 @@
         const Symbol* const signature = cp->symbol_at(signature_ref_index);
         if (_need_verify) {
           // CONSTANT_Dynamic's name and signature are verified above, when iterating NameAndType_info.
-          // Need only to be sure signature is non-zero length and the right type.
-          if (signature->utf8_length() == 0 ||
-              signature->char_at(0) == JVM_SIGNATURE_FUNC) {
+          // Need only to be sure signature is the right type.
+          if (Signature::is_method(signature)) {
             throwIllegalSignature("CONSTANT_Dynamic", name, signature, CHECK);
           }
         }
@@ -716,8 +715,7 @@
           if (_need_verify) {
             // Field name and signature are verified above, when iterating NameAndType_info.
             // Need only to be sure signature is non-zero length and the right type.
-            if (signature->utf8_length() == 0 ||
-                signature->char_at(0) == JVM_SIGNATURE_FUNC) {
+            if (Signature::is_method(signature)) {
               throwIllegalSignature("Field", name, signature, CHECK);
             }
           }
@@ -725,8 +723,7 @@
           if (_need_verify) {
             // Method name and signature are verified above, when iterating NameAndType_info.
             // Need only to be sure signature is non-zero length and the right type.
-            if (signature->utf8_length() == 0 ||
-                signature->char_at(0) != JVM_SIGNATURE_FUNC) {
+            if (!Signature::is_method(signature)) {
               throwIllegalSignature("Method", name, signature, CHECK);
             }
           }
@@ -1723,7 +1720,7 @@
                         injected[n].signature_index,
                         0);
 
-      const BasicType type = FieldType::basic_type(injected[n].signature());
+      const BasicType type = Signature::basic_type(injected[n].signature());
 
       // Remember how many oops we encountered and compute allocation type
       const FieldAllocationType atype = fac->update(false, type);
@@ -2796,21 +2793,8 @@
   m->set_constants(_cp);
   m->set_name_index(name_index);
   m->set_signature_index(signature_index);
-
-  ResultTypeFinder rtf(cp->symbol_at(signature_index));
-  m->constMethod()->set_result_type(rtf.type());
-
-  if (args_size >= 0) {
-    m->set_size_of_parameters(args_size);
-  } else {
-    m->compute_size_of_parameters(THREAD);
-  }
-#ifdef ASSERT
-  if (args_size >= 0) {
-    m->compute_size_of_parameters(THREAD);
-    assert(args_size == m->size_of_parameters(), "");
-  }
-#endif
+  m->compute_from_signature(cp->symbol_at(signature_index));
+  assert(args_size < 0 || args_size == m->size_of_parameters(), "");
 
   // Fill in code attribute information
   m->set_max_stack(max_stack);
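
[Note] Signature::is_method packages the old first-character test for JVM_SIGNATURE_FUNC ('('). Dropping the utf8_length() == 0 guards is safe because zero-length constant pool entries are already rejected earlier in parsing (see the "Illegal zero length constant pool entry" check above). Assumed behavior of the predicate:

    //   Signature::is_method("(I)V")               -> true   (method descriptor)
    //   Signature::is_method("Ljava/lang/String;") -> false  (field descriptor)
    //   Signature::is_method("I")                  -> false
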
--- a/src/hotspot/share/classfile/classListParser.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/classfile/classListParser.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2015, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2015, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -34,7 +34,6 @@
 #include "logging/logTag.hpp"
 #include "memory/metaspaceShared.hpp"
 #include "memory/resourceArea.hpp"
-#include "runtime/fieldType.hpp"
 #include "runtime/handles.inline.hpp"
 #include "runtime/javaCalls.hpp"
 #include "utilities/defaultStream.hpp"
@@ -338,7 +337,7 @@
       error("If source location is not specified, interface(s) must not be specified");
     }
 
-    bool non_array = !FieldType::is_array(class_name_symbol);
+    bool non_array = !Signature::is_array(class_name_symbol);
 
     JavaValue result(T_OBJECT);
     if (non_array) {
--- a/src/hotspot/share/classfile/defaultMethods.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/classfile/defaultMethods.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2012, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2012, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -901,8 +901,7 @@
   m->set_constants(NULL); // This will get filled in later
   m->set_name_index(cp->utf8(name));
   m->set_signature_index(cp->utf8(sig));
-  ResultTypeFinder rtf(sig);
-  m->constMethod()->set_result_type(rtf.type());
+  m->compute_from_signature(sig);
   m->set_size_of_parameters(params);
   m->set_max_stack(max_stack);
   m->set_max_locals(params);
--- a/src/hotspot/share/classfile/placeholders.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/classfile/placeholders.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2003, 2018, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2003, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -27,7 +27,6 @@
 #include "classfile/placeholders.hpp"
 #include "classfile/systemDictionary.hpp"
 #include "oops/oop.inline.hpp"
-#include "runtime/fieldType.hpp"
 #include "utilities/hashtable.inline.hpp"
 
 // Placeholder methods
--- a/src/hotspot/share/classfile/stackMapTable.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/classfile/stackMapTable.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2003, 2016, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2003, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -28,7 +28,6 @@
 #include "memory/resourceArea.hpp"
 #include "oops/constantPool.hpp"
 #include "oops/oop.inline.hpp"
-#include "runtime/fieldType.hpp"
 #include "runtime/handles.inline.hpp"
 
 StackMapTable::StackMapTable(StackMapReader* reader, StackMapFrame* init_frame,
--- a/src/hotspot/share/classfile/systemDictionary.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/classfile/systemDictionary.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -74,7 +74,6 @@
 #include "prims/methodHandles.hpp"
 #include "runtime/arguments.hpp"
 #include "runtime/biasedLocking.hpp"
-#include "runtime/fieldType.hpp"
 #include "runtime/handles.inline.hpp"
 #include "runtime/java.hpp"
 #include "runtime/javaCalls.hpp"
@@ -240,7 +239,7 @@
 // Forwards to resolve_array_class_or_null or resolve_instance_class_or_null
 
 Klass* SystemDictionary::resolve_or_null(Symbol* class_name, Handle class_loader, Handle protection_domain, TRAPS) {
-  if (FieldType::is_array(class_name)) {
+  if (Signature::is_array(class_name)) {
     return resolve_array_class_or_null(class_name, class_loader, protection_domain, THREAD);
   } else {
     return resolve_instance_class_or_null_helper(class_name, class_loader, protection_domain, THREAD);
@@ -252,8 +251,8 @@
                                                                        Handle class_loader,
                                                                        Handle protection_domain,
                                                                        TRAPS) {
-  assert(class_name != NULL && !FieldType::is_array(class_name), "must be");
-  if (FieldType::is_obj(class_name)) {
+  assert(class_name != NULL && !Signature::is_array(class_name), "must be");
+  if (Signature::has_envelope(class_name)) {
     ResourceMark rm(THREAD);
     // Ignore wrapping L and ;.
     TempNewSymbol name = SymbolTable::new_symbol(class_name->as_C_string() + 1,
@@ -274,24 +273,24 @@
                                                      Handle class_loader,
                                                      Handle protection_domain,
                                                      TRAPS) {
-  assert(FieldType::is_array(class_name), "must be array");
+  assert(Signature::is_array(class_name), "must be array");
+  ResourceMark rm(THREAD);
+  SignatureStream ss(class_name, false);
+  int ndims = ss.skip_array_prefix();  // skip all '['s
   Klass* k = NULL;
-  FieldArrayInfo fd;
-  // dimension and object_key in FieldArrayInfo are assigned as a side-effect
-  // of this call
-  BasicType t = FieldType::get_array_info(class_name, fd, CHECK_NULL);
-  if (t == T_OBJECT) {
-    // naked oop "k" is OK here -- we assign back into it
-    k = SystemDictionary::resolve_instance_class_or_null(fd.object_key(),
+  BasicType t = ss.type();
+  if (ss.has_envelope()) {
+    Symbol* obj_class = ss.as_symbol();
+    k = SystemDictionary::resolve_instance_class_or_null(obj_class,
                                                          class_loader,
                                                          protection_domain,
                                                          CHECK_NULL);
     if (k != NULL) {
-      k = k->array_klass(fd.dimension(), CHECK_NULL);
+      k = k->array_klass(ndims, CHECK_NULL);
     }
   } else {
     k = Universe::typeArrayKlassObj(t);
-    k = TypeArrayKlass::cast(k)->array_klass(fd.dimension(), CHECK_NULL);
+    k = TypeArrayKlass::cast(k)->array_klass(ndims, CHECK_NULL);
   }
   return k;
 }
@@ -342,7 +341,7 @@
                                                        Handle protection_domain,
                                                        bool is_superclass,
                                                        TRAPS) {
-  assert(!FieldType::is_array(super_name), "invalid super class name");
+  assert(!Signature::is_array(super_name), "invalid super class name");
 #if INCLUDE_CDS
   if (DumpSharedSpaces) {
     // Special processing for handling UNREGISTERED shared classes.
@@ -654,8 +653,8 @@
                                                                 Handle class_loader,
                                                                 Handle protection_domain,
                                                                 TRAPS) {
-  assert(name != NULL && !FieldType::is_array(name) &&
-         !FieldType::is_obj(name), "invalid class name");
+  assert(name != NULL && !Signature::is_array(name) &&
+         !Signature::has_envelope(name), "invalid class name");
 
   EventClassLoad class_load_start_event;
 
@@ -960,19 +959,21 @@
   Klass* k = NULL;
   assert(class_name != NULL, "class name must be non NULL");
 
-  if (FieldType::is_array(class_name)) {
+  if (Signature::is_array(class_name)) {
     // The name refers to an array.  Parse the name.
     // dimension and object_key in FieldArrayInfo are assigned as a
     // side-effect of this call
-    FieldArrayInfo fd;
-    BasicType t = FieldType::get_array_info(class_name, fd, CHECK_(NULL));
+    SignatureStream ss(class_name, false);
+    int ndims = ss.skip_array_prefix();  // skip all '['s
+    BasicType t = ss.type();
     if (t != T_OBJECT) {
       k = Universe::typeArrayKlassObj(t);
     } else {
-      k = SystemDictionary::find(fd.object_key(), class_loader, protection_domain, THREAD);
+      Symbol* obj_class = ss.as_symbol();
+      k = SystemDictionary::find(obj_class, class_loader, protection_domain, THREAD);
     }
     if (k != NULL) {
-      k = k->array_klass_or_null(fd.dimension());
+      k = k->array_klass_or_null(ndims);
     }
   } else {
     k = find(class_name, class_loader, protection_domain, THREAD);
@@ -2167,20 +2168,21 @@
   // Now look to see if it has been loaded elsewhere, and is subject to
   // a loader constraint that would require this loader to return the
   // klass that is already loaded.
-  if (FieldType::is_array(class_name)) {
+  if (Signature::is_array(class_name)) {
     // For array classes, their Klass*s are not kept in the
     // constraint table. The element Klass*s are.
-    FieldArrayInfo fd;
-    BasicType t = FieldType::get_array_info(class_name, fd, CHECK_(NULL));
+    SignatureStream ss(class_name, false);
+    int ndims = ss.skip_array_prefix();  // skip all '['s
+    BasicType t = ss.type();
     if (t != T_OBJECT) {
       klass = Universe::typeArrayKlassObj(t);
     } else {
       MutexLocker mu(THREAD, SystemDictionary_lock);
-      klass = constraints()->find_constrained_klass(fd.object_key(), class_loader);
+      klass = constraints()->find_constrained_klass(ss.as_symbol(), class_loader);
     }
     // If element class already loaded, allocate array klass
     if (klass != NULL) {
-      klass = klass->array_klass_or_null(fd.dimension());
+      klass = klass->array_klass_or_null(ndims);
     }
   } else {
     MutexLocker mu(THREAD, SystemDictionary_lock);
@@ -2200,21 +2202,22 @@
   ClassLoaderData* loader_data2 = class_loader_data(class_loader2);
 
   Symbol* constraint_name = NULL;
-  // Needs to be in same scope as constraint_name in case a Symbol is created and
-  // assigned to constraint_name.
-  FieldArrayInfo fd;
-  if (!FieldType::is_array(class_name)) {
+
+  if (!Signature::is_array(class_name)) {
     constraint_name = class_name;
   } else {
     // For array classes, their Klass*s are not kept in the
     // constraint table. The element classes are.
-    BasicType t = FieldType::get_array_info(class_name, fd, CHECK_(false));
-    // primitive types always pass
-    if (t != T_OBJECT) {
-      return true;
-    } else {
-      constraint_name = fd.object_key();
+    SignatureStream ss(class_name, false);
+    ss.skip_array_prefix();  // skip all '['s
+    if (!ss.has_envelope()) {
+      return true;     // primitive types always pass
     }
+    constraint_name = ss.as_symbol();
+    // Increment refcount to keep constraint_name alive after
+    // SignatureStream is destructed. It will be decremented below
+    // before returning.
+    constraint_name->increment_refcount();
   }
 
   Dictionary* dictionary1 = loader_data1->dictionary();
@@ -2227,8 +2230,12 @@
     MutexLocker mu_s(THREAD, SystemDictionary_lock);
     InstanceKlass* klass1 = find_class(d_hash1, constraint_name, dictionary1);
     InstanceKlass* klass2 = find_class(d_hash2, constraint_name, dictionary2);
-    return constraints()->add_entry(constraint_name, klass1, class_loader1,
-                                    klass2, class_loader2);
+    bool result = constraints()->add_entry(constraint_name, klass1, class_loader1,
+                                           klass2, class_loader2);
+    if (Signature::is_array(class_name)) {
+      constraint_name->decrement_refcount();
+    }
+    return result;
   }
 }
 
@@ -2325,15 +2332,16 @@
     return NULL;
   }
 
-  SignatureStream sig_strm(signature, is_method);
-  while (!sig_strm.is_done()) {
-    if (sig_strm.is_object()) {
-      Symbol* sig = sig_strm.as_symbol();
+  for (SignatureStream ss(signature, is_method); !ss.is_done(); ss.next()) {
+    if (ss.is_reference()) {
+      Symbol* sig = ss.as_symbol();
+      // Note: In the future, if template-like types can take
+      // arguments, we will want to recognize them and dig out class
+      // names hiding inside the argument lists.
       if (!add_loader_constraint(sig, loader1, loader2, THREAD)) {
         return sig;
       }
     }
-    sig_strm.next();
   }
   return NULL;
 }
@@ -2419,9 +2427,9 @@
 Method* SystemDictionary::find_method_handle_invoker(Klass* klass,
                                                      Symbol* name,
                                                      Symbol* signature,
-                                                     Klass* accessing_klass,
-                                                     Handle *appendix_result,
-                                                     TRAPS) {
+                                                          Klass* accessing_klass,
+                                                          Handle *appendix_result,
+                                                          TRAPS) {
   assert(THREAD->can_call_java() ,"");
   Handle method_type =
     SystemDictionary::find_method_handle_type(signature, accessing_klass, CHECK_NULL);
@@ -2474,14 +2482,6 @@
           InstanceKlass::cast(klass)->is_same_class_package(SystemDictionary::MethodHandle_klass()));  // java.lang.invoke
 }
 
-
-// Return the Java mirror (java.lang.Class instance) for a single-character
-// descriptor.  This result, when available, is the same as produced by the
-// heavier API point of the same name that takes a Symbol.
-oop SystemDictionary::find_java_mirror_for_type(char signature_char) {
-  return java_lang_Class::primitive_mirror(char2type(signature_char));
-}
-
 // Find or construct the Java mirror (java.lang.Class instance) for a
 // for the given field type signature, as interpreted relative to the
 // given class loader.  Handles primitives, void, references, arrays,
@@ -2498,19 +2498,17 @@
   assert(accessing_klass == NULL || (class_loader.is_null() && protection_domain.is_null()),
          "one or the other, or perhaps neither");
 
-  Symbol* type = signature;
+  SignatureStream ss(signature, false);
 
   // What we have here must be a valid field descriptor,
   // and all valid field descriptors are supported.
   // Produce the same java.lang.Class that reflection reports.
-  if (type->utf8_length() == 1) {
+  if (ss.is_primitive() || (ss.type() == T_VOID)) {
 
     // It's a primitive.  (Void has a primitive mirror too.)
-    char ch = type->char_at(0);
-    assert(is_java_primitive(char2type(ch)) || ch == JVM_SIGNATURE_VOID, "");
-    return Handle(THREAD, find_java_mirror_for_type(ch));
+    return Handle(THREAD, java_lang_Class::primitive_mirror(ss.type()));
 
-  } else if (FieldType::is_obj(type) || FieldType::is_array(type)) {
+  } else if (ss.is_reference()) {
 
     // It's a reference type.
     if (accessing_klass != NULL) {
@@ -2519,11 +2517,11 @@
     }
     Klass* constant_type_klass;
     if (failure_mode == SignatureStream::ReturnNull) {
-      constant_type_klass = resolve_or_null(type, class_loader, protection_domain,
+      constant_type_klass = resolve_or_null(signature, class_loader, protection_domain,
                                             CHECK_(empty));
     } else {
       bool throw_error = (failure_mode == SignatureStream::NCDFError);
-      constant_type_klass = resolve_or_fail(type, class_loader, protection_domain,
+      constant_type_klass = resolve_or_fail(signature, class_loader, protection_domain,
                                             throw_error, CHECK_(empty));
     }
     if (constant_type_klass == NULL) {
@@ -2586,7 +2584,7 @@
       // Use neutral class loader to lookup candidate classes to be placed in the cache.
       mirror = ss.as_java_mirror(Handle(), Handle(),
                                  SignatureStream::ReturnNull, CHECK_(empty));
-      if (mirror == NULL || (ss.is_object() && !is_always_visible_class(mirror))) {
+      if (mirror == NULL || (ss.is_reference() && !is_always_visible_class(mirror))) {
         // Fall back to accessing_klass context.
         can_be_cached = false;
       }
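
[Note] The loader-constraint hunk also fixes a lifetime issue: a Symbol obtained from a SignatureStream is only guaranteed alive while the stream is, so constraint_name is explicitly pinned across the stream's destruction. The pattern in isolation:

    Symbol* constraint_name = ss.as_symbol();
    constraint_name->increment_refcount();   // survive ~SignatureStream()
    // ... use constraint_name after the stream is gone ...
    constraint_name->decrement_refcount();   // balanced before returning
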
--- a/src/hotspot/share/classfile/systemDictionary.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/classfile/systemDictionary.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 1997, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -497,10 +497,6 @@
                                      failure_mode, THREAD);
   }
 
-
-  // fast short-cut for the one-character case:
-  static oop       find_java_mirror_for_type(char signature_char);
-
   // find a java.lang.invoke.MethodType object for a given signature
   // (asks Java to compute it if necessary, except in a compiler thread)
   static Handle    find_method_handle_type(Symbol* signature,
--- a/src/hotspot/share/classfile/verificationType.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/classfile/verificationType.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2003, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2003, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -120,27 +120,29 @@
 
 VerificationType VerificationType::get_component(ClassVerifier *context, TRAPS) const {
   assert(is_array() && name()->utf8_length() >= 2, "Must be a valid array");
-  Symbol* component;
-  switch (name()->char_at(1)) {
-    case JVM_SIGNATURE_BOOLEAN: return VerificationType(Boolean);
-    case JVM_SIGNATURE_BYTE:    return VerificationType(Byte);
-    case JVM_SIGNATURE_CHAR:    return VerificationType(Char);
-    case JVM_SIGNATURE_SHORT:   return VerificationType(Short);
-    case JVM_SIGNATURE_INT:     return VerificationType(Integer);
-    case JVM_SIGNATURE_LONG:    return VerificationType(Long);
-    case JVM_SIGNATURE_FLOAT:   return VerificationType(Float);
-    case JVM_SIGNATURE_DOUBLE:  return VerificationType(Double);
-    case JVM_SIGNATURE_ARRAY:
-      component = context->create_temporary_symbol(
-        name(), 1, name()->utf8_length());
-      return VerificationType::reference_type(component);
-    case JVM_SIGNATURE_CLASS:
-      component = context->create_temporary_symbol(
-        name(), 2, name()->utf8_length() - 1);
-      return VerificationType::reference_type(component);
-    default:
-      // Met an invalid type signature, e.g. [X
-      return VerificationType::bogus_type();
+  SignatureStream ss(name(), false);
+  ss.skip_array_prefix(1);
+  switch (ss.type()) {
+    case T_BOOLEAN: return VerificationType(Boolean);
+    case T_BYTE:    return VerificationType(Byte);
+    case T_CHAR:    return VerificationType(Char);
+    case T_SHORT:   return VerificationType(Short);
+    case T_INT:     return VerificationType(Integer);
+    case T_LONG:    return VerificationType(Long);
+    case T_FLOAT:   return VerificationType(Float);
+    case T_DOUBLE:  return VerificationType(Double);
+    case T_ARRAY:
+    case T_OBJECT: {
+      guarantee(ss.is_reference(), "unchecked verifier input?");
+      Symbol* component = ss.as_symbol();
+      // Create another symbol to save, since the SignatureStream unreferences this symbol when it is destroyed.
+      Symbol* component_copy = context->create_temporary_symbol(component);
+      assert(component_copy == component, "symbols don't match");
+      return VerificationType::reference_type(component_copy);
+    }
+    default:
+      // Met an invalid type signature, e.g. [X
+      return VerificationType::bogus_type();
   }
 }
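
[Note] The SignatureStream-based get_component preserves the old case-by-case behavior. Worked examples:

    //   "[I"                  -> Integer                        (primitive element)
    //   "[[I"                 -> reference type "[I"            (nested array)
    //   "[Ljava/lang/String;" -> reference type "java/lang/String"
    //                            (envelope stripped, as before)
    //   "[X"                  -> bogus_type()                   (invalid descriptor)
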
 
--- a/src/hotspot/share/classfile/vmSymbols.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/classfile/vmSymbols.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 1997, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -99,13 +99,13 @@
     _type_signatures[T_SHORT]   = short_signature();
     _type_signatures[T_BOOLEAN] = bool_signature();
     _type_signatures[T_VOID]    = void_signature();
-    // no single signatures for T_OBJECT or T_ARRAY
 #ifdef ASSERT
     for (int i = (int)T_BOOLEAN; i < (int)T_VOID+1; i++) {
       Symbol* s = _type_signatures[i];
       if (s == NULL)  continue;
-      BasicType st = signature_type(s);
-      assert(st == i, "");
+      SignatureStream ss(s, false);
+      assert(ss.type() == i, "matching signature");
+      assert(!ss.is_reference(), "no single-char signature for T_OBJECT, etc.");
     }
 #endif
   }
@@ -209,20 +209,6 @@
   soc->do_region((u_char*)_type_signatures, sizeof(_type_signatures));
 }
 
-
-BasicType vmSymbols::signature_type(const Symbol* s) {
-  assert(s != NULL, "checking");
-  if (s->utf8_length() == 1) {
-    BasicType result = char2type(s->char_at(0));
-    if (is_java_primitive(result) || result == T_VOID) {
-      assert(s == _type_signatures[result], "");
-      return result;
-    }
-  }
-  return T_OBJECT;
-}
-
-
 static int mid_hint = (int)vmSymbols::FIRST_SID+1;
 
 #ifndef PRODUCT
--- a/src/hotspot/share/classfile/vmSymbols.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/classfile/vmSymbols.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 1997, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -1566,8 +1566,6 @@
     assert(_type_signatures[t] != NULL, "domain check");
     return _type_signatures[t];
   }
-  // inverse of type_signature; returns T_OBJECT if s is not recognized
-  static BasicType signature_type(const Symbol* s);
 
   static Symbol* symbol_at(SID id) {
     assert(id >= FIRST_SID && id < SID_LIMIT, "oob");
--- a/src/hotspot/share/code/nmethod.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/code/nmethod.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 1997, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -1047,7 +1047,7 @@
       oop_Relocation* reloc = iter.oop_reloc();
       if (initialize_immediates && reloc->oop_is_immediate()) {
         oop* dest = reloc->oop_addr();
-        initialize_immediate_oop(dest, (jobject) *dest);
+        initialize_immediate_oop(dest, cast_from_oop<jobject>(*dest));
       }
       // Refresh the oop-related bits of this instruction.
       reloc->fix_oop_relocation();
@@ -3151,12 +3151,10 @@
           m->method_holder()->print_value_on(stream);
         } else {
           bool did_name = false;
-          if (!at_this && ss.is_object()) {
-            Symbol* name = ss.as_symbol_or_null();
-            if (name != NULL) {
-              name->print_value_on(stream);
-              did_name = true;
-            }
+          if (!at_this && ss.is_reference()) {
+            Symbol* name = ss.as_symbol();
+            name->print_value_on(stream);
+            did_name = true;
           }
           if (!did_name)
             stream->print("%s", type2name(t));
--- a/src/hotspot/share/code/relocInfo.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/code/relocInfo.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -935,7 +935,7 @@
 
   void verify_oop_relocation();
 
-  address value()  { return (address) *oop_addr(); }
+  address value()  { return cast_from_oop<address>(*oop_addr()); }
 
   bool oop_is_immediate()  { return oop_index() == 0; }
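
[Note] The (address) and (jobject) C-casts on oops in this and the previous hunk are funneled through cast_from_oop<T>, so oop-to-raw conversions go through a single audited point and still compile when oop is a real class type (CHECK_UNHANDLED_OOPS builds). A simplified sketch of the helper's shape; the real definition is in oops/oopsHierarchy.hpp:

    // Plain builds: oop is a typedef for oopDesc*, so one cast suffices.
    // Checked builds wrap oop in a class, and the real helper first
    // converts through (oopDesc*) before casting to T.
    template <class T>
    inline T cast_from_oop(oop o) {
      return (T)o;
    }

    // Usage, as in the hunk above:
    //   address value() { return cast_from_oop<address>(*oop_addr()); }
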
 
--- a/src/hotspot/share/compiler/compilationPolicy.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/compiler/compilationPolicy.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -190,6 +190,50 @@
   return compile_queue->first();
 }
 
+
+//
+// CounterDecay for SimpleCompPolicy
+//
+// Iterates through invocation counters and decrements them. This
+// is done at each safepoint.
+//
+class CounterDecay : public AllStatic {
+  static jlong _last_timestamp;
+  static void do_method(Method* m) {
+    MethodCounters* mcs = m->method_counters();
+    if (mcs != NULL) {
+      mcs->invocation_counter()->decay();
+    }
+  }
+public:
+  static void decay();
+  static bool is_decay_needed() {
+    return nanos_to_millis(os::javaTimeNanos() - _last_timestamp) > CounterDecayMinIntervalLength;
+  }
+  static void update_last_timestamp() { _last_timestamp = os::javaTimeNanos(); }
+};
+
+jlong CounterDecay::_last_timestamp = 0;
+
+void CounterDecay::decay() {
+  update_last_timestamp();
+
+  // This operation is performed only at the end of a safepoint, so no GC
+  // is in progress and all Java mutators are suspended; hence the
+  // SystemDictionary_lock is not needed either.
+  assert(SafepointSynchronize::is_at_safepoint(), "can only be executed at a safepoint");
+  size_t nclasses = ClassLoaderDataGraph::num_instance_classes();
+  size_t classes_per_tick = nclasses * (CounterDecayMinIntervalLength * 1e-3 /
+                                        CounterHalfLifeTime);
+  for (size_t i = 0; i < classes_per_tick; i++) {
+    InstanceKlass* k = ClassLoaderDataGraph::try_get_next_class();
+    if (k != NULL) {
+      k->methods_do(do_method);
+    }
+  }
+}
+
+
 #ifndef PRODUCT
 void SimpleCompPolicy::trace_osr_completion(nmethod* osr_nm) {
   if (TraceOnStackReplacement) {
@@ -223,6 +267,7 @@
   } else {
     _compiler_count = CICompilerCount;
   }
+  CounterDecay::update_last_timestamp();
 }
 
 // Note: this policy is used ONLY if TieredCompilation is off.
@@ -272,47 +317,6 @@
   b->set(b->state(), CompileThreshold / 2);
 }
 
-//
-// CounterDecay
-//
-// Iterates through invocation counters and decrements them. This
-// is done at each safepoint.
-//
-class CounterDecay : public AllStatic {
-  static jlong _last_timestamp;
-  static void do_method(Method* m) {
-    MethodCounters* mcs = m->method_counters();
-    if (mcs != NULL) {
-      mcs->invocation_counter()->decay();
-    }
-  }
-public:
-  static void decay();
-  static bool is_decay_needed() {
-    return (os::javaTimeMillis() - _last_timestamp) > CounterDecayMinIntervalLength;
-  }
-};
-
-jlong CounterDecay::_last_timestamp = 0;
-
-void CounterDecay::decay() {
-  _last_timestamp = os::javaTimeMillis();
-
-  // This operation is going to be performed only at the end of a safepoint
-  // and hence GC's will not be going on, all Java mutators are suspended
-  // at this point and hence SystemDictionary_lock is also not needed.
-  assert(SafepointSynchronize::is_at_safepoint(), "can only be executed at a safepoint");
-  size_t nclasses = ClassLoaderDataGraph::num_instance_classes();
-  size_t classes_per_tick = nclasses * (CounterDecayMinIntervalLength * 1e-3 /
-                                        CounterHalfLifeTime);
-  for (size_t i = 0; i < classes_per_tick; i++) {
-    InstanceKlass* k = ClassLoaderDataGraph::try_get_next_class();
-    if (k != NULL) {
-      k->methods_do(do_method);
-    }
-  }
-}
-
 // Called at the end of the safepoint
 void SimpleCompPolicy::do_safepoint_work() {
   if(UseCounterDecay && CounterDecay::is_decay_needed()) {
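
[Note] CounterDecay's bookkeeping moves from os::javaTimeMillis (wall-clock time, subject to NTP steps) to os::javaTimeNanos (monotonic), converting to milliseconds only at the comparison against the millisecond-valued flag. The timing pattern in isolation:

    jlong start = os::javaTimeNanos();   // monotonic; immune to clock adjustment
    // ... work ...
    jlong elapsed_ms = nanos_to_millis(os::javaTimeNanos() - start);
    if (elapsed_ms > CounterDecayMinIntervalLength) {
      // interval elapsed, even if the wall clock jumped meanwhile
    }
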
--- a/src/hotspot/share/compiler/methodMatcher.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/compiler/methodMatcher.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -271,7 +271,8 @@
     }
     if ((strchr(method_name, JVM_SIGNATURE_SPECIAL) != NULL) ||
         (strchr(method_name, JVM_SIGNATURE_ENDSPECIAL) != NULL)) {
-      if ((strncmp("<init>", method_name, 255) != 0) && (strncmp("<clinit>", method_name, 255) != 0)) {
+      if (!vmSymbols::object_initializer_name()->equals(method_name) &&
+          !vmSymbols::class_initializer_name()->equals(method_name)) {
         error_msg = "Chars '<' and '>' only allowed in <init> and <clinit>";
         return;
       }
--- a/src/hotspot/share/compiler/oopMap.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/compiler/oopMap.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -367,7 +367,7 @@
           omv.print();
           tty->print_cr("register r");
           omv.reg()->print();
-          tty->print_cr("loc = %p *loc = %p\n", loc, (address)*loc);
+          tty->print_cr("loc = %p *loc = %p\n", loc, cast_from_oop<address>(*loc));
           // do the real assert.
           assert(Universe::heap()->is_in_or_null(*loc), "found non oop pointer");
         }
@@ -770,7 +770,7 @@
         "Add derived pointer@" INTPTR_FORMAT
         " - Derived: " INTPTR_FORMAT
         " Base: " INTPTR_FORMAT " (@" INTPTR_FORMAT ") (Offset: " INTX_FORMAT ")",
-        p2i(derived_loc), p2i((address)*derived_loc), p2i((address)*base_loc), p2i(base_loc), offset
+        p2i(derived_loc), p2i(*derived_loc), p2i(*base_loc), p2i(base_loc), offset
       );
     }
     // Set derived oop location to point to base.
@@ -792,13 +792,13 @@
     oop base = **(oop**)derived_loc;
     assert(Universe::heap()->is_in_or_null(base), "must be an oop");
 
-    *derived_loc = (oop)(((address)base) + offset);
+    *derived_loc = (oop)(cast_from_oop<address>(base) + offset);
     assert(value_of_loc(derived_loc) - value_of_loc(&base) == offset, "sanity check");
 
     if (TraceDerivedPointers) {
       tty->print_cr("Updating derived pointer@" INTPTR_FORMAT
                     " - Derived: " INTPTR_FORMAT "  Base: " INTPTR_FORMAT " (Offset: " INTX_FORMAT ")",
-          p2i(derived_loc), p2i((address)*derived_loc), p2i((address)base), offset);
+          p2i(derived_loc), p2i(*derived_loc), p2i(base), offset);
     }
 
     // Delete entry
--- a/src/hotspot/share/compiler/tieredThresholdPolicy.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/compiler/tieredThresholdPolicy.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2010, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2010, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -304,7 +304,7 @@
 #endif
 
   set_increase_threshold_at_ratio();
-  set_start_time(os::javaTimeMillis());
+  set_start_time(nanos_to_millis(os::javaTimeNanos()));
 }
 
 
@@ -404,7 +404,7 @@
   CompileTask *max_blocking_task = NULL;
   CompileTask *max_task = NULL;
   Method* max_method = NULL;
-  jlong t = os::javaTimeMillis();
+  jlong t = nanos_to_millis(os::javaTimeNanos());
   // Iterate through the queue and find a method with a maximum rate.
   for (CompileTask* task = compile_queue->first(); task != NULL;) {
     CompileTask* next_task = task->next();
@@ -596,7 +596,7 @@
       print_event(COMPILE, mh(), mh(), bci, level);
     }
     int hot_count = (bci == InvocationEntryBci) ? mh->invocation_count() : mh->backedge_count();
-    update_rate(os::javaTimeMillis(), mh());
+    update_rate(nanos_to_millis(os::javaTimeNanos()), mh());
     CompileBroker::compile_method(mh, bci, level, mh, hot_count, CompileTask::Reason_Tiered, thread);
   }
 }
@@ -616,7 +616,7 @@
 
   // We don't update the rate if we've just come out of a safepoint.
   // delta_s is the time since last safepoint in milliseconds.
-  jlong delta_s = t - SafepointTracing::end_of_last_safepoint_epoch_ms();
+  jlong delta_s = t - SafepointTracing::end_of_last_safepoint_ms();
   jlong delta_t = t - (m->prev_time() != 0 ? m->prev_time() : start_time()); // milliseconds since the last measurement
   // How many events were there since the last time?
   int event_count = m->invocation_count() + m->backedge_count();
@@ -641,7 +641,7 @@
 // Check if this method has been stale for a given number of milliseconds.
 // See select_task().
 bool TieredThresholdPolicy::is_stale(jlong t, jlong timeout, Method* m) {
-  jlong delta_s = t - SafepointTracing::end_of_last_safepoint_epoch_ms();
+  jlong delta_s = t - SafepointTracing::end_of_last_safepoint_ms();
   jlong delta_t = t - m->prev_time();
   if (delta_t > timeout && delta_s > timeout) {
     int event_count = m->invocation_count() + m->backedge_count();
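
These hunks consistently swap the wall-clock os::javaTimeMillis() for a monotonic source, nanos_to_millis(os::javaTimeNanos()), together with the matching safepoint timestamp accessor, so rate and staleness arithmetic cannot be skewed by system clock adjustments. A minimal standalone sketch of the distinction using std::chrono (illustrative only; HotSpot uses its own os:: layer):

#include <chrono>
#include <cstdint>

// Monotonic millis, in the spirit of nanos_to_millis(os::javaTimeNanos()):
// never jumps backwards, so deltas between samples stay meaningful.
static int64_t monotonic_millis() {
  using namespace std::chrono;
  return duration_cast<milliseconds>(
      steady_clock::now().time_since_epoch()).count();
}

// Wall-clock millis, like os::javaTimeMillis(): can step arbitrarily
// under NTP or manual adjustment, corrupting rate calculations.
static int64_t wall_millis() {
  using namespace std::chrono;
  return duration_cast<milliseconds>(
      system_clock::now().time_since_epoch()).count();
}
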
--- a/src/hotspot/share/gc/g1/g1Allocator.inline.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/gc/g1/g1Allocator.inline.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -158,11 +158,11 @@
 
 // Check if an object is in a closed archive region using the _archive_region_map.
 inline bool G1ArchiveAllocator::in_closed_archive_range(oop object) {
-  return _archive_region_map.get_by_address((HeapWord*)object) == G1ArchiveRegionMap::ClosedArchive;
+  return _archive_region_map.get_by_address(cast_from_oop<HeapWord*>(object)) == G1ArchiveRegionMap::ClosedArchive;
 }
 
 inline bool G1ArchiveAllocator::in_open_archive_range(oop object) {
-  return _archive_region_map.get_by_address((HeapWord*)object) == G1ArchiveRegionMap::OpenArchive;
+  return _archive_region_map.get_by_address(cast_from_oop<HeapWord*>(object)) == G1ArchiveRegionMap::OpenArchive;
 }
 
 // Check if archive object checking is enabled, to avoid calling in_open/closed_archive_range
@@ -181,7 +181,7 @@
 
 inline bool G1ArchiveAllocator::is_archived_object(oop object) {
   return archive_check_enabled() &&
-         (_archive_region_map.get_by_address((HeapWord*)object) != G1ArchiveRegionMap::NoArchive);
+         (_archive_region_map.get_by_address(cast_from_oop<HeapWord*>(object)) != G1ArchiveRegionMap::NoArchive);
 }
 
 #endif // SHARE_GC_G1_G1ALLOCATOR_INLINE_HPP
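
These hunks replace raw (HeapWord*) casts on oop values with cast_from_oop<HeapWord*>(obj). Centralizing the conversion in one template keeps it correct whether oop is a bare pointer or a wrapper class, and makes the intent searchable. A simplified standalone sketch of the pattern (the Oop wrapper here is hypothetical, not HotSpot's actual definition):

#include <cstdint>

// Toy 'oop' wrapper standing in for HotSpot's oop in builds where it
// is a class rather than a bare pointer (hypothetical definition).
struct Oop {
  void* _raw;
  void* raw() const { return _raw; }
};

// Single conversion point, analogous to HotSpot's cast_from_oop<T>():
// every "treat this oop as an address" site goes through here, so the
// mechanics live in one place instead of in scattered C-style casts.
template <typename T>
T cast_from_oop(Oop o) {
  return reinterpret_cast<T>(o.raw());
}

// Usage, mirroring the diff:
//   HeapWord* addr = cast_from_oop<HeapWord*>(object);
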
--- a/src/hotspot/share/gc/g1/g1BarrierSet.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/gc/g1/g1BarrierSet.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2001, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -36,7 +36,6 @@
 #include "oops/compressedOops.inline.hpp"
 #include "oops/oop.inline.hpp"
 #include "runtime/interfaceSupport.inline.hpp"
-#include "runtime/mutexLocker.hpp"
 #include "runtime/orderAccess.hpp"
 #include "runtime/thread.inline.hpp"
 #include "utilities/macros.hpp"
@@ -59,7 +58,7 @@
   _satb_mark_queue_buffer_allocator("SATB Buffer Allocator", G1SATBBufferSize),
   _dirty_card_queue_buffer_allocator("DC Buffer Allocator", G1UpdateBufferSize),
   _satb_mark_queue_set(&_satb_mark_queue_buffer_allocator),
-  _dirty_card_queue_set(DirtyCardQ_CBL_mon, &_dirty_card_queue_buffer_allocator),
+  _dirty_card_queue_set(&_dirty_card_queue_buffer_allocator),
   _shared_dirty_card_queue(&_dirty_card_queue_set)
 {}
 
--- a/src/hotspot/share/gc/g1/g1CollectedHeap.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/gc/g1/g1CollectedHeap.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -79,6 +79,7 @@
 #include "gc/shared/preservedMarks.inline.hpp"
 #include "gc/shared/suspendibleThreadSet.hpp"
 #include "gc/shared/referenceProcessor.inline.hpp"
+#include "gc/shared/taskTerminator.hpp"
 #include "gc/shared/taskqueue.inline.hpp"
 #include "gc/shared/weakProcessor.inline.hpp"
 #include "gc/shared/workerPolicy.hpp"
@@ -1131,9 +1132,6 @@
   heap_transition->print();
   print_heap_after_gc();
   print_heap_regions();
-#ifdef TRACESPINNING
-  ParallelTaskTerminator::print_termination_counts();
-#endif
 }
 
 bool G1CollectedHeap::do_full_collection(bool explicit_gc,
@@ -2778,8 +2776,6 @@
   Threads::threads_do(&count_from_threads);
 
   G1DirtyCardQueueSet& dcqs = G1BarrierSet::dirty_card_queue_set();
-  dcqs.verify_num_cards();
-
   return dcqs.num_cards() + count_from_threads._cards;
 }
 
@@ -2987,6 +2983,19 @@
     return false;
   }
 
+  do_collection_pause_at_safepoint_helper(target_pause_time_ms);
+  if (should_upgrade_to_full_gc(gc_cause())) {
+    log_info(gc, ergo)("Attempting maximally compacting collection");
+    bool result = do_full_collection(false /* explicit gc */,
+                                     true /* clear_all_soft_refs */);
+    // do_full_collection only fails if blocked by GC locker, but
+    // we've already checked for that above.
+    assert(result, "invariant");
+  }
+  return true;
+}
+
+void G1CollectedHeap::do_collection_pause_at_safepoint_helper(double target_pause_time_ms) {
   GCIdMark gc_id_mark;
 
   SvcGCMarker sgcm(SvcGCMarker::MINOR);
@@ -3126,10 +3135,6 @@
 
       verify_after_young_collection(verify_type);
 
-#ifdef TRACESPINNING
-      ParallelTaskTerminator::print_termination_counts();
-#endif
-
       gc_epilogue(false);
     }
 
@@ -3174,8 +3179,6 @@
     // itself is released in SuspendibleThreadSet::desynchronize().
     do_concurrent_mark();
   }
-
-  return true;
 }
 
 void G1CollectedHeap::remove_self_forwarding_pointers(G1RedirtyCardsQueueSet* rdcqs) {
@@ -3465,14 +3468,14 @@
   G1CollectedHeap* _g1h;
   G1ParScanThreadStateSet* _pss;
   RefToScanQueueSet* _task_queues;
-  ParallelTaskTerminator* _terminator;
+  TaskTerminator* _terminator;
 
 public:
   G1STWRefProcTaskProxy(ProcessTask& proc_task,
                         G1CollectedHeap* g1h,
                         G1ParScanThreadStateSet* per_thread_states,
                         RefToScanQueueSet *task_queues,
-                        ParallelTaskTerminator* terminator) :
+                        TaskTerminator* terminator) :
     AbstractGangTask("Process reference objects in parallel"),
     _proc_task(proc_task),
     _g1h(g1h),
@@ -3517,7 +3520,7 @@
          "Ergonomically chosen workers (%u) should be less than or equal to active workers (%u)",
          ergo_workers, _workers->active_workers());
   TaskTerminator terminator(ergo_workers, _queues);
-  G1STWRefProcTaskProxy proc_task_proxy(proc_task, _g1h, _pss, _queues, terminator.terminator());
+  G1STWRefProcTaskProxy proc_task_proxy(proc_task, _g1h, _pss, _queues, &terminator);
 
   _workers->run_task(&proc_task_proxy, ergo_workers);
 }
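
With this changeset, call sites hand around TaskTerminator* directly instead of unwrapping a ParallelTaskTerminator via terminator.terminator(). The compilable toy below sketches the barrier-like contract behind offer_termination(); the real TaskTerminator also spins, yields, and lets a worker retract its offer when stolen work appears, none of which is modeled here:

#include <atomic>
#include <cstdint>

// Toy model of the termination handshake: each worker "offers" to
// terminate; termination completes once all workers have offered.
class ToyTerminator {
  const uint32_t _n_workers;
  std::atomic<uint32_t> _offered;
public:
  explicit ToyTerminator(uint32_t n) : _n_workers(n), _offered(0) {}

  // Blocks (by spinning) until every worker has offered, then reports
  // termination.  The real protocol can instead return control so a
  // worker may steal more work and try again later.
  bool offer_termination() {
    _offered.fetch_add(1);
    while (_offered.load() < _n_workers) { /* spin */ }
    return true;
  }
};
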
@@ -3813,7 +3816,7 @@
     G1GCPhaseTimes* p = _g1h->phase_times();
 
     Ticks start = Ticks::now();
-    G1ParEvacuateFollowersClosure cl(_g1h, pss, _task_queues, _terminator.terminator(), objcopy_phase);
+    G1ParEvacuateFollowersClosure cl(_g1h, pss, _task_queues, &_terminator, objcopy_phase);
     cl.do_void();
 
     assert(pss->queue_is_empty(), "should be empty");
--- a/src/hotspot/share/gc/g1/g1CollectedHeap.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/gc/g1/g1CollectedHeap.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2001, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -757,11 +757,18 @@
 
   void wait_for_root_region_scanning();
 
-  // The guts of the incremental collection pause, executed by the vm
-  // thread. It returns false if it is unable to do the collection due
-  // to the GC locker being active, true otherwise
+  // Perform an incremental collection at a safepoint, possibly
+  // followed by a by-policy upgrade to a full collection.  Returns
+  // false if unable to do the collection due to the GC locker being
+  // active, true otherwise.
+  // precondition: at safepoint on VM thread
+  // precondition: !is_gc_active()
   bool do_collection_pause_at_safepoint(double target_pause_time_ms);
 
+  // Helper for do_collection_pause_at_safepoint, containing the guts
+  // of the incremental collection pause, executed by the vm thread.
+  void do_collection_pause_at_safepoint_helper(double target_pause_time_ms);
+
   G1HeapVerifier::G1VerifyType young_collection_verify_type() const;
   void verify_before_young_collection(G1HeapVerifier::G1VerifyType type);
   void verify_after_young_collection(G1HeapVerifier::G1VerifyType type);
@@ -1475,18 +1482,18 @@
   G1CollectedHeap*              _g1h;
   G1ParScanThreadState*         _par_scan_state;
   RefToScanQueueSet*            _queues;
-  ParallelTaskTerminator*       _terminator;
+  TaskTerminator*               _terminator;
   G1GCPhaseTimes::GCParPhases   _phase;
 
   G1ParScanThreadState*   par_scan_state() { return _par_scan_state; }
   RefToScanQueueSet*      queues()         { return _queues; }
-  ParallelTaskTerminator* terminator()     { return _terminator; }
+  TaskTerminator*         terminator()     { return _terminator; }
 
 public:
   G1ParEvacuateFollowersClosure(G1CollectedHeap* g1h,
                                 G1ParScanThreadState* par_scan_state,
                                 RefToScanQueueSet* queues,
-                                ParallelTaskTerminator* terminator,
+                                TaskTerminator* terminator,
                                 G1GCPhaseTimes::GCParPhases phase)
     : _start_term(0.0), _term_time(0.0), _term_attempts(0),
       _g1h(g1h), _par_scan_state(par_scan_state),
--- a/src/hotspot/share/gc/g1/g1CollectedHeap.inline.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/gc/g1/g1CollectedHeap.inline.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -33,6 +33,7 @@
 #include "gc/g1/heapRegionManager.inline.hpp"
 #include "gc/g1/heapRegionRemSet.hpp"
 #include "gc/g1/heapRegionSet.inline.hpp"
+#include "gc/shared/markBitMap.inline.hpp"
 #include "gc/shared/taskqueue.inline.hpp"
 
 G1GCPhaseTimes* G1CollectedHeap::phase_times() const {
@@ -89,7 +90,7 @@
   assert(is_in_g1_reserved((const void*) addr),
          "Address " PTR_FORMAT " is outside of the heap ranging from [" PTR_FORMAT " to " PTR_FORMAT ")",
          p2i((void*)addr), p2i(g1_reserved().start()), p2i(g1_reserved().end()));
-  return _hrm->addr_to_region((HeapWord*) addr);
+  return _hrm->addr_to_region((HeapWord*)(void*) addr);
 }
 
 template <class T>
@@ -143,11 +144,11 @@
 }
 
 inline bool G1CollectedHeap::is_marked_next(oop obj) const {
-  return _cm->next_mark_bitmap()->is_marked((HeapWord*)obj);
+  return _cm->next_mark_bitmap()->is_marked(obj);
 }
 
 inline bool G1CollectedHeap::is_in_cset(oop obj) {
-  return is_in_cset((HeapWord*)obj);
+  return is_in_cset(cast_from_oop<HeapWord*>(obj));
 }
 
 inline bool G1CollectedHeap::is_in_cset(HeapWord* addr) {
@@ -159,7 +160,7 @@
 }
 
 bool G1CollectedHeap::is_in_cset_or_humongous(const oop obj) {
-  return _region_attr.is_in_cset_or_humongous((HeapWord*)obj);
+  return _region_attr.is_in_cset_or_humongous(cast_from_oop<HeapWord*>(obj));
 }
 
 G1HeapRegionAttr G1CollectedHeap::region_attr(const void* addr) const {
@@ -303,7 +304,7 @@
 }
 
 inline void G1CollectedHeap::set_humongous_is_live(oop obj) {
-  uint region = addr_to_region((HeapWord*)obj);
+  uint region = addr_to_region(cast_from_oop<HeapWord*>(obj));
   // Clear the flag in the humongous_reclaim_candidates table.  Also
   // reset the entry in the region attribute table so that subsequent references
   // to the same humongous object do not go into the slow path again.
--- a/src/hotspot/share/gc/g1/g1ConcurrentMark.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/gc/g1/g1ConcurrentMark.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2001, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -49,6 +49,7 @@
 #include "gc/shared/referencePolicy.hpp"
 #include "gc/shared/strongRootsScope.hpp"
 #include "gc/shared/suspendibleThreadSet.hpp"
+#include "gc/shared/taskTerminator.hpp"
 #include "gc/shared/taskqueue.inline.hpp"
 #include "gc/shared/weakProcessor.inline.hpp"
 #include "gc/shared/workerPolicy.hpp"
@@ -209,7 +210,7 @@
     return NULL;
   }
 
-  size_t cur_idx = Atomic::add(&_hwm, 1u) - 1;
+  size_t cur_idx = Atomic::fetch_and_add(&_hwm, 1u);
   if (cur_idx >= _chunk_capacity) {
     return NULL;
   }
@@ -282,7 +283,7 @@
 
 void G1CMRootMemRegions::add(HeapWord* start, HeapWord* end) {
   assert_at_safepoint();
-  size_t idx = Atomic::add(&_num_root_regions, (size_t)1) - 1;
+  size_t idx = Atomic::fetch_and_add(&_num_root_regions, 1u);
   assert(idx < _max_regions, "Trying to add more root MemRegions than there is space " SIZE_FORMAT, _max_regions);
   assert(start != NULL && end != NULL && start <= end, "Start (" PTR_FORMAT ") should be less or equal to "
          "end (" PTR_FORMAT ")", p2i(start), p2i(end));
@@ -310,7 +311,7 @@
     return NULL;
   }
 
-  size_t claimed_index = Atomic::add(&_claimed_root_regions, (size_t)1) - 1;
+  size_t claimed_index = Atomic::fetch_and_add(&_claimed_root_regions, 1u);
   if (claimed_index < _num_root_regions) {
     return &_root_regions[claimed_index];
   }
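
Both claims above switch from the "Atomic::add(&x, 1) - 1" idiom to Atomic::fetch_and_add(&x, 1u), which returns the pre-increment value directly. The two forms claim the same index; a standalone demonstration with std::atomic:

#include <atomic>
#include <cassert>
#include <cstddef>

int main() {
  std::atomic<size_t> hwm_old{41};
  std::atomic<size_t> hwm_new{41};

  // HotSpot's Atomic::add returns the *new* value, so the old code
  // claimed an index as "new value - 1":
  size_t idx_old = (hwm_old.fetch_add(1) + 1) - 1;

  // Atomic::fetch_and_add returns the *old* value, which is the index:
  size_t idx_new = hwm_new.fetch_add(1);

  assert(idx_old == 41 && idx_new == 41);  // same claimed slot
  return 0;
}
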
@@ -600,7 +601,7 @@
   _num_active_tasks = active_tasks;
   // Need to update the three data structures below according to the
   // number of active threads for this phase.
-  _terminator.terminator()->reset_for_reuse((int) active_tasks);
+  _terminator.reset_for_reuse(active_tasks);
   _first_overflow_barrier_sync.set_n_workers((int) active_tasks);
   _second_overflow_barrier_sync.set_n_workers((int) active_tasks);
 }
@@ -1728,9 +1729,8 @@
   G1ObjectCountIsAliveClosure(G1CollectedHeap* g1h) : _g1h(g1h) { }
 
   bool do_object_b(oop obj) {
-    HeapWord* addr = (HeapWord*)obj;
-    return addr != NULL &&
-           (!_g1h->is_in_g1_reserved(addr) || !_g1h->is_obj_dead(obj));
+    return obj != NULL &&
+           (!_g1h->is_in_g1_reserved(obj) || !_g1h->is_obj_dead(obj));
   }
 };
 
--- a/src/hotspot/share/gc/g1/g1ConcurrentMark.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/gc/g1/g1ConcurrentMark.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2001, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -30,6 +30,7 @@
 #include "gc/g1/g1HeapVerifier.hpp"
 #include "gc/g1/g1RegionMarkStatsCache.hpp"
 #include "gc/g1/heapRegionSet.hpp"
+#include "gc/shared/taskTerminator.hpp"
 #include "gc/shared/taskqueue.hpp"
 #include "gc/shared/verifyOption.hpp"
 #include "gc/shared/workgroup.hpp"
@@ -414,10 +415,10 @@
   // Prints all gathered CM-related statistics
   void print_stats();
 
-  HeapWord*               finger()           { return _finger;   }
-  bool                    concurrent()       { return _concurrent; }
-  uint                    active_tasks()     { return _num_active_tasks; }
-  ParallelTaskTerminator* terminator() const { return _terminator.terminator(); }
+  HeapWord*           finger()       { return _finger;   }
+  bool                concurrent()   { return _concurrent; }
+  uint                active_tasks() { return _num_active_tasks; }
+  TaskTerminator*     terminator()   { return &_terminator; }
 
   // Claims the next available region to be scanned by a marking
   // task/thread. It might return NULL if the next region is empty or
--- a/src/hotspot/share/gc/g1/g1ConcurrentMark.inline.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/gc/g1/g1ConcurrentMark.inline.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -72,9 +72,7 @@
   // Can't assert that this is a valid object at this point, since it might be in the process of being copied by another thread.
   assert(!hr->is_continues_humongous(), "Should not try to mark object " PTR_FORMAT " in Humongous continues region %u above nTAMS " PTR_FORMAT, p2i(obj), hr->hrm_index(), p2i(hr->next_top_at_mark_start()));
 
-  HeapWord* const obj_addr = (HeapWord*)obj;
-
-  bool success = _next_mark_bitmap->par_mark(obj_addr);
+  bool success = _next_mark_bitmap->par_mark(obj);
   if (success) {
     add_to_liveness(worker_id, obj, obj->size());
   }
@@ -112,7 +110,7 @@
   assert(task_entry.is_array_slice() || !_g1h->is_on_master_free_list(
               _g1h->heap_region_containing(task_entry.obj())), "invariant");
   assert(task_entry.is_array_slice() || !_g1h->is_obj_ill(task_entry.obj()), "invariant");  // FIXME!!!
-  assert(task_entry.is_array_slice() || _next_mark_bitmap->is_marked((HeapWord*)task_entry.obj()), "invariant");
+  assert(task_entry.is_array_slice() || _next_mark_bitmap->is_marked(cast_from_oop<HeapWord*>(task_entry.obj())), "invariant");
 
   if (!_task_queue->push(task_entry)) {
     // The local task queue looks full. We need to push some entries
@@ -135,7 +133,7 @@
   // of checking both vs only checking the global finger is that the
   // local check will be more accurate and so result in fewer pushes,
   // but may also be a little slower.
-  HeapWord* objAddr = (HeapWord*)obj;
+  HeapWord* objAddr = cast_from_oop<HeapWord*>(obj);
   if (_finger != NULL) {
     // We have a current region.
 
@@ -160,7 +158,7 @@
 template<bool scan>
 inline void G1CMTask::process_grey_task_entry(G1TaskQueueEntry task_entry) {
   assert(scan || (task_entry.is_oop() && task_entry.obj()->is_typeArray()), "Skipping scan of grey non-typeArray");
-  assert(task_entry.is_array_slice() || _next_mark_bitmap->is_marked((HeapWord*)task_entry.obj()),
+  assert(task_entry.is_array_slice() || _next_mark_bitmap->is_marked(cast_from_oop<HeapWord*>(task_entry.obj())),
          "Any stolen object should be a slice or marked");
 
   if (scan) {
@@ -203,7 +201,7 @@
 }
 
 inline void G1CMTask::update_liveness(oop const obj, const size_t obj_size) {
-  _mark_stats_cache.add_live_words(_g1h->addr_to_region((HeapWord*)obj), obj_size);
+  _mark_stats_cache.add_live_words(_g1h->addr_to_region(cast_from_oop<HeapWord*>(obj)), obj_size);
 }
 
 inline void G1ConcurrentMark::add_to_liveness(uint worker_id, oop const obj, size_t size) {
@@ -270,18 +268,18 @@
 }
 
 inline void G1ConcurrentMark::mark_in_prev_bitmap(oop p) {
-  assert(!_prev_mark_bitmap->is_marked((HeapWord*) p), "sanity");
- _prev_mark_bitmap->mark((HeapWord*) p);
+  assert(!_prev_mark_bitmap->is_marked(p), "sanity");
+ _prev_mark_bitmap->mark(p);
 }
 
 bool G1ConcurrentMark::is_marked_in_prev_bitmap(oop p) const {
   assert(p != NULL && oopDesc::is_oop(p), "expected an oop");
-  return _prev_mark_bitmap->is_marked((HeapWord*)p);
+  return _prev_mark_bitmap->is_marked(cast_from_oop<HeapWord*>(p));
 }
 
 bool G1ConcurrentMark::is_marked_in_next_bitmap(oop p) const {
   assert(p != NULL && oopDesc::is_oop(p), "expected an oop");
-  return _next_mark_bitmap->is_marked((HeapWord*)p);
+  return _next_mark_bitmap->is_marked(cast_from_oop<HeapWord*>(p));
 }
 
 inline bool G1ConcurrentMark::do_yield_check() {
--- a/src/hotspot/share/gc/g1/g1ConcurrentMarkObjArrayProcessor.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/gc/g1/g1ConcurrentMarkObjArrayProcessor.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -45,7 +45,7 @@
 size_t G1CMObjArrayProcessor::process_obj(oop obj) {
   assert(should_be_sliced(obj), "Must be an array object %d and large " SIZE_FORMAT, obj->is_objArray(), (size_t)obj->size());
 
-  return process_array_slice(objArrayOop(obj), (HeapWord*)obj, (size_t)objArrayOop(obj)->size());
+  return process_array_slice(objArrayOop(obj), cast_from_oop<HeapWord*>(obj), (size_t)objArrayOop(obj)->size());
 }
 
 size_t G1CMObjArrayProcessor::process_slice(HeapWord* slice) {
--- a/src/hotspot/share/gc/g1/g1ConcurrentRefine.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/gc/g1/g1ConcurrentRefine.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -89,6 +89,11 @@
       }
     }
   }
+
+  if (num_max_threads > 0) {
+    G1BarrierSet::dirty_card_queue_set().set_primary_refinement_thread(_threads[0]);
+  }
+
   return JNI_OK;
 }
 
@@ -108,7 +113,7 @@
     _threads[worker_id] = create_refinement_thread(worker_id, false);
     thread_to_activate = _threads[worker_id];
   }
-  if (thread_to_activate != NULL && !thread_to_activate->is_active()) {
+  if (thread_to_activate != NULL) {
     thread_to_activate->activate();
   }
 }
--- a/src/hotspot/share/gc/g1/g1ConcurrentRefineThread.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/gc/g1/g1ConcurrentRefineThread.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2001, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -29,9 +29,8 @@
 #include "gc/g1/g1DirtyCardQueue.hpp"
 #include "gc/shared/suspendibleThreadSet.hpp"
 #include "logging/log.hpp"
-#include "memory/resourceArea.hpp"
-#include "runtime/handles.inline.hpp"
-#include "runtime/mutexLocker.hpp"
+#include "runtime/atomic.hpp"
+#include "runtime/thread.hpp"
 
 G1ConcurrentRefineThread::G1ConcurrentRefineThread(G1ConcurrentRefine* cr, uint worker_id) :
   ConcurrentGCThread(),
@@ -40,56 +39,53 @@
   _total_refinement_time(),
   _total_refined_cards(0),
   _worker_id(worker_id),
-  _active(false),
-  _monitor(NULL),
+  _notifier(new Semaphore(0)),
+  _should_notify(true),
   _cr(cr)
 {
-  // Each thread has its own monitor. The i-th thread is responsible for signaling
-  // to thread i+1 if the number of buffers in the queue exceeds a threshold for this
-  // thread. Monitors are also used to wake up the threads during termination.
-  // The 0th (primary) worker is notified by mutator threads and has a special monitor.
-  if (!is_primary()) {
-    _monitor = new Monitor(Mutex::nonleaf, "Refinement monitor", true,
-                           Monitor::_safepoint_check_never);
-  } else {
-    _monitor = DirtyCardQ_CBL_mon;
-  }
-
   // set name
   set_name("G1 Refine#%d", worker_id);
   create_and_start();
 }
 
 void G1ConcurrentRefineThread::wait_for_completed_buffers() {
-  MonitorLocker ml(_monitor, Mutex::_no_safepoint_check_flag);
-  while (!should_terminate() && !is_active()) {
-    ml.wait();
+  assert(this == Thread::current(), "precondition");
+  while (Atomic::load_acquire(&_should_notify)) {
+    _notifier->wait();
   }
 }
 
-bool G1ConcurrentRefineThread::is_active() {
-  G1DirtyCardQueueSet& dcqs = G1BarrierSet::dirty_card_queue_set();
-  return is_primary() ? dcqs.process_completed_buffers() : _active;
+void G1ConcurrentRefineThread::activate() {
+  assert(this != Thread::current(), "precondition");
+  // Notify iff transitioning from needing activation to not.  This helps
+  // keep the semaphore count bounded and minimizes the work done by
+  // activators when the thread is already active.
+  if (Atomic::load_acquire(&_should_notify) &&
+      Atomic::cmpxchg(&_should_notify, true, false)) {
+    _notifier->signal();
+  }
 }
 
-void G1ConcurrentRefineThread::activate() {
-  MutexLocker x(_monitor, Mutex::_no_safepoint_check_flag);
-  if (!is_primary()) {
-    set_active(true);
+bool G1ConcurrentRefineThread::maybe_deactivate(bool more_work) {
+  assert(this == Thread::current(), "precondition");
+
+  if (more_work) {
+    // Suppress unnecessary notifications.
+    Atomic::release_store(&_should_notify, false);
+    return false;
+  } else if (Atomic::load_acquire(&_should_notify)) {
+    // Deactivate if no notifications since enabled (see below).
+    return true;
   } else {
-    G1DirtyCardQueueSet& dcqs = G1BarrierSet::dirty_card_queue_set();
-    dcqs.set_process_completed_buffers(true);
-  }
-  _monitor->notify();
-}
-
-void G1ConcurrentRefineThread::deactivate() {
-  MutexLocker x(_monitor, Mutex::_no_safepoint_check_flag);
-  if (!is_primary()) {
-    set_active(false);
-  } else {
-    G1DirtyCardQueueSet& dcqs = G1BarrierSet::dirty_card_queue_set();
-    dcqs.set_process_completed_buffers(false);
+    // Try for more refinement work with notifications enabled, to close
+    // race; there could be a plethora of suppressed activation attempts
+    // after we found no work but before we enable notifications here
+    // (so there could be lots of work for this thread to do), followed
+    // by a long time without activation after enabling notifications.
+    // But first, clear any pending signals to prevent accumulation.
+    while (_notifier->trywait()) {}
+    Atomic::release_store(&_should_notify, true);
+    return false;
   }
 }
 
@@ -119,14 +115,13 @@
         }
 
         Ticks start_time = Ticks::now();
-        if (!_cr->do_refinement_step(_worker_id, &_total_refined_cards)) {
-          break;                // No cards to process.
-        }
+        bool more_work = _cr->do_refinement_step(_worker_id, &_total_refined_cards);
         _total_refinement_time += (Ticks::now() - start_time);
+
+        if (maybe_deactivate(more_work)) break;
       }
     }
 
-    deactivate();
     log_debug(gc, refine)("Deactivated worker %d, off threshold: " SIZE_FORMAT
                           ", current: " SIZE_FORMAT ", refined cards: "
                           SIZE_FORMAT ", total refined cards: " SIZE_FORMAT,
@@ -146,6 +141,5 @@
 }
 
 void G1ConcurrentRefineThread::stop_service() {
-  MutexLocker x(_monitor, Mutex::_no_safepoint_check_flag);
-  _monitor->notify();
+  activate();
 }
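
The monitor-per-thread scheme is replaced by a Semaphore plus a _should_notify flag: writers signal only on the armed-to-unarmed transition, which bounds the semaphore count, and the owning thread drains stray signals before re-arming. A condensed standalone sketch of that protocol (Sem is a mutex/condvar stand-in for HotSpot's Semaphore; the more_work fast path of maybe_deactivate is omitted):

#include <atomic>
#include <condition_variable>
#include <mutex>

// Minimal counting-semaphore stand-in for HotSpot's Semaphore.
class Sem {
  std::mutex _m;
  std::condition_variable _cv;
  int _count = 0;
public:
  void signal() {
    { std::lock_guard<std::mutex> g(_m); ++_count; }
    _cv.notify_one();
  }
  void wait() {
    std::unique_lock<std::mutex> l(_m);
    _cv.wait(l, [&] { return _count > 0; });
    --_count;
  }
  bool trywait() {
    std::lock_guard<std::mutex> g(_m);
    if (_count == 0) return false;
    --_count;
    return true;
  }
};

Sem notifier;
std::atomic<bool> should_notify{true};

// Writer side (activate()): signal only on the true->false transition,
// so redundant activations neither block nor inflate the count.
void activate() {
  bool expected = true;
  if (should_notify.load(std::memory_order_acquire) &&
      should_notify.compare_exchange_strong(expected, false)) {
    notifier.signal();
  }
}

// Reader side (wait_for_completed_buffers()): sleep until an
// activation has been requested since the flag was last re-armed.
void wait_for_activation() {
  while (should_notify.load(std::memory_order_acquire)) {
    notifier.wait();
  }
}

// No-work path of maybe_deactivate(): drain stale signals, then
// re-arm so future activations are delivered.
void rearm() {
  while (notifier.trywait()) {}
  should_notify.store(true, std::memory_order_release);
}
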
--- a/src/hotspot/share/gc/g1/g1ConcurrentRefineThread.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/gc/g1/g1ConcurrentRefineThread.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2001, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -45,24 +45,33 @@
 
   uint _worker_id;
 
-  bool _active;
-  Monitor* _monitor;
+  // _notifier and _should_notify form a single-reader / multi-writer
+  // notification mechanism.  The owning concurrent refinement thread is the
+  // single reader. The writers are (other) threads that call activate() on
+  // the thread.  The i-th concurrent refinement thread is responsible for
+  // activating thread i+1 if the number of buffers in the queue exceeds a
+  // threshold for the (i+1)th thread.  The 0th (primary) thread is activated
+  // by threads that add cards to the dirty card queue set when the primary
+  // thread's threshold is exceeded.  activate() is also used to wake up the
+  // threads during termination, so even the non-primary thread case is
+  // multi-writer.
+  Semaphore* _notifier;
+  volatile bool _should_notify;
+
+  // Called when no refinement work found for this thread.
+  // Returns true if should deactivate.
+  bool maybe_deactivate(bool more_work);
+
   G1ConcurrentRefine* _cr;
 
   void wait_for_completed_buffers();
 
-  void set_active(bool x) { _active = x; }
-  // Deactivate this thread.
-  void deactivate();
+  virtual void run_service();
+  virtual void stop_service();
 
-  bool is_primary() { return (_worker_id == 0); }
-
-  void run_service();
-  void stop_service();
 public:
   G1ConcurrentRefineThread(G1ConcurrentRefine* cg1r, uint worker_id);
 
-  bool is_active();
   // Activate this thread.
   void activate();
 
--- a/src/hotspot/share/gc/g1/g1DirtyCardQueue.cpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/gc/g1/g1DirtyCardQueue.cpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2001, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -26,6 +26,7 @@
 #include "gc/g1/g1BufferNodeList.hpp"
 #include "gc/g1/g1CardTableEntryClosure.hpp"
 #include "gc/g1/g1CollectedHeap.inline.hpp"
+#include "gc/g1/g1ConcurrentRefineThread.hpp"
 #include "gc/g1/g1DirtyCardQueue.hpp"
 #include "gc/g1/g1FreeIdSet.hpp"
 #include "gc/g1/g1RedirtyCardsQueue.hpp"
@@ -33,15 +34,14 @@
 #include "gc/g1/g1ThreadLocalData.hpp"
 #include "gc/g1/heapRegionRemSet.hpp"
 #include "gc/shared/suspendibleThreadSet.hpp"
-#include "gc/shared/workgroup.hpp"
 #include "memory/iterator.hpp"
-#include "runtime/flags/flagSetting.hpp"
-#include "runtime/mutexLocker.hpp"
-#include "runtime/orderAccess.hpp"
+#include "runtime/atomic.hpp"
 #include "runtime/os.hpp"
 #include "runtime/safepoint.hpp"
 #include "runtime/thread.inline.hpp"
 #include "runtime/threadSMR.hpp"
+#include "utilities/globalCounter.inline.hpp"
+#include "utilities/macros.hpp"
 #include "utilities/quickSort.hpp"
 
 G1DirtyCardQueue::G1DirtyCardQueue(G1DirtyCardQueueSet* qset) :
@@ -68,18 +68,16 @@
 // Assumed to be zero by concurrent threads.
 static uint par_ids_start() { return 0; }
 
-G1DirtyCardQueueSet::G1DirtyCardQueueSet(Monitor* cbl_mon,
-                                         BufferNode::Allocator* allocator) :
+G1DirtyCardQueueSet::G1DirtyCardQueueSet(BufferNode::Allocator* allocator) :
   PtrQueueSet(allocator),
-  _cbl_mon(cbl_mon),
-  _completed_buffers_head(NULL),
-  _completed_buffers_tail(NULL),
+  _primary_refinement_thread(NULL),
   _num_cards(0),
+  _completed(),
+  _paused(),
+  _free_ids(par_ids_start(), num_par_ids()),
   _process_cards_threshold(ProcessCardsThresholdNever),
-  _process_completed_buffers(false),
   _max_cards(MaxCardsUnlimited),
   _max_cards_padding(0),
-  _free_ids(par_ids_start(), num_par_ids()),
   _mutator_refined_cards_counters(NEW_C_HEAP_ARRAY(size_t, num_par_ids(), mtGC))
 {
   ::memset(_mutator_refined_cards_counters, 0, num_par_ids() * sizeof(size_t));
@@ -108,75 +106,304 @@
   G1ThreadLocalData::dirty_card_queue(t).handle_zero_index();
 }
 
+#ifdef ASSERT
+G1DirtyCardQueueSet::Queue::~Queue() {
+  assert(_head == NULL, "precondition");
+  assert(_tail == NULL, "precondition");
+}
+#endif // ASSERT
+
+BufferNode* G1DirtyCardQueueSet::Queue::top() const {
+  return Atomic::load(&_head);
+}
+
+// An append operation atomically exchanges the new tail with the queue tail.
+// It then sets the "next" value of the old tail to the head of the list being
+// appended; it is an invariant that the old tail's "next" value is NULL.
+// But if the old tail is NULL then the queue was empty.  In this case the
+// head of the list being appended is instead stored in the queue head; it is
+// an invariant that the queue head is NULL in this case.
+//
+// This means there is a period between the exchange and the old tail update
+// where the queue sequence is split into two parts, the list from the queue
+// head to the old tail, and the list being appended.  If there are concurrent
+// push/append operations, each may introduce another such segment.  But they
+// all eventually get resolved by their respective updates of their old tail's
+// "next" value.  This also means that pop operations must handle a buffer
+// with a NULL "next" value specially.
+//
+// A push operation is just a degenerate append, where the buffer being pushed
+// is both the head and the tail of the list being appended.
+void G1DirtyCardQueueSet::Queue::append(BufferNode& first, BufferNode& last) {
+  assert(last.next() == NULL, "precondition");
+  BufferNode* old_tail = Atomic::xchg(&_tail, &last);
+  if (old_tail == NULL) {       // Was empty.
+    assert(Atomic::load(&_head) == NULL, "invariant");
+    Atomic::store(&_head, &first);
+  } else {
+    assert(old_tail->next() == NULL, "invariant");
+    old_tail->set_next(&first);
+  }
+}
+
+// pop gets the queue head as the candidate result (returning NULL if the
+// queue head was NULL), and then gets that result node's "next" value.  If
+// that "next" value is NULL and the queue head hasn't changed, then there
+// is only one element in the accessible part of the list (the sequence from
+// head to a node with a NULL "next" value).  We can't return that element,
+// because it may be the old tail of a concurrent push/append that has not
+// yet had its "next" field set to the new tail.  So return NULL in this case.
+// Otherwise, attempt to cmpxchg that "next" value into the queue head,
+// retrying the whole operation if that fails. This is the "usual" lock-free
+// pop from the head of a singly linked list, with the additional restriction
+// on taking the last element.
+BufferNode* G1DirtyCardQueueSet::Queue::pop() {
+  Thread* current_thread = Thread::current();
+  while (true) {
+    // Use a critical section per iteration, rather than over the whole
+    // operation.  We're not guaranteed to make progress, because of possible
+    // contention on the queue head.  Lingering in one CS the whole time could
+    // lead to excessive allocation of buffers, because the CS blocks return
+    // of released buffers to the free list for reuse.
+    GlobalCounter::CriticalSection cs(current_thread);
+
+    BufferNode* result = Atomic::load_acquire(&_head);
+    // Check for empty queue.  Only needs to be done on first iteration,
+    // since we never take the last element, but it's messy to make use
+    // of that and we expect one iteration to be the common case.
+    if (result == NULL) return NULL;
+
+    BufferNode* next = Atomic::load_acquire(BufferNode::next_ptr(*result));
+    if (next != NULL) {
+      next = Atomic::cmpxchg(&_head, result, next);
+      if (next == result) {
+        // Former head successfully taken; it is not the last.
+        assert(Atomic::load(&_tail) != result, "invariant");
+        assert(result->next() != NULL, "invariant");
+        result->set_next(NULL);
+        return result;
+      }
+      // cmpxchg failed; try again.
+    } else if (result == Atomic::load_acquire(&_head)) {
+      // If follower of head is NULL and head hasn't changed, then only
+      // the one element is currently accessible.  We don't take the last
+      // accessible element, because there may be a concurrent add using it.
+      // The check for unchanged head isn't needed for correctness, but the
+      // retry on change may sometimes let us get a buffer after all.
+      return NULL;
+    }
+    // Head changed; try again.
+  }
+}
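
The comment blocks above describe the split-queue window between swinging the tail and patching the old tail's next pointer, which is also why pop() refuses the last visible element. A standalone sketch of just the append side, with Node as a hypothetical stand-in for BufferNode:

#include <atomic>

// Sketch of the xchg-based append used by Queue::append(): swing the
// tail first, then link the old tail to the new segment.
struct Node { std::atomic<Node*> next{nullptr}; };

std::atomic<Node*> head{nullptr};
std::atomic<Node*> tail{nullptr};

void append(Node& first, Node& last) {
  // precondition: last.next == nullptr
  Node* old_tail = tail.exchange(&last);
  if (old_tail == nullptr) {
    // Queue was empty; publish the new segment via the head.
    head.store(&first);
  } else {
    // Between the exchange above and this store, the queue is "split":
    // pop() handles that window by refusing to take a node whose next
    // is still nullptr, since that node may be this pending old tail.
    old_tail->next.store(&first);
  }
}
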
+
+G1DirtyCardQueueSet::HeadTail G1DirtyCardQueueSet::Queue::take_all() {
+  assert_at_safepoint();
+  HeadTail result(Atomic::load(&_head), Atomic::load(&_tail));
+  Atomic::store(&_head, (BufferNode*)NULL);
+  Atomic::store(&_tail, (BufferNode*)NULL);
+  return result;
+}
+
 void G1DirtyCardQueueSet::enqueue_completed_buffer(BufferNode* cbn) {
-  MonitorLocker ml(_cbl_mon, Mutex::_no_safepoint_check_flag);
-  cbn->set_next(NULL);
-  if (_completed_buffers_tail == NULL) {
-    assert(_completed_buffers_head == NULL, "Well-formedness");
-    _completed_buffers_head = cbn;
-    _completed_buffers_tail = cbn;
-  } else {
-    _completed_buffers_tail->set_next(cbn);
-    _completed_buffers_tail = cbn;
+  assert(cbn != NULL, "precondition");
+  // Increment _num_cards before adding to queue, so queue removal doesn't
+  // need to deal with _num_cards possibly going negative.
+  size_t new_num_cards = Atomic::add(&_num_cards, buffer_size() - cbn->index());
+  _completed.push(*cbn);
+  if ((new_num_cards > process_cards_threshold()) &&
+      (_primary_refinement_thread != NULL)) {
+    _primary_refinement_thread->activate();
   }
-  _num_cards += buffer_size() - cbn->index();
-
-  if (!process_completed_buffers() &&
-      (num_cards() > process_cards_threshold())) {
-    set_process_completed_buffers(true);
-    ml.notify_all();
-  }
-  verify_num_cards();
 }
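
The reordering noted in the comment (count first, then publish) gives a simple invariant: _num_cards may momentarily overstate the queued cards, but a concurrent pop can never drive it negative. Sketched standalone, with the queue operations elided:

#include <atomic>
#include <cstddef>

std::atomic<size_t> num_cards{0};

// Producer: count before the buffer becomes visible to consumers.
void on_enqueue(size_t cards_in_buffer) {
  num_cards.fetch_add(cards_in_buffer);
  // _completed.push(buffer);  // publish afterwards
}

// Consumer: uncount only after a successful pop, so the counter can
// transiently overstate but never underflow.
void on_dequeue(size_t cards_in_buffer) {
  // BufferNode* buffer = _completed.pop();  // take first
  num_cards.fetch_sub(cards_in_buffer);
}
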
 
 BufferNode* G1DirtyCardQueueSet::get_completed_buffer(size_t stop_at) {
-  MutexLocker x(_cbl_mon, Mutex::_no_safepoint_check_flag);
+  enqueue_previous_paused_buffers();
 
-  if (num_cards() <= stop_at) {
+  // Check for insufficient cards to satisfy request.  We only do this once,
+  // up front, rather than on each iteration below, since the test is racy
+  // regardless of when we do it.
+  if (Atomic::load_acquire(&_num_cards) <= stop_at) {
     return NULL;
   }
 
-  assert(num_cards() > 0, "invariant");
-  assert(_completed_buffers_head != NULL, "invariant");
-  assert(_completed_buffers_tail != NULL, "invariant");
-
-  BufferNode* bn = _completed_buffers_head;
-  _num_cards -= buffer_size() - bn->index();
-  _completed_buffers_head = bn->next();
-  if (_completed_buffers_head == NULL) {
-    assert(num_cards() == 0, "invariant");
-    _completed_buffers_tail = NULL;
-    set_process_completed_buffers(false);
+  BufferNode* result = _completed.pop();
+  if (result != NULL) {
+    Atomic::sub(&_num_cards, buffer_size() - result->index());
   }
-  verify_num_cards();
-  bn->set_next(NULL);
-  return bn;
+  return result;
 }
 
 #ifdef ASSERT
 void G1DirtyCardQueueSet::verify_num_cards() const {
   size_t actual = 0;
-  BufferNode* cur = _completed_buffers_head;
-  while (cur != NULL) {
+  BufferNode* cur = _completed.top();
+  for ( ; cur != NULL; cur = cur->next()) {
     actual += buffer_size() - cur->index();
-    cur = cur->next();
   }
-  assert(actual == _num_cards,
+  assert(actual == Atomic::load(&_num_cards),
          "Num entries in completed buffers should be " SIZE_FORMAT " but are " SIZE_FORMAT,
-         _num_cards, actual);
+         Atomic::load(&_num_cards), actual);
 }
-#endif
+#endif // ASSERT
+
+G1DirtyCardQueueSet::PausedBuffers::PausedList::PausedList() :
+  _head(NULL), _tail(NULL),
+  _safepoint_id(SafepointSynchronize::safepoint_id())
+{}
+
+#ifdef ASSERT
+G1DirtyCardQueueSet::PausedBuffers::PausedList::~PausedList() {
+  assert(Atomic::load(&_head) == NULL, "precondition");
+  assert(_tail == NULL, "precondition");
+}
+#endif // ASSERT
+
+bool G1DirtyCardQueueSet::PausedBuffers::PausedList::is_next() const {
+  assert_not_at_safepoint();
+  return _safepoint_id == SafepointSynchronize::safepoint_id();
+}
+
+void G1DirtyCardQueueSet::PausedBuffers::PausedList::add(BufferNode* node) {
+  assert_not_at_safepoint();
+  assert(is_next(), "precondition");
+  BufferNode* old_head = Atomic::xchg(&_head, node);
+  if (old_head == NULL) {
+    assert(_tail == NULL, "invariant");
+    _tail = node;
+  } else {
+    node->set_next(old_head);
+  }
+}
+
+G1DirtyCardQueueSet::HeadTail G1DirtyCardQueueSet::PausedBuffers::PausedList::take() {
+  BufferNode* head = Atomic::load(&_head);
+  BufferNode* tail = _tail;
+  Atomic::store(&_head, (BufferNode*)NULL);
+  _tail = NULL;
+  return HeadTail(head, tail);
+}
+
+G1DirtyCardQueueSet::PausedBuffers::PausedBuffers() : _plist(NULL) {}
+
+#ifdef ASSERT
+G1DirtyCardQueueSet::PausedBuffers::~PausedBuffers() {
+  assert(is_empty(), "invariant");
+}
+#endif // ASSERT
+
+bool G1DirtyCardQueueSet::PausedBuffers::is_empty() const {
+  return Atomic::load(&_plist) == NULL;
+}
+
+void G1DirtyCardQueueSet::PausedBuffers::add(BufferNode* node) {
+  assert_not_at_safepoint();
+  PausedList* plist = Atomic::load_acquire(&_plist);
+  if (plist != NULL) {
+    // Already have a next list, so use it.  We know it's a next list because
+    // of the precondition that take_previous() has already been called.
+    assert(plist->is_next(), "invariant");
+  } else {
+    // Try to install a new next list.
+    plist = new PausedList();
+    PausedList* old_plist = Atomic::cmpxchg(&_plist, (PausedList*)NULL, plist);
+    if (old_plist != NULL) {
+      // Some other thread installed a new next list. Use it instead.
+      delete plist;
+      plist = old_plist;
+    }
+  }
+  plist->add(node);
+}
+
+G1DirtyCardQueueSet::HeadTail G1DirtyCardQueueSet::PausedBuffers::take_previous() {
+  assert_not_at_safepoint();
+  PausedList* previous;
+  {
+    // Deal with plist in a critical section, to prevent it from being
+    // deleted out from under us by a concurrent take_previous().
+    GlobalCounter::CriticalSection cs(Thread::current());
+    previous = Atomic::load_acquire(&_plist);
+    if ((previous == NULL) ||   // Nothing to take.
+        previous->is_next() ||  // Not from a previous safepoint.
+        // Some other thread stole it.
+        (Atomic::cmpxchg(&_plist, previous, (PausedList*)NULL) != previous)) {
+      return HeadTail();
+    }
+  }
+  // We now own previous.
+  HeadTail result = previous->take();
+  // There might be other threads examining previous (in concurrent
+  // take_previous()).  Synchronize to wait until any such threads are
+  // done with such examination before deleting.
+  GlobalCounter::write_synchronize();
+  delete previous;
+  return result;
+}
+
+G1DirtyCardQueueSet::HeadTail G1DirtyCardQueueSet::PausedBuffers::take_all() {
+  assert_at_safepoint();
+  HeadTail result;
+  PausedList* plist = Atomic::load(&_plist);
+  if (plist != NULL) {
+    Atomic::store(&_plist, (PausedList*)NULL);
+    result = plist->take();
+    delete plist;
+  }
+  return result;
+}
+
+void G1DirtyCardQueueSet::record_paused_buffer(BufferNode* node) {
+  assert_not_at_safepoint();
+  assert(node->next() == NULL, "precondition");
+  // Cards for paused buffers are included in count, to contribute to
+  // notification checking after the coming safepoint if it doesn't GC.
+  // Note that this means the queue's _num_cards differs from the number
+  // of cards in the queued buffers when there are paused buffers.
+  Atomic::add(&_num_cards, buffer_size() - node->index());
+  _paused.add(node);
+}
+
+void G1DirtyCardQueueSet::enqueue_paused_buffers_aux(const HeadTail& paused) {
+  if (paused._head != NULL) {
+    assert(paused._tail != NULL, "invariant");
+    // Cards from paused buffers are already recorded in the queue count.
+    _completed.append(*paused._head, *paused._tail);
+  }
+}
+
+void G1DirtyCardQueueSet::enqueue_previous_paused_buffers() {
+  assert_not_at_safepoint();
+  // The fast-path still satisfies the precondition for record_paused_buffer
+  // and PausedBuffers::add, even with a racy test.  If there are paused
+  // buffers from a previous safepoint, is_empty() will return false; there
+  // will have been a safepoint between recording and test, so there can't be
+  // a false negative (is_empty() returns true) while such buffers are present.
+  // If is_empty() is false, there are two cases:
+  //
+  // (1) There were paused buffers from a previous safepoint.  A concurrent
+  // caller may take and enqueue them first, but that's okay; the precondition
+  // for a possible later record_paused_buffer by this thread will still hold.
+  //
+  // (2) There are paused buffers for a requested next safepoint.
+  //
+  // In each of those cases some effort may be spent detecting and dealing
+  // with those circumstances; any wasted effort in such cases is expected to
+  // be well compensated by the fast path.
+  if (!_paused.is_empty()) {
+    enqueue_paused_buffers_aux(_paused.take_previous());
+  }
+}
+
+void G1DirtyCardQueueSet::enqueue_all_paused_buffers() {
+  assert_at_safepoint();
+  enqueue_paused_buffers_aux(_paused.take_all());
+}
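
Everything in PausedBuffers hinges on tagging a list with the safepoint id current at its creation: while the id still matches, the list collects buffers for the next safepoint; once a safepoint has passed, it holds previous buffers that are safe to re-enqueue. A reduced sketch of that check, with safepoint_id modeled as a plain counter rather than SafepointSynchronize::safepoint_id():

#include <atomic>
#include <cstddef>

// Stand-in for SafepointSynchronize::safepoint_id(): bumped by each
// safepoint (hypothetical for this sketch).
std::atomic<size_t> current_safepoint_id{0};

struct PausedList {
  size_t _safepoint_id;
  PausedList() : _safepoint_id(current_safepoint_id.load()) {}

  // Mirrors PausedList::is_next(): buffers recorded under the current
  // id belong to the *upcoming* safepoint; once the id has moved on,
  // the list holds "previous" buffers and may be re-enqueued.
  bool is_next() const {
    return _safepoint_id == current_safepoint_id.load();
  }
};
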
 
 void G1DirtyCardQueueSet::abandon_completed_buffers() {
-  BufferNode* buffers_to_delete = NULL;
-  {
-    MutexLocker x(_cbl_mon, Mutex::_no_safepoint_check_flag);
-    buffers_to_delete = _completed_buffers_head;
-    _completed_buffers_head = NULL;
-    _completed_buffers_tail = NULL;
-    _num_cards = 0;
-    set_process_completed_buffers(false);
-  }
+  enqueue_all_paused_buffers();
+  verify_num_cards();
+  G1BufferNodeList list = take_all_completed_buffers();
+  BufferNode* buffers_to_delete = list._head;
   while (buffers_to_delete != NULL) {
     BufferNode* bn = buffers_to_delete;
     buffers_to_delete = bn->next();
@@ -186,46 +413,30 @@
 }
 
 void G1DirtyCardQueueSet::notify_if_necessary() {
-  MonitorLocker ml(_cbl_mon, Mutex::_no_safepoint_check_flag);
-  if (num_cards() > process_cards_threshold()) {
-    set_process_completed_buffers(true);
-    ml.notify_all();
+  if ((_primary_refinement_thread != NULL) &&
+      (num_cards() > process_cards_threshold())) {
+    _primary_refinement_thread->activate();
   }
 }
 
-// Merge lists of buffers. Notify the processing threads.
-// The source queue is emptied as a result. The queues
-// must share the monitor.
+// Merge lists of buffers. The source queue set is emptied as a
+// result. The queue sets must share the same allocator.
 void G1DirtyCardQueueSet::merge_bufferlists(G1RedirtyCardsQueueSet* src) {
   assert(allocator() == src->allocator(), "precondition");
   const G1BufferNodeList from = src->take_all_completed_buffers();
-  if (from._head == NULL) return;
-
-  MutexLocker x(_cbl_mon, Mutex::_no_safepoint_check_flag);
-  if (_completed_buffers_tail == NULL) {
-    assert(_completed_buffers_head == NULL, "Well-formedness");
-    _completed_buffers_head = from._head;
-    _completed_buffers_tail = from._tail;
-  } else {
-    assert(_completed_buffers_head != NULL, "Well formedness");
-    _completed_buffers_tail->set_next(from._head);
-    _completed_buffers_tail = from._tail;
+  if (from._head != NULL) {
+    Atomic::add(&_num_cards, from._entry_count);
+    _completed.append(*from._head, *from._tail);
   }
-  _num_cards += from._entry_count;
-
-  assert(_completed_buffers_head == NULL && _completed_buffers_tail == NULL ||
-         _completed_buffers_head != NULL && _completed_buffers_tail != NULL,
-         "Sanity");
-  verify_num_cards();
 }
 
 G1BufferNodeList G1DirtyCardQueueSet::take_all_completed_buffers() {
-  MutexLocker x(_cbl_mon, Mutex::_no_safepoint_check_flag);
-  G1BufferNodeList result(_completed_buffers_head, _completed_buffers_tail, _num_cards);
-  _completed_buffers_head = NULL;
-  _completed_buffers_tail = NULL;
-  _num_cards = 0;
-  return result;
+  enqueue_all_paused_buffers();
+  verify_num_cards();
+  HeadTail buffers = _completed.take_all();
+  size_t num_cards = Atomic::load(&_num_cards);
+  Atomic::store(&_num_cards, size_t(0));
+  return G1BufferNodeList(buffers._head, buffers._tail, num_cards);
 }
 
 class G1RefineBufferedCards : public StackObj {
@@ -368,14 +579,20 @@
 bool G1DirtyCardQueueSet::process_or_enqueue_completed_buffer(BufferNode* node) {
   if (Thread::current()->is_Java_thread()) {
     // If the number of buffers exceeds the limit, make this Java
-    // thread do the processing itself.  We don't lock to access
-    // buffer count or padding; it is fine to be imprecise here.  The
-    // add of padding could overflow, which is treated as unlimited.
+    // thread do the processing itself.  Calculation is racy but we
+    // don't need precision here.  The add of padding could overflow,
+    // which is treated as unlimited.
     size_t limit = max_cards() + max_cards_padding();
     if ((num_cards() > limit) && (limit >= max_cards())) {
       if (mut_process_buffer(node)) {
         return true;
       }
+      // Buffer was incompletely processed because of a pending safepoint
+      // request.  Unlike with refinement thread processing, for mutator
+      // processing the buffer did not come from the completed buffer queue,
+      // so it is okay to add it to the queue rather than to the paused set.
+      // Indeed, it can't be added to the paused set because we didn't pass
+      // through enqueue_previous_paused_buffers.
     }
   }
   enqueue_completed_buffer(node);
@@ -407,14 +624,15 @@
     deallocate_buffer(node);
     return true;
   } else {
-    // Return partially processed buffer to the queue.
-    enqueue_completed_buffer(node);
+    // Buffer incompletely processed because there is a pending safepoint.
+    // Record partially processed buffer, to be finished later.
+    record_paused_buffer(node);
     return true;
   }
 }
 
 void G1DirtyCardQueueSet::abandon_logs() {
-  assert(SafepointSynchronize::is_at_safepoint(), "Must be at safepoint.");
+  assert_at_safepoint();
   abandon_completed_buffers();
 
   // Since abandon is done only at safepoints, we can safely manipulate
@@ -433,7 +651,7 @@
   // Iterate over all the threads, if we find a partial log add it to
   // the global list of logs.  Temporarily turn off the limit on the number
   // of outstanding buffers.
-  assert(SafepointSynchronize::is_at_safepoint(), "Must be at safepoint.");
+  assert_at_safepoint();
   size_t old_limit = max_cards();
   set_max_cards(MaxCardsUnlimited);
 
@@ -448,5 +666,7 @@
   Threads::threads_do(&closure);
 
   G1BarrierSet::shared_dirty_card_queue().flush();
+  enqueue_all_paused_buffers();
+  verify_num_cards();
   set_max_cards(old_limit);
 }
--- a/src/hotspot/share/gc/g1/g1DirtyCardQueue.hpp	Tue Feb 04 12:56:19 2020 -0800
+++ b/src/hotspot/share/gc/g1/g1DirtyCardQueue.hpp	Fri Feb 07 11:09:59 2020 -0800
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2001, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2020, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
@@ -29,11 +29,12 @@
 #include "gc/g1/g1FreeIdSet.hpp"
 #include "gc/shared/ptrQueue.hpp"
 #include "memory/allocation.hpp"
+#include "memory/padded.hpp"
 
+class G1ConcurrentRefineThread;
 class G1DirtyCardQueueSet;
 class G1RedirtyCardsQueueSet;
 class Thread;
-class Monitor;
 
 // A ptrQueue whose elements are "oops", pointers to object heads.
 class G1DirtyCardQueue: public PtrQueue {
@@ -66,15 +67,178 @@
 };
 
 class G1DirtyCardQueueSet: public PtrQueueSet {
-  Monitor* _cbl_mon;  // Protects the list and count members.
-  BufferNode* _completed_buffers_head;
-  BufferNode* _completed_buffers_tail;
+  // Head and tail of a list of BufferNodes, linked through their next()
+  // fields.  Similar to G1BufferNodeList, but without the _entry_count.
+  struct HeadTail {
+    BufferNode* _head;
+    BufferNode* _tail;
+    HeadTail() : _head(NULL), _tail(NULL) {}
+    HeadTail(BufferNode* head, BufferNode* tail) : _head(head), _tail(tail) {}
+  };
 
-  // Number of actual cards in the list of completed buffers.
+  // A lock-free FIFO of BufferNodes, linked through their next() fields.
+  // This class has a restriction that pop() cannot return the last buffer
+  // in the queue, or what was the last buffer for a concurrent push/append
+  // operation.  It is expected that there will be a later push/append that
+  // will make that buffer available to a future pop(), or there will
+  // eventually be a complete transfer via take_all().
+  class Queue {
+    BufferNode* volatile _head;
+    DEFINE_PAD_MINUS_SIZE(1, DEFAULT_CACHE_LINE_SIZE, sizeof(BufferNode*));
+    BufferNode* volatile _tail;
+    DEFINE_PAD_MINUS_SIZE(2, DEFAULT_CACHE_LINE_SIZE, sizeof(BufferNode*));
+
+    NONCOPYABLE(Queue);
+
+  public:
+    Queue() : _head(NULL), _tail(NULL) {}
+    DEBUG_ONLY(~Queue();)
+
+    // Return the first buffer in the queue.
+    // Thread-safe, but the result may change immediately.
+    BufferNode* top() const;
+
+    // Thread-safe add the buffer to the end of the queue.
+    void push(BufferNode& node) { append(node, node); }
+
+    // Thread-safe add the buffers from first to last to the end of the queue.
+    void append(BufferNode& first, BufferNode& last);
+
+    // Thread-safe attempt to remove and return the first buffer in the queue.
+    // Returns NULL if the queue is empty, or if only one buffer is found.
+    // Uses GlobalCounter critical sections to address the ABA problem; this
+    // works with the buffer allocator's use of GlobalCounter synchronization.
+    BufferNode* pop();
+
+    // Take all the buffers from the queue, leaving the queue empty.
+    // Not thread-safe.
+    HeadTail take_all();
+  };
+
+  // Concurrent refinement may stop processing in the middle of a buffer if
+  // there is a pending safepoint, to avoid long delays to safepoint.  A
+  // partially processed buffer needs to be recorded for processing by the
+  // safepoint if it's a GC safepoint; otherwise it needs to be recorded for
+  // further concurrent refinement work after the safepoint.  But if the
+  // buffer was obtained from the completed buffer queue then it can't simply
+  // be added back to the queue, as that would introduce a new source of ABA
+  // for the queue.
+  //
+  // The PausedBuffer object is used to record such buffers for the upcoming
+  // safepoint, and provides access to the buffers recorded for previous
+  // safepoints.  Before obtaining a buffer from the completed buffers queue,
+  // we first transfer any buffers from previous safepoints to the queue.
+  // This is ABA-safe because threads cannot be in the midst of a queue pop
+  // across a safepoint.
+  //
+  // The paused buffers are conceptually an extension of the completed buffers
+  // queue, and operations which need to deal with all of the queued buffers
+  // (such as concatenate_logs) also need to deal with any paused buffers.  In
+  // general, if a safepoint performs a GC then the paused buffers will be
+  // processed as part of it, and there won't be any paused buffers after a
+  // GC safepoint.
+  class PausedBuffers {
+    class PausedList : public CHeapObj<mtGC> {
+      BufferNode* volatile _head;
+      BufferNode* _tail;
+      size_t _safepoint_id;
+
+      NONCOPYABLE(PausedList);
+
+    public:
+      PausedList();
+      DEBUG_ONLY(~PausedList();)
+
+      // Return true if this list was created to hold buffers for the
+      // next safepoint.
+      // precondition: not at safepoint.
+      bool is_next() const;
+
+      // Thread-safe add the buffer to the list.
+      // precondition: not at safepoint.
+      // precondition: is_next().
+      void add(BufferNode* node);
+
+      // Take all the buffers from the list.  Not thread-safe.
+      HeadTail take();
+    };
+
+    // The most recently created list, which might be for either the next or
+    // a previous safepoint, or might be NULL if the next list hasn't been
+    // created yet.  We only need one list because of the requirement that
+    // threads calling add() must first ensure there are no paused buffers
+    // from a previous safepoint.  There might be many list instances existing
+    // at the same time though; there can be many threads competing to create
+    // and install the next list, and meanwhile there can be a thread dealing
+    // with the previous list.
+    PausedList* volatile _plist;
+    DEFINE_PAD_MINUS_SIZE(1, DEFAULT_CACHE_LINE_SIZE, sizeof(PausedList*));
+
+    NONCOPYABLE(PausedBuffers);
+
+  public:
+    PausedBuffers();
+    DEBUG_ONLY(~PausedBuffers();)
+
+    // Test whether there are any paused lists.
+    // Thread-safe, but the answer may change immediately.
+    bool is_empty() const;
+
+    // Thread-safe add the buffer to paused list for next safepoint.
+    // precondition: not at safepoint.
+    // precondition: does not have paused buffers from a previous safepoint.
+    void add(BufferNode* node);
+
+    // Thread-safe take all paused buffers for previous safepoints.
+    // precondition: not at safepoint.
+    HeadTail take_previous();
+
+    // Take all the paused buffers.
+    // precondition: at safepoint.
+    HeadTail take_all();
+  };
+
+  // The primary refinement thread, for activation when the processing
+  // threshold is reached.  NULL if there aren't any refinement threads.