Do any JVM's JIT compilers generate code that uses vectorized floating point instructions?
Let's say the bottleneck of my Java program really is some tight loops computing a bunch of vector dot products. Yes, yes, it's the bottleneck; yes, it matters; yes, that's just how the algorithm is; yes, I've run Proguard to optimize the bytecode.
The work is, essentially, dot products. That is, I have two float[50] arrays and need to compute the sum of their pairwise products. I know processor instruction sets exist to perform this kind of operation quickly and in bulk, such as SSE or MMX.
Yes, I can probably access these by writing some native code with JNI. But a JNI call turns out to be quite expensive.
I know you can't guarantee what a JIT will or won't compile. Has anyone ever heard of JIT-generated code that uses these instructions? And if so, is there anything about the Java code that helps make it compilable this way?
Probably a "no", but it's worth asking.
So, basically, you want your code to run faster. JNI is the answer. I know you said it didn't work for you, but let me show you that you are wrong.
Here's Dot.java:
import java.nio.FloatBuffer;
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.annotation.*;

@Platform(include = "Dot.h", compiler = "fastfpu")
public class Dot {
    static { Loader.load(); }

    static float[] a = new float[50], b = new float[50];

    // Pure Java version: HotSpot may or may not vectorize this loop.
    static float dot() {
        float sum = 0;
        for (int i = 0; i < 50; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    // Accessors for the native arrays and the native dot product in Dot.h.
    static native @MemberGetter FloatPointer ac();
    static native @MemberGetter FloatPointer bc();
    static native @NoException float dotc();

    public static void main(String[] args) {
        // Direct NIO buffers backed by the native ac[] and bc[] arrays.
        FloatBuffer ab = ac().capacity(50).asBuffer();
        FloatBuffer bb = bc().capacity(50).asBuffer();

        // Warmup: exercise both versions so the JIT compiles them.
        for (int i = 0; i < 10000000; i++) {
            a[i % 50] = b[i % 50] = dot();
            float sum = dotc();
            ab.put(i % 50, sum);
            bb.put(i % 50, sum);
        }

        long t1 = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            a[i % 50] = b[i % 50] = dot();
        }
        long t2 = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            float sum = dotc();
            ab.put(i % 50, sum);
            bb.put(i % 50, sum);
        }
        long t3 = System.nanoTime();

        System.out.println("dot(): " + (t2 - t1) / 10000000 + " ns");
        System.out.println("dotc(): " + (t3 - t2) / 10000000 + " ns");
    }
}
and Dot.h:
float ac[50], bc[50];

// Native version of the same loop, which the C++ compiler can auto-vectorize.
inline float dotc() {
    float sum = 0;
    for (int i = 0; i < 50; i++) {
        sum += ac[i] * bc[i];
    }
    return sum;
}
We can compile and run that with JavaCPP using this command:
$ java -jar javacpp.jar Dot.java -exec
On an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz with Fedora 30, GCC 9.1.1, and OpenJDK 8 or 11, I get this kind of output:
dot(): 39 ns
dotc(): 16 ns
Or roughly 2.4 times faster. We need to use direct NIO buffers instead of arrays, but HotSpot can access direct NIO buffers as fast as arrays. On the other hand, manually unrolling the loop does not provide a measurable boost in performance, in this case.
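The key detail here is the direct NIO buffers. For reference, this is how one can be allocated in plain Java without JavaCPP (a minimal sketch of my own, not code from the original answer):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class DirectBufferExample {
    public static void main(String[] args) {
        // A direct buffer lives outside the Java heap, so native code can
        // read and write it without copying, while HotSpot still accesses
        // it about as fast as a plain array.
        FloatBuffer buf = ByteBuffer.allocateDirect(50 * Float.BYTES)
                                    .order(ByteOrder.nativeOrder())
                                    .asFloatBuffer();
        buf.put(0, 1.0f);
        System.out.println(buf.get(0));
    }
}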
To address some of the scepticism expressed by others here, I suggest that anyone who wants to prove it to themselves or others use the following method:
- Create a JMH project
- Write a small snippet of vectorizable math.
- Run the benchmark flipping between -XX:-UseSuperWord and -XX:+UseSuperWord (the default)
- If no difference in performance is observed, your code probably didn't get vectorized
- To make sure, run your benchmark such that it prints out the assembly. On Linux you can use the perfasm profiler ('-prof perfasm'): have a look and see if the instructions you expect get generated.
Example:
@Benchmark
@CompilerControl(CompilerControl.Mode.DONT_INLINE) // makes looking at the assembly easier
public void inc() {
    for (int i = 0; i < a.length; i++) {
        a[i]++; // a is an int[]; I benchmarked with size 32K
    }
}
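For completeness, the surrounding JMH harness would look something like this (a sketch; the class name and setup are reconstructed from the comments above, not taken from the original post):

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class VectorMathBench {
    int[] a;

    @Setup
    public void setup() {
        a = new int[32 * 1024]; // "size 32K", per the comment above
    }

    @Benchmark
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    public void inc() {
        for (int i = 0; i < a.length; i++) {
            a[i]++;
        }
    }
}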
The results with and without the flag (on a recent Haswell laptop, Oracle JDK 8u60), in nanoseconds per op:
-XX:+UseSuperWord : 475.073 ± 44.579 ns/op
-XX:-UseSuperWord : 3376.364 ± 233.211 ns/op
The assembly for the hot loop is a bit much to format and stick in here, but here's a snippet (hsdis.so was failing to format some of the AVX2 vector instructions, so I ran with -XX:UseAVX=1): -XX:+UseSuperWord (with '-prof perfasm:intelSyntax=true')
9.15% 10.90% │││ │↗ 0x00007fc09d1ece60: vmovdqu xmm1,XMMWORD PTR [r10+r9*4+0x18]
10.63% 9.78% │││ ││ 0x00007fc09d1ece67: vpaddd xmm1,xmm1,xmm0
12.47% 12.67% │││ ││ 0x00007fc09d1ece6b: movsxd r11,r9d
8.54% 7.82% │││ ││ 0x00007fc09d1ece6e: vmovdqu xmm2,XMMWORD PTR [r10+r11*4+0x28]
│││ ││ ;*iaload
│││ ││ ; - psy.lob.saw.VectorMath::inc@17 (line 45)
10.68% 10.36% │││ ││ 0x00007fc09d1ece75: vmovdqu XMMWORD PTR [r10+r9*4+0x18],xmm1
10.65% 10.44% │││ ││ 0x00007fc09d1ece7c: vpaddd xmm1,xmm2,xmm0
10.11% 11.94% │││ ││ 0x00007fc09d1ece80: vmovdqu XMMWORD PTR [r10+r11*4+0x28],xmm1
│││ ││ ;*iastore
│││ ││ ; - psy.lob.saw.VectorMath::inc@20 (line 45)
11.19% 12.65% │││ ││ 0x00007fc09d1ece87: add r9d,0x8 ;*iinc
│││ ││ ; - psy.lob.saw.VectorMath::inc@21 (line 44)
8.38% 9.50% │││ ││ 0x00007fc09d1ece8b: cmp r9d,ecx
│││ │╰ 0x00007fc09d1ece8e: jl 0x00007fc09d1ece60 ;*if_icmpge
Have fun storming the castle!
In HotSpot versions beginning with Java 7u40, the server compiler provides support for auto-vectorisation, according to JDK-6340864.
However, this seems to be true only for "simple loops", at least for the moment: for example, accumulating an array cannot be vectorised yet (JDK-7192383).
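To illustrate the distinction (my own sketch, not code from the linked reports): the first loop below has independent iterations and is the kind of "simple loop" the SuperWord pass can vectorise; the second carries a running sum across iterations, which is the reduction pattern JDK-7192383 tracked as not yet vectorised.

static void scale(float[] a, float s) {
    for (int i = 0; i < a.length; i++) {
        a[i] *= s; // independent iterations: vectorisable
    }
}

static float accumulate(float[] a) {
    float sum = 0;
    for (int i = 0; i < a.length; i++) {
        sum += a[i]; // loop-carried dependency on sum: a reduction
    }
    return sum;
}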
Here is a nice article about experimenting with Java and SIMD instructions, written by a friend of mine: http://prestodb.rocks/code/simd/
Its general outcome is that you can expect the JIT to use some SSE operations in 1.8 (and some more in 1.9). Though you should not expect much, and you need to be careful.
You could write an OpenCL kernel to do the computing and run it from Java: http://www.jocl.org/
The code can run on the CPU and/or GPU, and the OpenCL language also supports vector types, so you should be able to take explicit advantage of, e.g., SSE3/4 instructions.
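As a rough illustration of what such a kernel might look like (my own sketch, not code from the JOCL site; the kernel and argument names are made up):

public class DotKernelSource {
    // OpenCL C source held as a Java string, for use with a host binding
    // such as JOCL. float4 is an OpenCL vector type and dot() is the
    // built-in 4-component dot product. The two float[50] arrays would be
    // padded to a multiple of 4, and the per-work-item partial sums
    // reduced on the host (or in a second kernel).
    static final String SRC =
        "__kernel void dot4(__global const float4* a,\n" +
        "                   __global const float4* b,\n" +
        "                   __global float* partial) {\n" +
        "    int i = get_global_id(0);\n" +
        "    partial[i] = dot(a[i], b[i]);\n" +
        "}\n";
}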
I'm guessing you wrote this question before you found out about netlib-java ;-) It provides exactly the native API you require, with machine-optimised implementations, and has no cost at the native boundary thanks to memory pinning.
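For example, the dot product from the question maps directly onto the standard BLAS sdot routine (a sketch assuming netlib-java's com.github.fommil.netlib.BLAS facade with the usual sdot signature):

import com.github.fommil.netlib.BLAS;

public class NetlibDot {
    public static void main(String[] args) {
        float[] a = new float[50], b = new float[50];
        java.util.Arrays.fill(a, 1.0f);
        java.util.Arrays.fill(b, 2.0f);
        // sdot(n, x, incx, y, incy): single-precision dot product,
        // dispatched to a machine-optimised native BLAS when one is
        // installed, with a pure-Java fallback otherwise.
        float sum = BLAS.getInstance().sdot(50, a, 1, b, 1);
        System.out.println(sum); // 100.0
    }
}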
Have a look at Performance comparison between Java and JNI for optimal implementation of computational micro-kernels. It shows that the Java HotSpot VM server compiler supports auto-vectorization using Super-word Level Parallelism, which is limited to simple cases of parallelism inside the loop. The article will also give you some guidance on whether your data size is large enough to justify taking the JNI route.
I don't believe most VMs, if any, are ever smart enough for this sort of optimisation. To be fair, most optimisations are much simpler, such as shifting instead of multiplying when a value is a power of two. The Mono project introduced its own vector type and other methods with native backings to help performance.
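For instance, the shift optimisation mentioned above looks like this (a trivial sketch of my own):

public class StrengthReduction {
    public static void main(String[] args) {
        int n = 7;
        // Multiplying by a power of two can be compiled down to a shift:
        System.out.println(n * 8);  // what the source says
        System.out.println(n << 3); // what the JIT may actually emit: 56
    }
}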