2. Structure What is a memory model about? Platform memory models Annotation: Formal semantics Pro & Contra The new Java Memory Model (JMM) History Atomicity rules Ordering rules Visibility rules Piggybacking on synchronization Publication Initialization safety Changes Implementation sketch
3. What is a memory model about? A memory model defines how threads interact through shared memory. Therefore you need rules for Atomicity: Which operations are atomic? Ordering: In which order do operations happen? Visibility: When are modifications visible to other processors? These depend on the memory model at the processor level, because of the varying degrees of cache coherence across different architectures.
5. Platform memory models A platform memory model defines what guarantees programs can expect from the memory system Atomicity: Which operations are not interrupted by the scheduler? Ordering: In which situations are operations reordered to achieve maximum performance? Visibility: Which writes to shared memory are visible to other processors at any given time? It specifies memory barriers (special processor instructions) to enforce ordering constraints Some processors exhibit a strong memory model, others exhibit a weaker memory model (different processors may see different values for the same memory location).
6. JMM and PMM The JMM can be seen as an abstraction layer across different multiprocessor architectures. Depending on the underlying hardware, the Java compiler deals with the differences by inserting memory barriers at the appropriate places to maintain the rules of the JMM.
7. Annotation: Formal semantics Together with the definition of single-threaded execution, the memory model specifies the formal semantics of a programming language. Typically, in a single-threaded environment the compiler is required to implement as-if-serial semantics. As-if-serial: All actions of a thread may be reordered for optimization, as long as the result is the same as if they had been executed in program order.
8. Pro & Contra Pro A programmer can reason about code execution. Atomicity and visibility are well-regulated. Data races can be prevented because of the ordering rules. A data race occurs if reads and writes of different threads on the same variable are not ordered by a happens-before relation. A compiler / processor is able to perform important optimizations, like loop fusion, in a multi-threaded environment to achieve maximum performance. Contra The memory model also incurs performance costs, because of locks and other inter-thread communication.
9. History 1991: Oak was developed under the name "The Green Project" by James Gosling and other developers within 18 months. The project was initiated by Sun Microsystems to develop a kernel for executing safe, portable, object-oriented and multi-threaded software on set-top boxes. 1995: Java 1.0 was published by Sun Microsystems.
10. History 2004: The original memory model was updated through the Java Community Process, as Java Specification Request 133, for Java 5.0, because the old one prevented many runtime optimizations and it didn’t provide enough guarantees for code safety.
11. Atomicity rules Accesses to variables of primitive types (excluding long and double) and reference variables are atomic. Accesses to volatile variables of primitive types (including long and double) and reference variables are atomic. Operations on atomic variables in the package "java.util.concurrent.atomic" are atomic, too.
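The distinction matters in practice: a plain long write may be torn into two 32-bit writes, a volatile long write is atomic but a volatile increment is not, and the atomic classes make even read-modify-write operations atomic. A minimal sketch (class name and counts are illustrative assumptions, not from the slides):

```java
import java.util.concurrent.atomic.AtomicLong;

public class AtomicityDemo {
    // Plain long: a single write may be split into two 32-bit writes (word tearing).
    static long plainCounter;
    // volatile long: individual reads and writes are atomic,
    // but volatileCounter++ is still a non-atomic read-modify-write.
    static volatile long volatileCounter;
    // Atomic classes: even read-modify-write operations are atomic.
    static final AtomicLong atomicCounter = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                atomicCounter.incrementAndGet(); // atomic increment
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(atomicCounter.get()); // always 200000
    }
}
```

Had the threads used `volatileCounter++` instead, lost updates would be possible despite the atomicity of each individual read and write.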
12. Ordering rules The JMM defines a happens-before partial order over all actions in a multi-threaded program. The order guarantees: Thread B will see the result of Thread A, if there exists a happens-before relation between A's actions and B's actions (because of the corresponding visibility rules). If there exists no happens-before relation, instructions may be reordered at will.
13. Happens-before order Program order rule Monitor lock rule Volatile variable rule Thread start rule Thread termination rule Interruption rule Finalizer rule Transitivity
15. Piggybacking on synchronization Piggybacking on synchronization means combining the program order rule with the monitor lock rule or the volatile variable rule, exploiting the ordering and visibility guarantees of a lock or volatile variable that already appears in the code sequence. Pro: Not every method must be synchronized. Performance benefit A global order can be asserted by using a volatile variable as a guard.
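A runnable sketch of piggybacking (the class name is an illustrative assumption): the plain fields a and b are published to the reader solely by piggybacking on the volatile guard, via transitivity of happens-before.

```java
public class Piggyback {
    static int a, b;               // plain fields, no synchronization of their own
    static volatile boolean ready; // volatile guard

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            a = 1;        // (1) happens-before (2) by the program order rule
            b = 2;
            ready = true; // (2) volatile write
        });
        Thread reader = new Thread(() -> {
            while (!ready) { }         // (3) volatile read; (2) happens-before (3)
            // by transitivity (1) happens-before (3): a and b are visible
            System.out.println(a + b); // always prints 3
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}
```

Without the volatile modifier on ready, the reader could spin forever or observe stale values of a and b.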
16. Publication The absence of a happens-before order between publishing a shared object and accessing it can let another thread see a partially constructed object.
@NotThreadSafe
public class UnsafeLazyInitialization {
    private static Resource resource;
    public static Resource getInstance() {
        if (resource == null)
            resource = new Resource();
        return resource;
    }
}
17. Thread-safe lazy initialization Thread-safe lazy initialization is guaranteed by synchronizing the getInstance method.
@ThreadSafe
public class SafeLazyInitialization {
    private static Resource resource;
    public static synchronized Resource getInstance() {
        if (resource == null)
            resource = new Resource();
        return resource;
    }
}
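A small self-contained check of the guarantee (the nested Resource class is a trivial stand-in for the slide's Resource type): two threads racing on getInstance must observe the same instance, because the monitor lock rule orders the check-then-create sequences.

```java
public class SafeLazyInitialization {
    // Trivial stand-in for the Resource type on the slide (assumption).
    static class Resource { }

    private static Resource resource;

    public static synchronized Resource getInstance() {
        if (resource == null)
            resource = new Resource();
        return resource;
    }

    public static void main(String[] args) throws InterruptedException {
        Resource[] seen = new Resource[2];
        Thread t1 = new Thread(() -> seen[0] = getInstance());
        Thread t2 = new Thread(() -> seen[1] = getInstance());
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(seen[0] == seen[1]); // always true: one shared instance
    }
}
```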
18. Double-checked locking An antipattern (not thread-safe under the original JMM), invented to reduce the impact of synchronization. In the original JMM, synchronization was expensive in terms of performance. Since JRE 5.0 it is obsolete, because of a tuned JVM (especially the JMM). Now it is better to use the lazy initialization holder class idiom to gain performance, because it offers the same benefit and is easier to understand.
19. Double-checked locking
@NotThreadSafe
public class DoubleCheckedLocking {
    private static Resource resource;
    public static Resource getInstance() {
        if (resource == null) {
            synchronized (DoubleCheckedLocking.class) {
                if (resource == null)
                    resource = new Resource();
            }
        }
        return resource;
    }
}
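For completeness: under the JMM of Java 5 and later, double-checked locking does become correct if the field is declared volatile, because the volatile variable rule prevents the reordering that could expose a partially constructed Resource. A sketch (class name and the trivial Resource stand-in are assumptions):

```java
public class VolatileDoubleCheckedLocking {
    // Trivial stand-in for the Resource type on the slide (assumption).
    static class Resource { }

    // volatile is what makes this idiom safe since Java 5
    private static volatile Resource resource;

    public static Resource getInstance() {
        Resource local = resource;  // single volatile read on the fast path
        if (local == null) {
            synchronized (VolatileDoubleCheckedLocking.class) {
                local = resource;   // re-check under the lock
                if (local == null)
                    resource = local = new Resource();
            }
        }
        return local;
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // true: one instance
    }
}
```

Even so, the holder class idiom on the next slide remains the simpler choice.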
20. Lazy initialization holder class idiom Combines lazy initialization with eager class initialization: the holder class is initialized only on first use, and the JVM guarantees that class initialization is thread-safe, so no synchronization is required.
@ThreadSafe
public class ResourceFactory {
    private static class ResourceHolder {
        public static Resource resource = new Resource();
    }
    public static Resource getResource() {
        return ResourceHolder.resource;
    }
}
21. Initialization safety Allows properly constructed immutable objects to be safely shared across threads without synchronization. Threads obtaining a reference to such an object are guaranteed to see up-to-date values of its final fields. Initial writes aren't reordered with accesses that come after the final fields are frozen.
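A minimal example of a class that benefits from initialization safety (the Point class is an illustrative assumption): because both fields are final, any thread that obtains a reference to a properly published Point sees the constructor's values without further synchronization.

```java
public class Point {
    // final fields: frozen at the end of the constructor; any thread that
    // sees a properly published Point sees these values, unsynchronized
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
        // freeze of the final fields happens here, at constructor exit
    }

    public int x() { return x; }
    public int y() { return y; }

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        System.out.println(p.x() + p.y()); // 7
    }
}
```

Note the guarantee covers only final fields; a non-final field of the same object could still appear stale to another thread.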
22. Changes Accesses to volatile variables won't be reordered with any other memory operation. Useful for using a volatile variable as a guard:
static int a = 0, b = 0;
static int x = 0, y = 0;
static volatile boolean initialized = false;

// Thread A
a = 1;
b = 1;
initialized = true;

// Thread B
while (!initialized) { }
x = a;
y = b;
23. Changes Introduction of the happens-before order The initialization of final fields will never be reordered with operations following the freeze of the final fields at the end of the constructor. Final means final.
24. Implementation sketch Depending on the underlying hardware platform, the Java compiler inserts the right memory barriers/fences so that the Java memory model holds. In a general model, the following fences exist: LoadLoad StoreStore LoadStore StoreLoad
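Since Java 9, these generic barriers are roughly exposed to Java code through the static fence methods on java.lang.invoke.VarHandle; a sketch of the mapping (the field names are illustrative assumptions, and the fences here are redundant in a single thread):

```java
import java.lang.invoke.VarHandle;

public class FenceDemo {
    static int data;
    static boolean published;

    public static void main(String[] args) {
        data = 42;
        // releaseFence ~ StoreStore + LoadStore: prior accesses are not
        // reordered past subsequent writes
        VarHandle.releaseFence();
        published = true;

        if (published) {
            // acquireFence ~ LoadLoad + LoadStore: the prior read is not
            // reordered past subsequent accesses
            VarHandle.acquireFence();
            System.out.println(data); // 42
        }
        // VarHandle.fullFence() additionally covers StoreLoad,
        // typically the most expensive barrier
    }
}
```

Ordinary code should prefer volatile fields or locks; explicit fences are for low-level concurrent data structures.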
25. Implementation sketch A synchronized block can be realized as follows (the barriers are pseudocode, shown as comments):
public class X {
    private int a, i;
    public synchronized void f() {
        // LoadLoad
        // LoadStore
        i = a;
        a = i;
        // StoreLoad
        // StoreStore
    }
}