- 🚦 The Java Memory Model (JMM) guarantees visibility, ordering, and atomicity for thread interactions.
- ⚙️ The Java Virtual Machine (JVM) enforces JMM rules across different platforms and hardware architectures.
- ⚠️ Instruction reordering by compilers/CPUs can break thread safety without proper synchronization.
- 🔒 Using `volatile`, `synchronized`, and `final` properly helps maintain JMM-compliant behavior.
- 🔄 Concurrency utilities in `java.util.concurrent` offer JMM-safe abstractions for multithreaded code.
As Java applications scale across multi-core processors, you need to understand how the Java Memory Model (JMM) and the Java Virtual Machine (JVM) work together. The JVM runs your program, but it must follow the JMM's rules, which ensure correct and consistent behavior in multithreaded programs. The two are closely related yet play different roles in Java's concurrency model. This article looks at what each one does, how they differ, and how they cooperate to keep execution both correct and fast.
Understanding the Java Virtual Machine (JVM)
The Java Virtual Machine is a core component of the Java platform. It executes Java bytecode, which lets Java programs run on any platform that provides a JVM: the basis of Java's "write once, run anywhere" promise.
Key Responsibilities
The JVM has several important jobs for running Java programs:
1. Memory Management
The JVM separates memory into different areas when a program runs:
- Heap: Stores all Java objects and class instances.
- Stack: Each thread keeps its own stack for method calls and local variables.
- Method Area: Stores class structure, including metadata and bytecode.
- Program Counter (PC) Register: Keeps track of the next instruction to run.
- Native Method Stack: For native (non-Java) methods called through JNI.
The JVM automatically handles memory. It also cleans up unused memory through garbage collection (GC).
2. Garbage Collection
Java developers do not free memory by hand; the GC automatically reclaims objects that are no longer reachable. Different garbage collectors (e.g., G1, ZGC, Shenandoah) offer trade-offs between throughput, pause times, and memory footprint.
3. Thread Scheduling and Execution
The JVM supports multithreaded execution, handling context switching, resource allocation, and thread scheduling, often in cooperation with the underlying operating system.
4. Platform Independence
Java bytecode can run on any JVM, regardless of OS or hardware. However, different JVM implementations (for example, Oracle's HotSpot, IBM's OpenJ9, GraalVM) may differ in performance characteristics and optimization strategies.
JVM Optimization Techniques
Modern JVMs use several ways to make things run faster:
- Just-In-Time (JIT) compilation: compiles "hot" bytecode into native machine code at runtime.
- Inlining: reduces method-call overhead by copying the called method's body into the caller.
- Lock coarsening and elision: merges adjacent synchronized regions or removes locks entirely when it is safe to do so.
- Escape analysis: determines whether an object can be observed outside its creating method or thread, allowing heap allocation to be avoided when it cannot.
Even with these changes, the JVM must make sure its optimizations do not break JMM rules.
What is the Java Memory Model (JMM)?
The Java Memory Model is a cornerstone of Java's concurrency support. Defined in the Java Language Specification (JLS), it governs how threads interact through memory, providing rules for visibility, atomicity, and the ordering of operations when multiple threads access shared variables.
Key Guarantees
1. Visibility
When one thread changes a shared variable, the JMM defines when other threads are guaranteed to see the new value, for example after a `volatile` write or a `synchronized` block. This keeps data consistent between threads.
2. Ordering
The JMM permits limited instruction reordering for performance, but it imposes strict constraints on which reorderings may become visible across threads, preventing whole classes of race conditions.
3. Atomicity
It guarantees that certain single operations (such as reads and writes of `volatile` variables, and of most primitive types) are indivisible and cannot be interrupted by other threads. Note that compound actions such as incrementing an `int` (`count++`) are not atomic: they are a read-modify-write sequence.
By defining these rules clearly, the JMM makes it possible to understand how operations work together. This is true no matter how the JVM or hardware behaves.
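The atomicity distinction is easy to demonstrate: `count++` on a plain `int` is a read-modify-write sequence that two threads can interleave, while `AtomicInteger.incrementAndGet()` is a single indivisible operation. A minimal sketch (class name is ours, purely illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicityDemo {
    static int plainCount = 0;                                   // ++ on this is NOT atomic
    static final AtomicInteger atomicCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                plainCount++;                  // read, add, write: three interleavable steps
                atomicCount.incrementAndGet(); // one indivisible operation
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        // atomicCount is always 200000; plainCount is usually less due to lost updates
        System.out.println("plain=" + plainCount + ", atomic=" + atomicCount.get());
    }
}
```

Running this repeatedly shows the atomic counter always reaching 200000 while the plain counter frequently falls short.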
Why Was the JMM Introduced?
Before Java 1.5, multithreaded code was prone to subtle bugs that behaved differently across platforms. Hardware memory models, compiler behavior, and processor optimizations all varied, making program behavior hard to predict.
Common Pre-JMM Problems
- Dirty reads: a thread reads partially written or stale data.
- Invisible writes: Changes made in one thread cannot be seen by others.
- Reordering issues: Commands are put in a different order than expected, causing logic errors.
- Data Races: Two threads try to use shared data at the same time. At least one of them writes to it. This leads to unpredictable results.
To fix these problems, the Java Community Process brought in the Java Memory Model under JSR-133. This made programs more predictable. It also gave clear rules for how synchronization should work.
How the JMM and JVM Work Together
Think of the JMM as the physics rules for Java threads. And think of the JVM as an engineer building machines that follow those rules. The JMM sets out how threads should act. Then the JVM must make those rules work on real hardware.
Compliance Across Architectures
Different CPUs have different ways they handle memory. For example:
- x86 architectures usually have strong memory guarantees.
- ARM and PowerPC architectures are less strict. They let things be reordered more freely.
The JVM translates the JMM's rules into memory barriers and fences appropriate for the CPU architecture it runs on, ensuring the same Java program produces correct results on both x86 and ARM.
Memory Barriers and Fences
Memory fences are low-level CPU instructions that enforce ordering. The JVM inserts these barriers to make sure writes become visible to other CPUs exactly as the JMM requires.
For example, accessing a `volatile` field causes the JVM to emit memory-fencing instructions, which prevent unsafe reorderings and visibility problems.
Thread Communication and Shared Variables
Java threads do not talk by sending messages. Instead, they share memory. This shared memory has fields in objects, parts of arrays, and even static fields.
Working Memory vs Main Memory
Each thread may cache variables in its own working memory (CPU registers or L1 caches), which can drift out of sync with main memory. Without proper synchronization, one thread may be updating its local copy while another reads a stale value from main memory.
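The practical consequence of working memory is that, without synchronization, a spinning thread may never observe another thread's update. The sketch below (class name is ours) uses a `volatile` flag so the update is guaranteed to become visible; with a plain field, the loop could spin forever on a stale cached value:

```java
public class StaleReadDemo {
    // volatile guarantees the reader's view is refreshed from main memory
    static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy-wait; with a plain (non-volatile) field this loop
                // might never see the writer's update
            }
            System.out.println("Reader observed running = false");
        });
        reader.start();
        Thread.sleep(100);   // let the reader spin briefly
        running = false;     // the volatile write becomes visible to the reader
        reader.join(1000);   // the reader should now terminate promptly
    }
}
```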
JMM to the Rescue
The Java Memory Model defines when values in local working memory must be seen by all threads. For example:
- Writing to a `volatile` variable flushes local changes to main memory.
- Acquiring or releasing a monitor (`synchronized`) synchronizes a thread's local memory with main memory.
Without these mechanisms, visibility failures can occur, as this code snippet shows:
```java
class SharedData {
    boolean flag = false;

    void writer() {
        flag = true;
    }

    void reader() {
        if (flag) {
            System.out.println("Flag detected!");
        }
    }
}
```
A thread calling writer() might set flag to true, yet another thread calling reader() may never observe the change. This is exactly the kind of problem the JMM addresses.
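Under the JMM, the standard fix is to declare the flag `volatile`, which creates a happens-before edge from the write to any subsequent read that observes it. A minimal sketch (class renamed here to keep it distinct from the snippet above):

```java
class SharedDataFixed {
    // volatile: the write in writer() becomes visible to all subsequent readers
    volatile boolean flag = false;

    void writer() {
        flag = true;                 // flushed to main memory
    }

    void reader() {
        if (flag) {                  // guaranteed to eventually observe the update
            System.out.println("Flag detected!");
        }
    }
}
```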
Instruction Reordering and Its Pitfalls
Compilers and CPUs often reorder instructions to maximize performance. These reorderings are invisible in single-threaded programs, but they can cause serious problems in multithreaded ones.
Real-World Example: Double-Checked Locking
Look at this common way to set up something only when needed:
```java
class Singleton {
    private static Singleton instance;

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```
Without `volatile`, this code is broken: because of reordering, the reference can be assigned to `instance` before the object is fully constructed, so other threads may observe a partially initialized object. (Before Java 1.5 the pattern could not be fixed at all; JSR-133 strengthened `volatile` semantics so that it can be.)
Solution
Using volatile:
```java
private static volatile Singleton instance;
```
This guarantees the necessary memory ordering and visibility, making the pattern safe under the JMM.
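Putting the pieces together, a JMM-safe version of the pattern might look like the following sketch (class name is ours; the local-variable copy is a common micro-optimization that limits the fast path to a single volatile read):

```java
class SafeSingleton {
    // volatile prevents the reference from being published
    // before the constructor has finished running
    private static volatile SafeSingleton instance;

    private SafeSingleton() { }

    static SafeSingleton getInstance() {
        SafeSingleton local = instance;          // one volatile read on the fast path
        if (local == null) {
            synchronized (SafeSingleton.class) {
                local = instance;                // re-check under the lock
                if (local == null) {
                    instance = local = new SafeSingleton();
                }
            }
        }
        return local;
    }
}
```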
The Happens-Before Relationship
A key concept in the JMM is the happens-before relationship: if action A happens-before action B, then the effects of A are visible to B. Without a happens-before relationship between two actions, the JMM makes no guarantee about what either thread observes.
Common Happens-Before Relationships
- Program order: each action in a thread happens-before every subsequent action in that same thread.
- A write to a `volatile` variable happens-before every subsequent read of that variable.
- Releasing a `synchronized` monitor happens-before any subsequent acquisition of the same monitor.
- Calling `Thread.start()` happens-before any action in the started thread.
- All actions in a thread happen-before another thread returns from `join()` on it.
Understanding and deliberately establishing these relationships is essential for writing predictable, thread-safe code.
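Two of these edges, `Thread.start()` and `Thread.join()`, are already enough to pass plain (non-volatile) data between threads safely. A small illustrative sketch (class name is ours):

```java
public class HappensBeforeDemo {
    static int data = 0;   // deliberately NOT volatile: safety comes from start()/join()

    public static void main(String[] args) throws InterruptedException {
        data = 42;                      // (1) written before start()
        Thread t = new Thread(() -> {
            // start() happens-before every action in this thread,
            // so the write of 42 is guaranteed to be visible here
            data++;                     // (2)
        });
        t.start();
        t.join();                       // every action in t happens-before join() returns
        System.out.println(data);       // guaranteed to print 43
    }
}
```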
Use of Volatile, Synchronized, and Final
Java has language features that work directly with the memory model. These help make threads safe.
1. Volatile
- Guarantees visibility.
- Does not make compound operations atomic (e.g., `count++`), though individual reads and writes of a `volatile` variable are themselves atomic.
- Best for simple state flags.
Example:
```java
private volatile boolean ready = false;
```
2. Synchronized
- Provides mutual exclusion: only one thread at a time can execute the guarded code. It also synchronizes memory on entry and exit.
- Establishes happens-before relationships.
- Heavier-weight than `volatile`, but more powerful: it can protect compound actions, not just single reads and writes.
Example:
```java
synchronized (this) {
    // Critical section
}
```
3. Final
- If a `final` field is set in the constructor and the object is not published before construction completes, other threads are guaranteed to see the field's initialized value.
- This allows immutable objects to be shared safely without additional synchronization.

Be careful not to let `this` escape the constructor or to start threads inside constructors; doing so breaks these safety guarantees.
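As an illustration of safe publication via `final` fields, consider this small immutable class (a sketch; the name is ours). Because both fields are `final` and `this` never escapes the constructor, any thread that obtains a reference to a `Point` is guaranteed to see `x` and `y` fully initialized:

```java
final class Point {
    private final int x;   // final fields are "frozen" when the constructor completes
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
        // Important: do not leak `this` here (e.g., to a listener or another thread).
    }

    int x() { return x; }
    int y() { return y; }
}
```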
Impact of JMM on JVM Implementations
Every JVM must follow the JMM. This is true no matter how it schedules threads or compiles bytecode. But how it works inside can be different:
- HotSpot (Oracle): aggressive JIT optimizations such as biased locking and adaptive compilation.
- OpenJ9 (IBM): optimized for low memory footprint and cloud-oriented workloads.
- GraalVM: a modern JIT compiler with additional optimizations for polyglot programs.
Even with these differences, all of them follow the Java Memory Model. This makes Java code that uses threads portable and predictable.
Common Java Concurrency Issues Without JMM Awareness
Most concurrency problems stem from misunderstanding the memory model, or from ignoring it:
- Incorrect double-checked locking: fixed with `volatile`.
- Stale reads: fixed with correctly synchronized access.
- Partially constructed objects: fixed by safely publishing objects to other threads.
These bugs are often intermittent and hard to reproduce, which is why programs must be designed with the JMM in mind from the start.
JMM with Modern Java Concurrency Tools
Java ships with powerful concurrency utilities that are designed to strictly follow the Memory Model:
- `java.util.concurrent` guarantees correct visibility and synchronization.
- Classes such as `ConcurrentHashMap`, `AtomicInteger`, `ExecutorService`, and `Semaphore` hide the low-level details of concurrency control.
- Using them avoids hand-written synchronization and lowers the chance of bugs.
By building on these tools, developers can create scalable, thread-safe programs far more reliably.
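As a sketch of how these pieces fit together (the class name and the counter key are illustrative, not from any library), the following combines `ExecutorService`, `ConcurrentHashMap`, and `AtomicInteger` with no explicit `synchronized` blocks:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ConcurrencyToolsDemo {
    // ConcurrentHashMap handles visibility and internal locking for us
    static final ConcurrentHashMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> {
                counts.computeIfAbsent("hits", k -> new AtomicInteger())
                      .incrementAndGet();   // atomic per-key counter, no synchronized needed
            });
        }
        pool.shutdown();                             // stop accepting new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS); // wait for queued tasks to finish
        System.out.println(counts.get("hits").get());
    }
}
```

Every JMM subtlety here (visibility of the map's contents, atomicity of the counter) is handled by the library classes rather than by hand-written synchronization.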
Recap: JMM vs JVM
| Feature | Java Memory Model (JMM) | Java Virtual Machine (JVM) |
|---|---|---|
| Role | Formal specification | Runtime engine |
| Defines | Memory visibility, ordering, atomicity | Execution of bytecode, memory management |
| Ensures | Thread-safe semantics | Performance, resource management, JMM compliance |
| Impacts | Code design and concurrency | Platform portability and optimization |
| Example Concern | volatile reads and writes | Just-In-Time (JIT) compilation |
Understanding the responsibilities and boundaries between the JMM and JVM helps avoid concurrency bugs and develop high-performing Java applications.
Practical Tip Summary for Devsolus Readers
- ✅ Use `volatile` for state indicators that don't require atomic updates
- ✅ Lean on `java.util.concurrent` abstractions for thread-safe design
- ✅ Apply `synchronized` blocks to protect shared mutable state
- ✅ Never publish the `this` reference during object construction
- ✅ Test and profile on multiple JVM variations for performance SLAs
- ✅ Keep up-to-date with the Java Language and JVM specifications
Mastering these principles helps you write multithreaded Java applications that are safe, fast, and easy to maintain.