[Konkurens] atomic classes and volatile

Menczer Andor menczer.andor at gmail.com
Thu Oct 16 16:01:25 CEST 2025


Hi Everyone,

Unfortunately I couldn't attend last week's meeting. However, I just
watched the recording and noticed some things were mentioned, but not
addressed during the meeting:

*volatile in Java:*

I strongly discourage covering volatile in class. It's okay to mention it
during a one-on-one with a more knowledgeable student, but in general I feel
it's more confusing than helpful for beginners. In fact, I often see even
lecturers misusing volatile.

The keyword volatile should *never* be used to deal with race conditions.
Volatile is strictly for managing *visibility* in a parallel computing
environment. In simple terms, this means that volatile reduces the risk of
a thread not seeing the effect of another thread's operation *after* that
operation has already finished. This is usually done by restricting both
compile-time and runtime optimizations, as well as bypassing low-level
caching. By doing so, the variable is not only guaranteed to remain in the
compiled code, but every read and write is performed against the actual
memory space (e.g. RAM) rather than a locally cached copy.
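
To make the visibility point concrete, here is a minimal sketch of what
volatile *is* good for (the class name, timing, and output are just
placeholders of mine): one thread sets a flag, and the only guarantee we
need is that the other thread eventually sees it.

    public class StopFlagDemo {
        // Without volatile, the worker thread might never observe the update.
        private static volatile boolean running = true;

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                while (running) {
                    // spin: simulates work whose loop condition is the shared flag
                }
                System.out.println("Worker observed the stop signal.");
            });
            worker.start();

            Thread.sleep(100);
            running = false;   // volatile write: guaranteed to become visible to the worker
            worker.join();
        }
    }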

In short, mutual exclusion and critical sections are used to guarantee
thread-safety *during* the problematic operation, while visibility is about
whether the effect reaches other threads *after* the operation. Relying on
the assumption that most processors can perform simple operations on
primitives in a single instruction, and that we therefore only have to deal
with visibility when it comes to primitives, is both dangerous and
incorrect.

tldr: "x++" is not thread-safe even if x is a volatile int. Either use
atomic classes or explicit locking.
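
To demonstrate (a quick sketch of mine, with arbitrary thread and iteration
counts): both counters below are incremented the same number of times, but
the plain volatile one routinely loses updates, because "x++" is really a
separate read, add, and write.

    import java.util.concurrent.atomic.AtomicInteger;

    public class CounterRace {
        private static volatile int volatileCounter = 0;
        private static final AtomicInteger atomicCounter = new AtomicInteger();

        public static void main(String[] args) throws InterruptedException {
            Thread[] threads = new Thread[8];
            for (int t = 0; t < threads.length; t++) {
                threads[t] = new Thread(() -> {
                    for (int i = 0; i < 100_000; i++) {
                        volatileCounter++;               // NOT atomic: updates can be lost
                        atomicCounter.incrementAndGet(); // atomic read-modify-write
                    }
                });
                threads[t].start();
            }
            for (Thread t : threads) t.join();

            System.out.println("expected: " + 8 * 100_000);
            System.out.println("volatile: " + volatileCounter);      // usually less than expected
            System.out.println("atomic:   " + atomicCounter.get());  // always the expected total
        }
    }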

*atomic classes and custom operators:*

In both Java and C++, atomic classes *do not* guarantee lock-free
behaviour. This is highly dependent on the hardware and on the language
implementation. The "atomic" in the name does not refer to what the
implementation actually does, but to how it appears to an outside observer.
The actual implementation is allowed to use whatever tools are necessary to
mimic such behaviour, including locking. The methods of the class are
atomic in the sense that they execute as if they were not divisible into
smaller subroutines: they have either done nothing yet, or they have
already finished, and there is nothing in-between. *This is how they appear
to work from the point of view of other threads, not necessarily how they
actually work.* The easiest way to achieve this is through locking: the
other threads cannot see the in-between state because they are locked out
and forced to wait for the shared resource.
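
As a toy illustration of that last point (my own sketch, not how the JDK
actually implements AtomicInteger, which on common hardware uses
compare-and-set instructions): the class below looks perfectly atomic to
its callers, yet internally it is nothing but a lock.

    public class LockBackedAtomicInt {
        private int value;

        // Callers either see the old value or the new one, never an
        // in-between state, because other threads are locked out.
        public synchronized int incrementAndGet() {
            return ++value;
        }

        public synchronized boolean compareAndSet(int expected, int newValue) {
            if (value != expected) {
                return false;
            }
            value = newValue;
            return true;
        }

        public synchronized int get() {
            return value;
        }
    }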

Fun fact: C++11 lets you check whether the compiled code for an atomic type
uses locking or not, via std::atomic<T>::is_lock_free
<https://en.cppreference.com/w/cpp/atomic/atomic/is_lock_free.html>

Thank you all for coming to my TED talk.

Regards,
Andor