What is synchronized?
The Java Language Specification describes it as follows: the Java programming language provides multiple mechanisms for communicating between threads. The most basic of these is synchronization, which is implemented using monitors. Each object in Java is associated with a monitor, which a thread can lock or unlock. Only one thread at a time may hold a lock on a monitor. Any other thread attempting to lock that monitor is blocked until it can obtain the lock. A thread may lock a particular monitor multiple times (reentrancy); each unlock operation reverses the effect of one lock operation.
synchronized is a synchronization lock attached to a Java object. It is both mutually exclusive and reentrant: mutual exclusion applies across threads, while reentrancy applies within a single thread.
Without reentrancy, a thread that tried to acquire the same lock a second time would deadlock itself. Once synchronized acquires a lock, the release happens automatically, regardless of whether the synchronized code completes normally or throws an exception.
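As a quick illustration of reentrancy, here is a minimal sketch (class and method names are illustrative, not from the original article): the same thread acquires the monitor it already holds without deadlocking.

```java
public class ReentrantDemo {
    public synchronized void outer() {
        inner(); // re-enters the monitor on `this` that outer() already holds
    }

    public synchronized void inner() {
        System.out.println("still holding the lock on " + this);
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer(); // prints normally instead of deadlocking
    }
}
```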
Scope of synchronized:
synchronized can be applied to instance methods, static methods, and code blocks.
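A minimal sketch of the three forms (names are illustrative): an instance method locks `this`, a static method locks the Class object, and a block locks whatever object appears in the parentheses.

```java
public class ScopeDemo {
    private final Object guard = new Object();

    public synchronized void instanceMethod() {
        // locks `this`
    }

    public static synchronized void staticMethod() {
        // locks ScopeDemo.class
    }

    public void block() {
        synchronized (guard) {
            // locks the `guard` object only
        }
    }
}
```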
Properties of the synchronized lock:
1. Atomicity: threads access the synchronized code mutually exclusively, as if running single-threaded, so the guarded code naturally executes atomically (see the counter sketch after this list).
2. Visibility: modifications to shared variables become visible to other threads promptly. This follows from the Java Memory Model rules: "before unlock is performed on a variable, its value must be flushed back to main memory; when lock is performed on a variable, its copy in working memory is cleared, and before the execution engine uses the variable it must be re-initialized from main memory via a load or assign operation."
3. Ordering: code inside a synchronized block is not reordered with code outside it. Code inside the block may still be reordered, but since only one thread executes it at a time, the single-threaded ordering rule ("no matter how instructions are reordered, the single-threaded result must not change") means the reordering has no observable effect.
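A minimal sketch of the atomicity and visibility guarantees (names are illustrative): without the synchronized block, counter++ is a non-atomic read-modify-write and the final value would often end up below 20000; with it, the increments are mutually exclusive and the updated value is visible to both threads.

```java
public class CounterDemo {
    private static int counter = 0;
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                synchronized (lock) {   // mutual exclusion + visibility of `counter`
                    counter++;
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter);    // always prints 20000
    }
}
```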
Verifying the underlying implementation:
public class TestLock {
    public static void main(String[] args) {
        synchronized (TestLock.class) {
            System.out.println("1");
        }
    }
}
Inspecting the bytecode with javap -c TestLock.class gives the following:
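A representative, abbreviated dump is shown below; the exact constant-pool indices and byte offsets depend on the compiler version, but the important part is the monitorenter/monitorexit pair, including the second monitorexit that is reached only through the exception handler.

```
public static void main(java.lang.String[]);
    Code:
       0: ldc           #2    // class TestLock
       2: dup
       3: astore_1
       4: monitorenter        // acquire the monitor
       5: getstatic     #3    // Field java/lang/System.out:Ljava/io/PrintStream;
       8: ldc           #4    // String 1
      10: invokevirtual #5    // Method java/io/PrintStream.println:(Ljava/lang/String;)V
      13: aload_1
      14: monitorexit         // release on the normal path
      15: goto          23
      18: astore_2
      19: aload_1
      20: monitorexit         // release on the exception path
      21: aload_2
      22: athrow
      23: return
    Exception table:
       from    to  target type
           5    15    18   any
          18    21    18   any
```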
Next, the bytecode of a synchronized method:
public class TestLock {
    public synchronized static void main(String[] args) {
        System.out.println("1");
    }
}
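A representative javap -v dump of this version (abbreviated; offsets and indices may vary) shows no monitorenter/monitorexit in the Code attribute. The synchronization is expressed solely through the ACC_SYNCHRONIZED access flag:

```
public static synchronized void main(java.lang.String[]);
    descriptor: ([Ljava/lang/String;)V
    flags: (0x0029) ACC_PUBLIC, ACC_STATIC, ACC_SYNCHRONIZED
    Code:
       0: getstatic     #2   // Field java/lang/System.out:Ljava/io/PrintStream;
       3: ldc           #3   // String 1
       5: invokevirtual #4   // Method java/io/PrintStream.println:(Ljava/lang/String;)V
       8: return
```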
As the bytecode shows, a synchronized method contains no monitorenter or monitorexit instructions; in their place is the ACC_SYNCHRONIZED flag, which marks the method as synchronized. The JVM checks this access flag to decide whether a method is declared synchronized and, if so, performs the corresponding synchronized method invocation.
Before JDK 1.6, both synchronized blocks and synchronized methods were implemented by acquiring and releasing a Monitor. The wait, notify, and notifyAll methods also rely on internal operations of the Monitor object, which is why they must be called from inside a synchronized method or block (the object's lock has to be held before they can run); otherwise a java.lang.IllegalMonitorStateException is thrown.
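A minimal sketch of that rule (names are illustrative): both wait() and notify() are invoked while holding the monitor of lock; calling either of them outside the synchronized block would throw IllegalMonitorStateException.

```java
public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {          // must own the monitor before wait()
                while (!ready) {
                    try {
                        lock.wait();       // releases the monitor while waiting
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("woken up");
            }
        });
        waiter.start();
        Thread.sleep(100);
        synchronized (lock) {              // must own the monitor before notify()
            ready = true;
            lock.notify();
        }
    }
}
```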
Before JDK 1.6, synchronized was a heavyweight lock and performed poorly, because the Monitor is built on the operating system's mutex primitive: locking called the Monitor's enter and unlocking called its exit. Since Java threads are mapped onto native OS threads, every Monitor operation needs help from the operating system, which forces the thread to switch back and forth between user mode and kernel mode. These transitions take a relatively long time and hurt performance significantly.
Fortunately, since JDK 1.6 the JVM has optimized synchronized heavily, so its performance is now quite good. To reduce the cost of acquiring and releasing locks and to avoid the heavyweight lock where possible, JDK 1.6 introduced lightweight locks and biased locks, neither of which relies on Monitor operations.
Biased locking:
A biased lock is the mechanism used while only a single thread executes the synchronized block; once multiple threads contend, it is always upgraded to a lightweight or heavyweight lock.
The main purpose of biased locking is to avoid the unnecessary lightweight-lock path when there is no multi-threaded contention: lightweight and heavyweight locking and unlocking need multiple CAS atomic instructions, whereas a biased lock needs only one CAS, when it swaps the owning thread ID into the object header.
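The bias can be observed in the object header. Below is a minimal sketch (not from the original article), assuming the JOL library (org.openjdk.jol:jol-core) is on the classpath; on JDK 8-14 run with -XX:BiasedLockingStartupDelay=0 so objects are biasable right away, and note that biased locking is disabled by default since JDK 15 (JEP 374).

```java
import org.openjdk.jol.info.ClassLayout;

public class MarkWordDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        // Freshly allocated object: the mark word shows the "biasable" pattern (101)
        // with no owner thread recorded yet.
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        synchronized (lock) {
            // First, uncontended acquisition: the owning thread ID is CASed into
            // the mark word, i.e. the object is now biased toward this thread.
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}
```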
Biased-lock revocation (the code below is HotSpot's BiasedLocking::revoke, the path that removes an object's bias):
void BiasedLocking::revoke(Handle obj, TRAPS) {
assert(!SafepointSynchronize::is_at_safepoint(), "must not be called while at safepoint");
while (true) {
// We can revoke the biases of anonymously-biased objects
// efficiently enough that we should not cause these revocations to
// update the heuristics because doing so may cause unwanted bulk
// revocations (which are expensive) to occur.
//Fetch the object's mark word
markWord mark = obj->mark();
//Return if the mark word no longer carries the bias pattern
if (!mark.has_bias_pattern()) {
return;
}
if (mark.is_biased_anonymously()) {
// We are probably trying to revoke the bias of this object due to
// an identity hash code computation. Try to revoke the bias
// without a safepoint. This is possible if we can successfully
// compare-and-exchange an unbiased header into the mark word of
// the object, meaning that no other thread has raced to acquire
// the bias of the object.
markWord biased_value = mark;
markWord unbiased_prototype = markWord::prototype().set_age(mark.age());
//CAS an unbiased header into the mark word
markWord res_mark = obj->cas_set_mark(unbiased_prototype, mark);
if (res_mark == biased_value) {
return;
}
mark = res_mark; // Refresh mark with the latest value.
} else {
Klass* k = obj->klass();
markWord prototype_header = k->prototype_header();
if (!prototype_header.has_bias_pattern()) {
// This object has a stale bias from before the bulk revocation
// for this data type occurred. It's pointless to update the
// heuristics at this point so simply update the header with a
// CAS. If we fail this race, the object's bias has been revoked
// by another thread so we simply return and let the caller deal
// with it.
obj->cas_set_mark(prototype_header.set_age(mark.age()), mark);
assert(!obj->mark().has_bias_pattern(), "even if we raced, should still be revoked");
return;
} else if (prototype_header.bias_epoch() != mark.bias_epoch()) {
// The epoch of this biasing has expired indicating that the
// object is effectively unbiased. We can revoke the bias of this
// object efficiently enough with a CAS that we shouldn't update the
// heuristics. This is normally done in the assembly code but we
// can reach this point due to various points in the runtime
// needing to revoke biases.
markWord res_mark;
markWord biased_value = mark;
markWord unbiased_prototype = markWord::prototype().set_age(mark.age());
res_mark = obj->cas_set_mark(unbiased_prototype, mark);
if (res_mark == biased_value) {
return;
}
mark = res_mark; // Refresh mark with the latest value.
}
}
HeuristicsResult heuristics = update_heuristics(obj());
if (heuristics == HR_NOT_BIASED) {
return;
} else if (heuristics == HR_SINGLE_REVOKE) {
JavaThread *blt = mark.biased_locker();
assert(blt != NULL, "invariant");
if (blt == THREAD) {
// A thread is trying to revoke the bias of an object biased
// toward it, again likely due to an identity hash code
// computation. We can again avoid a safepoint/handshake in this case
// since we are only going to walk our own stack. There are no
// races with revocations occurring in other threads because we
// reach no safepoints in the revocation path.
EventBiasedLockSelfRevocation event;
ResourceMark rm;
walk_stack_and_revoke(obj(), blt);
blt->set_cached_monitor_info(NULL);
assert(!obj->mark().has_bias_pattern(), "invariant");
if (event.should_commit()) {
post_self_revocation_event(&event, obj->klass());
}
return;
} else {
BiasedLocking::Condition cond = single_revoke_with_handshake(obj, (JavaThread*)THREAD, blt);
if (cond != NOT_REVOKED) {
return;
}
}
} else {
assert((heuristics == HR_BULK_REVOKE) ||
(heuristics == HR_BULK_REBIAS), "?");
EventBiasedLockClassRevocation event;
VM_BulkRevokeBias bulk_revoke(&obj, (JavaThread*)THREAD,
(heuristics == HR_BULK_REBIAS));
VMThread::execute(&bulk_revoke);
if (event.should_commit()) {
post_class_revocation_event(&event, obj->klass(), &bulk_revoke);
}
return;
}
}
}
The bias flag is stored in the mark word of the object header, and the HotSpot source documents the lock flag bits. At run time, the data stored in the Mark Word changes as the lock flag bits change. On a 32-bit VM, the approximate layout of the Mark Word in each state is shown in the table below:
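The table below is reconstructed from the layout commonly documented in HotSpot's markWord.hpp (markOop.hpp in older JDKs) for a 32-bit VM; the exact bit grouping can differ slightly between JDK versions.

| Lock state | Upper bits (payload) | Biased bit | Lock bits |
|---|---|---|---|
| Unlocked | identity hash code (25 bits) + GC age (4 bits) | 0 | 01 |
| Biased | owner thread ID (23 bits) + epoch (2 bits) + GC age (4 bits) | 1 | 01 |
| Lightweight locked | pointer to the lock record on the owner's stack (30 bits) | n/a | 00 |
| Heavyweight locked | pointer to the inflated ObjectMonitor (30 bits) | n/a | 10 |
| GC mark | forwarding/marking information (30 bits) | n/a | 11 |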
Lightweight and heavyweight lock acquisition:
The source is in src/hotspot/share/runtime/objectMonitor.cpp.
//Entry point for lightweight (spinning) and heavyweight (blocking) lock acquisition
void ObjectMonitor::enter(TRAPS) {
// The following code is ordered to check the most common cases first
// and to reduce RTS->RTO cache line upgrades on SPARC and IA32 processors.
//Get the current thread
Thread * const Self = THREAD;
//CAS the _owner field from NULL to the current thread
void * cur = Atomic::cmpxchg(&_owner, (void*)NULL, Self);
//There was no previous owner and the CAS succeeded: the current thread now owns the monitor
if (cur == NULL) {
//Fresh acquisition, so the recursion count must be 0 (_recursions == 0)
assert(_recursions == 0, "invariant");
return;
}
//The previous owner is already the current thread: reentrant acquisition
if (cur == Self) {
// TODO-FIXME: check for integer overflow! BUGID 6557169.
//Increment the recursion (reentrancy) count
_recursions++;
return;
}
if (Self->is_lock_owned((address)cur)) {
assert(_recursions == 0, "internal state error");
//The lock was held by this thread via a stack BasicLock; set the recursion count to 1
_recursions = 1;
// Commute owner from a thread-specific on-stack BasicLockObject address to
// a full-fledged "Thread *".
//Update _owner to the current thread
_owner = Self;
return;
}
//Start competing for the lock
// We've encountered genuine contention.
assert(Self->_Stalled == 0, "invariant");
Self->_Stalled = intptr_t(this);
//Try spinning before enqueueing
// Try one round of spinning *before* enqueueing Self
// and before going through the awkward and expensive state
// transitions. The following spin is strictly optional ...
// Note that if we acquire the monitor from an initial spin
// we forgo posting JVMTI events and firing DTRACE probes.
if (TrySpin(Self) > 0) {
assert(_owner == Self, "must be Self: owner=" INTPTR_FORMAT, p2i(_owner));
assert(_recursions == 0, "must be 0: recursions=" INTX_FORMAT, _recursions);
assert(((oop)object())->mark() == markWord::encode(this),
"object mark must match encoded this: mark=" INTPTR_FORMAT
", encoded this=" INTPTR_FORMAT, ((oop)object())->mark().value(),
markWord::encode(this).value());
Self->_Stalled = 0;
return;
}
// Spinning did not succeed; fall into the heavyweight (blocking) path
assert(_owner != Self, "invariant");
assert(_succ != Self, "invariant");
assert(Self->is_Java_thread(), "invariant");
JavaThread * jt = (JavaThread *) Self;
assert(!SafepointSynchronize::is_at_safepoint(), "invariant");
assert(jt->thread_state() != _thread_blocked, "invariant");
assert(this->object() != NULL, "invariant");
assert(_contentions >= 0, "invariant");
// Prevent deflation at STW-time. See deflate_idle_monitors() and is_busy().
// Ensure the object-monitor relationship remains stable while there's contention.
// Increment the contention count _contentions
Atomic::inc(&_contentions);
JFR_ONLY(JfrConditionalFlushWithStacktrace<EventJavaMonitorEnter> flush(jt);)
EventJavaMonitorEnter event;
if (event.should_commit()) {
// Record the class of the locked object
event.set_monitorClass(((oop)this->object())->klass());
//Record the object's memory address
event.set_address((uintptr_t)(this->object_addr()));
}
{ // Change java thread status to indicate blocked on monitor enter.
//Mark the current thread as blocked on this monitor
JavaThreadBlockedOnMonitorEnterState jtbmes(jt, this);
Self->set_current_pending_monitor(this);
DTRACE_MONITOR_PROBE(contended__enter, this, object(), jt);
if (JvmtiExport::should_post_monitor_contended_enter()) {
JvmtiExport::post_monitor_contended_enter(jt, this);
// The current thread does not yet own the monitor and does not
// yet appear on any queues that would get it made the successor.
// This means that the JVMTI_EVENT_MONITOR_CONTENDED_ENTER event
// handler cannot accidentally consume an unpark() meant for the
// ParkEvent associated with this ObjectMonitor.
}
OSThreadContendState osts(Self->osthread());
ThreadBlockInVM tbivm(jt);
// TODO-FIXME: change the following for(;;) loop to straight-line code.
for (;;) {
jt->set_suspend_equivalent();
// cleared by handle_special_suspend_equivalent_condition()
// or java_suspend_self()
//Acquire the heavyweight lock (slow path)
EnterI(THREAD);
if (!ExitSuspendEquivalent(jt)) break;
// We have acquired the contended monitor, but while we were
// waiting another thread suspended us. We don't want to enter
// the monitor while suspended because that would surprise the
// thread that suspended us.
//
_recursions = 0;
_succ = NULL;
exit(false, Self);
jt->java_suspend_self();
}
Self->set_current_pending_monitor(NULL);
// We cleared the pending monitor info since we've just gotten past
// the enter-check-for-suspend dance and we now own the monitor free
// and clear, i.e., it is no longer pending. The ThreadBlockInVM
// destructor can go to a safepoint at the end of this block. If we
// do a thread dump during that safepoint, then this thread will show
// as having "-locked" the monitor, but the OS and java.lang.Thread
// states will still report that the thread is blocked trying to
// acquire it.
}
Atomic::dec(&_contentions);
assert(_contentions >= 0, "invariant");
Self->_Stalled = 0;
// Must either set _recursions = 0 or ASSERT _recursions == 0.
assert(_recursions == 0, "invariant");
assert(_owner == Self, "invariant");
assert(_succ != Self, "invariant");
//
assert(((oop)(object()))->mark() == markWord::encode(this), "invariant");
// The thread -- now the owner -- is back in vm mode.
// Report the glorious news via TI,DTrace and jvmstat.
// The probe effect is non-trivial. All the reportage occurs
// while we hold the monitor, increasing the length of the critical
// section. Amdahl's parallel speedup law comes vividly into play.
//
// Another option might be to aggregate the events (thread local or
// per-monitor aggregation) and defer reporting until a more opportune
// time -- such as next time some thread encounters contention but has
// yet to acquire the lock. While spinning that thread could
// spinning we could increment JVMStat counters, etc.
DTRACE_MONITOR_PROBE(contended__entered, this, object(), jt);
if (JvmtiExport::should_post_monitor_contended_entered()) {
JvmtiExport::post_monitor_contended_entered(jt, this);
// The current thread already owns the monitor and is not going to
// call park() for the remainder of the monitor enter protocol. So
// it doesn't matter if the JVMTI_EVENT_MONITOR_CONTENDED_ENTERED
// event handler consumed an unpark() issued by the thread that
// just exited the monitor.
}
if (event.should_commit()) {
event.set_previousOwner((uintptr_t)_previous_owner_tid);
event.commit();
}
OM_PERFDATA_OP(ContendedLockAttempts, inc());
}
The EnterI method, with parts of the code omitted:
void ATTR ObjectMonitor::EnterI (TRAPS) {
Thread * Self = THREAD ;
// TryLock: try to grab the lock directly
if (TryLock (Self) > 0) {
// Acquired: return immediately and avoid the cost of park/unpark scheduling
return ;
}
// Spin to acquire the lock
if (TrySpin(Self) > 0) {
return;
}
// Wrap the current thread in an ObjectWaiter node and set its state to ObjectWaiter::TS_CXQ
ObjectWaiter node(Self) ;
Self->_ParkEvent->reset() ;
node._prev = (ObjectWaiter *) 0xBAD ;
node.TState = ObjectWaiter::TS_CXQ ;
// CAS the node onto the head of the _cxq list
ObjectWaiter * nxt ;
for (;;) {
node._next = nxt = _cxq ;
if (Atomic::cmpxchg_ptr (&node, &_cxq, nxt) == nxt) break ;
// Try the lock again
if (TryLock (Self) > 0) {
return ;
}
}
for (;;) {
// The idea of this loop is similar to AQS and can be read by analogy
// Try again
if (TryLock (Self) > 0) break ;
assert (_owner != Self, "invariant") ;
if ((SyncFlags & 2) && _Responsible == NULL) {
Atomic::cmpxchg_ptr (Self, &_Responsible, NULL) ;
}
// If the conditions hold, park self with a timeout
if (_Responsible == Self || (SyncFlags & 1)) {
TEVENT (Inflated enter - park TIMED) ;
Self->_ParkEvent->park ((jlong) RecheckInterval) ;
// Increase the RecheckInterval, but clamp the value.
RecheckInterval *= 8 ;
if (RecheckInterval > 1000) RecheckInterval = 1000 ;
} else {
TEVENT (Inflated enter - park UNTIMED) ;
// Park (suspend) the current thread until it is unparked
Self->_ParkEvent->park() ;
}
if (TryLock(Self) > 0) break ;
// Try spinning again
if ((Knob_SpinAfterFutile & 1) && TrySpin(Self) > 0) break;
}
return ;
}
As the code above shows, the lightweight path essentially spends some CPU spinning first. The spinning is adaptive: the number of attempts a thread needed to acquire the lock is recorded and used as a reference for the spin count next time. If the lock is obtained within the spin limit, there is no need to upgrade to the heavyweight lock.
Lock release flow:
void ObjectMonitor::exit(bool not_suspended, TRAPS) {
Thread * const Self = THREAD;
//Check lock ownership
if (THREAD != _owner) {
if (THREAD->is_lock_owned((address) _owner)) {
// Transmute _owner from a BasicLock pointer to a Thread address.
// We don't need to hold _mutex for this transition.
// Non-null to Non-null is safe as long as all readers can
// tolerate either flavor.
assert(_recursions == 0, "invariant");
_owner = THREAD;
_recursions = 0;
} else {
// Apparent unbalanced locking ...
// Naively we'd like to throw IllegalMonitorStateException.
// As a practical matter we can neither allocate nor throw an
// exception as ::exit() can be called from leaf routines.
// see x86_32.ad Fast_Unlock() and the I1 and I2 properties.
// Upon deeper reflection, however, in a properly run JVM the only
// way we should encounter this situation is in the presence of
// unbalanced JNI locking. TODO: CheckJNICalls.
// See also: CR4414101
#ifdef ASSERT
LogStreamHandle(Error, monitorinflation) lsh;
lsh.print_cr("ERROR: ObjectMonitor::exit(): thread=" INTPTR_FORMAT
" is exiting an ObjectMonitor it does not own.", p2i(THREAD));
lsh.print_cr("The imbalance is possibly caused by JNI locking.");
print_debug_style_on(&lsh);
#endif
assert(false, "Non-balanced monitor enter/exit!");
return;
}
}
//If the lock is still held reentrantly, just decrement the recursion count
if (_recursions != 0) {
_recursions--; // this is simple recursive enter
return;
}
// Invariant: after setting Responsible=null an thread must execute
// a MEMBAR or other serializing instruction before fetching EntryList|cxq.
_Responsible = NULL;
#if INCLUDE_JFR
// get the owner's thread id for the MonitorEnter event
// if it is enabled and the thread isn't suspended
if (not_suspended && EventJavaMonitorEnter::is_enabled()) {
//Record the thread id of the previous owner (for the JFR event)
_previous_owner_tid = JFR_THREAD_ID(Self);
}
#endif
for (;;) {
assert(THREAD == _owner, "invariant");
// release semantics: prior loads and stores from within the critical section
// must not float (reorder) past the following store that drops the lock.
//Set _owner to NULL: drop the lock
Atomic::release_store(&_owner, (void*)NULL); // drop the lock
//StoreLoad barrier: the reads that follow see the release and all writes made in the critical section
OrderAccess::storeload();
// See if we need to wake a successor
if ((intptr_t(_EntryList)|intptr_t(_cxq)) == 0 || _succ != NULL) {
return;
}
// Other threads are blocked trying to acquire the lock.
// Normally the exiting thread is responsible for ensuring succession, but
// if another successor is ready or other entering threads are spinning,
// this thread could simply store NULL into _owner and leave without waking
// anyone. Here it re-acquires the lock so it can safely pick a successor;
// if the CAS back into _owner fails, another thread has already grabbed
// the lock and succession becomes that thread's responsibility.
if (!Atomic::replace_if_null(&_owner, THREAD)) {
return;
}
guarantee(_owner == THREAD, "invariant");
ObjectWaiter * w = NULL;
w = _EntryList;
if (w != NULL) {
// I'd like to write: guarantee (w->_thread != Self).
// But in practice an exiting thread may find itself on the EntryList.
// Let's say thread T1 calls O.wait(). Wait() enqueues T1 on O's waitset and
// then calls exit(). Exit release the lock by setting O._owner to NULL.
// Let's say T1 then stalls. T2 acquires O and calls O.notify(). The
// notify() operation moves T1 from O's waitset to O's EntryList. T2 then
// release the lock "O". T2 resumes immediately after the ST of null into
// _owner, above. T2 notices that the EntryList is populated, so it
// reacquires the lock and then finds itself on the EntryList.
// Given all that, we have to tolerate the circumstance where "w" is
// associated with Self.
assert(w->TState == ObjectWaiter::TS_ENTER, "invariant");
ExitEpilog(Self, w);
return;
}
// If we find that both _cxq and EntryList are null then just
// re-run the exit protocol from the top.
w = _cxq;
if (w == NULL) continue;
// Drain _cxq into EntryList - bulk transfer.
// First, detach _cxq.
// The following loop is tantamount to: w = swap(&cxq, NULL)
for (;;) {
assert(w != NULL, "Invariant");
ObjectWaiter * u = Atomic::cmpxchg(&_cxq, w, (ObjectWaiter*)NULL);
if (u == w) break;
w = u;
}
assert(w != NULL, "invariant");
assert(_EntryList == NULL, "invariant");
// Convert the LIFO SLL anchored by _cxq into a DLL.
// The list reorganization step operates in O(LENGTH(w)) time.
// It's critical that this step operate quickly as
// "Self" still holds the outer-lock, restricting parallelism
// and effectively lengthening the critical section.
// Invariant: s chases t chases u.
// TODO-FIXME: consider changing EntryList from a DLL to a CDLL so
// we have faster access to the tail.
_EntryList = w;
ObjectWaiter * q = NULL;
ObjectWaiter * p;
for (p = w; p != NULL; p = p->_next) {
guarantee(p->TState == ObjectWaiter::TS_CXQ, "Invariant");
p->TState = ObjectWaiter::TS_ENTER;
p->_prev = q;
q = p;
}
// In 1-0 mode we need: ST EntryList; MEMBAR #storestore; ST _owner = NULL
// The MEMBAR is satisfied by the release_store() operation in ExitEpilog().
// See if we can abdicate to a spinner instead of waking a thread.
// A primary goal of the implementation is to reduce the
// context-switch rate.
if (_succ != NULL) continue;
w = _EntryList;
if (w != NULL) {
guarantee(w->TState == ObjectWaiter::TS_ENTER, "invariant");
ExitEpilog(Self, w);
return;
}
}
}
From the source, the synchronized lock-acquisition flow can be summarized as: check and, if necessary, revoke the bias, spin for a while (the lightweight path), and finally enqueue on the monitor and park (the heavyweight path).
Reading the synchronized source shows how clever the implementation is, and the JDK developers keep optimizing it, contributing a great deal to synchronized's steadily improving performance. As a developer, I really admire this pursuit of excellence.