Effective Objective-C 2.0: Item 41: Prefer Dispatch Queues to Locks for Synchronization
2015-12-02 19:27
Sometimes in Objective-C, you will come across code that you’re having trouble with because it’s being accessed from multiple threads. This situation usually calls for the application of some sort of synchronization through the
use of locks. Before GCD, there were two ways to achieve this, the first being the built-in synchronization block:
- (void)synchronizedMethod {
    @synchronized(self) {
        // Safe
    }
}
This construct automatically creates a lock based on the given object and waits on that lock until it executes the code contained in the block. At the end of the code block, the lock is released. In the example, the object being synchronized against is self. This construct is often a good choice, as it ensures that each instance of the object can run its own synchronizedMethod independently.
However, overuse of @synchronized(self) can lead to inefficient code, as each synchronized block will execute serially across all such blocks. If you overuse synchronization against self, you can end up with code waiting unnecessarily on a lock held by unrelated code.
The other approach is to use the NSLock object directly:
_lock = [[NSLock alloc] init];

- (void)synchronizedMethod {
    [_lock lock];
    // Safe
    [_lock unlock];
}
Recursive locks are also available through NSRecursiveLock, allowing one thread to take out the same lock multiple times without causing a deadlock.
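The reentrancy that NSRecursiveLock provides can be sketched in Python (an analogy only; threading.RLock is Python's recursive lock, not an Objective-C API): the thread that already holds the lock may acquire it again without deadlocking.

```python
import threading

# Analogy for NSRecursiveLock: threading.RLock lets the thread that
# already holds the lock acquire it again, as long as each acquire is
# matched by a release. A plain threading.Lock would deadlock here.
lock = threading.RLock()

def outer():
    with lock:        # first acquisition
        return inner()

def inner():
    with lock:        # re-acquisition by the same thread: no deadlock
        return "safe"

print(outer())  # -> safe
```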
Both of these approaches are fine but come with their own drawbacks. For example, synchronization blocks can suffer from deadlock under extreme circumstances and are not necessarily efficient. Direct use of locks requires care as well: it is easy to introduce a deadlock through inconsistent lock ordering.
The alternative is to use GCD, which can provide locking in a much simpler and more efficient manner. Properties are a good example of where developers find the need to put synchronization, known as making the property atomic. This can be achieved through use of the atomic property attribute (see Item 6). Or, if the accessors need to be written manually, the following is often seen:
- (NSString*)someString {
    @synchronized(self) {
        return _someString;
    }
}

- (void)setSomeString:(NSString*)someString {
    @synchronized(self) {
        _someString = someString;
    }
}
Recall that @synchronized(self) is dangerous if overused, because all such blocks will be synchronized with respect to one another. If multiple properties do that, each will be synchronized with respect to all others, which is probably not what you want. All you really want is that access to each property be synchronized individually.
As an aside, you should be aware that although this goes some way to ensuring thread safety, it does not ensure absolute thread safety of the object. Rather, access to the property is atomic. You are guaranteed to get valid results when using the property, but if you call the getter multiple times from the same thread, you may not necessarily get the same result each time. Other threads may have written to the property between accesses.
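The point can be sketched in Python (an analogy, not Objective-C; AtomicValue is invented for illustration): each individual get and set below is synchronized with a lock, the analogue of an atomic property, yet a read-modify-write sequence composed of the two can still lose updates, because other threads may run between the accesses.

```python
import threading

# Analogy for an atomic property: every individual read and write is
# protected by a lock, but that alone does not make compound
# operations thread-safe.
class AtomicValue:
    def __init__(self, value=0):
        self._lock = threading.Lock()
        self._value = value

    def get(self):
        with self._lock:
            return self._value

    def set(self, value):
        with self._lock:
            self._value = value

counter = AtomicValue()

def unsafe_increment(n):
    for _ in range(n):
        # Another thread can write between this get and set, so some
        # increments may be silently overwritten.
        counter.set(counter.get() + 1)

threads = [threading.Thread(target=unsafe_increment, args=(50_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The total is at most 200,000 and, under contention, often less.
print(counter.get())
```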
A simple and effective alternative to synchronization blocks or lock objects is to use a
serial synchronization queue. Dispatching reads and writes onto the same queue ensures synchronization.
Doing so looks like this:
_syncQueue =
    dispatch_queue_create("com.effectiveobjectivec.syncQueue", NULL);

- (NSString*)someString {
    __block NSString *localSomeString;
    dispatch_sync(_syncQueue, ^{
        localSomeString = _someString;
    });
    return localSomeString;
}

- (void)setSomeString:(NSString*)someString {
    dispatch_sync(_syncQueue, ^{
        _someString = someString;
    });
}
The idea behind this pattern is that all access to the property is synchronized because the GCD queue that both the setter and the getter run on is a serial queue. Apart from the __block syntax in the getter, required to allow the block to set the variable (see Item 37), this approach is much neater. All the locking is handled down in GCD, which is implemented at a very low level and contains many optimizations. Thus, you don’t have to worry about that side of things and can instead focus on writing your accessor code.
However, we can go one step further. The setter does not have to be synchronous.
The block that sets the instance variable does not need to return anything to the setter method. This means that you can change the setter method to look like this:
- (void)setSomeString:(NSString*)someString {
    dispatch_async(_syncQueue, ^{
        _someString = someString;
    });
}
The simple change from synchronous dispatch to asynchronous provides the benefit that the setter is fast from the caller’s perspective, but reading and writing are still executed serially with respect to each other. One downside, though, is that if you were to benchmark this, you might find that it’s slower; with asynchronous dispatch, the block has to be copied. If the time taken to perform the copy is significant compared to the time the block takes to execute, it will be slower. So in our simple example, it’s likely to be slower. However, the approach is still good to understand as a potential candidate if the block that is being dispatched performs much heavier tasks.
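The serial-queue pattern, including the asynchronous setter, can be sketched in Python (an analogy: a single worker thread stands in for the serial dispatch queue; SyncString and its methods are invented for illustration, not GCD API):

```python
from concurrent.futures import ThreadPoolExecutor

# A single-worker executor processes submitted blocks one at a time, in
# submission order -- the role the serial dispatch queue plays here.
class SyncString:
    def __init__(self):
        self._queue = ThreadPoolExecutor(max_workers=1)  # "serial queue"
        self._value = None

    def get(self):
        # Like dispatch_sync: enqueue the read and wait for its result.
        return self._queue.submit(lambda: self._value).result()

    def set(self, value):
        # Like dispatch_async: enqueue the write and return immediately.
        self._queue.submit(self._assign, value)

    def _assign(self, value):
        self._value = value

s = SyncString()
s.set("hello")
print(s.get())  # -> hello (the read is queued behind the write)
```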
Another way to make this approach even faster is to take advantage
of the fact that the getters can run concurrently with one another but not with the setter. This is where the GCD approach comes into its own. The following cannot be easily done with synchronization blocks or locks. Instead of using a serial queue,
consider what would happen if you used a concurrent queue:
_syncQueue =
    dispatch_queue_create("com.effectiveobjectivec.syncQueue",
                          DISPATCH_QUEUE_CONCURRENT);

- (NSString*)someString {
    __block NSString *localSomeString;
    dispatch_sync(_syncQueue, ^{
        localSomeString = _someString;
    });
    return localSomeString;
}

- (void)setSomeString:(NSString*)someString {
    dispatch_async(_syncQueue, ^{
        _someString = someString;
    });
}
As it stands, that code would not work for synchronization. All reads and writes are executed on the same queue, but that queue being concurrent, reads and writes can all happen at the same time. This is what we were trying to stop from happening in the first place! However, a simple GCD feature, called a barrier, is available and can solve this. The functions for enqueuing barrier blocks on a queue are as follows:
void dispatch_barrier_async(dispatch_queue_t queue,
                            dispatch_block_t block);
void dispatch_barrier_sync(dispatch_queue_t queue,
                           dispatch_block_t block);
A barrier is executed exclusively with respect to all other blocks on that queue. Barriers are relevant only on concurrent queues, since all blocks on a serial queue are always executed exclusively with respect to one another. When a queue is processed and the next block is a barrier block, the queue waits for all current blocks to finish and then executes the barrier block. When the barrier block finishes executing, processing of the queue continues as normal.
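The barrier semantics just described can be sketched in Python (an analogy built on a condition variable; BarrierQueue is invented for illustration and, unlike GCD, this simplified version does not preserve FIFO ordering, so a constant stream of reads could starve a writer):

```python
import threading

# Readers run concurrently with one another; a "barrier" block waits
# for in-flight readers to drain and then runs exclusively.
class BarrierQueue:
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._barrier_running = False

    def read(self, fn):
        with self._cond:
            while self._barrier_running:   # wait out any barrier block
                self._cond.wait()
            self._readers += 1
        try:
            return fn()                    # reads may overlap here
        finally:
            with self._cond:
                self._readers -= 1
                self._cond.notify_all()

    def barrier(self, fn):
        with self._cond:
            # Wait for all current blocks to finish, like a GCD barrier.
            while self._barrier_running or self._readers:
                self._cond.wait()
            self._barrier_running = True
        try:
            fn()                           # runs exclusively
        finally:
            with self._cond:
                self._barrier_running = False
                self._cond.notify_all()

store = {"someString": None}
q = BarrierQueue()
q.barrier(lambda: store.__setitem__("someString", "hello"))
print(q.read(lambda: store["someString"]))  # -> hello
```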
Barriers can be used with the property example in the setter. If the setter uses a barrier block, reads of the property will still execute concurrently, but writes will execute exclusively. Figure 6.3 illustrates the queue with many reads and a single write queued.
Figure 6.3 Concurrent queue with reads as normal blocks and writes as barrier blocks. Reads are executed concurrently; writes are executed exclusively, as they are barriers.
The code to achieve this is simple:
// Note: barriers take effect only on a privately created concurrent
// queue; on a global concurrent queue, dispatch_barrier_async behaves
// like a plain dispatch_async.
_syncQueue =
    dispatch_queue_create("com.effectiveobjectivec.syncQueue",
                          DISPATCH_QUEUE_CONCURRENT);

- (NSString*)someString {
    __block NSString *localSomeString;
    dispatch_sync(_syncQueue, ^{
        localSomeString = _someString;
    });
    return localSomeString;
}

- (void)setSomeString:(NSString*)someString {
    dispatch_barrier_async(_syncQueue, ^{
        _someString = someString;
    });
}
If you were to benchmark this, you would likely find it quicker than using a serial queue. Note that you could also use a synchronous barrier in the setter, which may be more efficient for the same reason as explained before. It would be prudent to benchmark each approach and choose the one that is best for your specific scenario.
Things to Remember
- Dispatch queues can be used to provide synchronization semantics and offer a simpler alternative to @synchronized blocks or NSLock objects.
- Mixing synchronous and asynchronous dispatches can provide the same synchronized behavior as with normal locking but without blocking the calling thread in the asynchronous dispatches.
- Concurrent queues and barrier blocks can be used to make synchronized behavior more efficient.