
Two Reasons You Shouldn't Use MongoDB

2011-06-28 22:35
Don't be put off by the author's title. This article is a chance to look at MongoDB from another angle, so that we know how to work around these problems when we actually use it. While reading, I checked the original publication date (14 September 2010), because I was surprised that the write lock is process-level in granularity; I assumed the MongoDB version must have been very old when this was written. In fact, it wasn't: MongoDB was at least 1.6 by then. The point is that while one document is being written, even writes to other documents must wait for the preceding write to release the read/write lock, and of course no read can acquire the lock during that time either. If you're interested, read "How does concurrency work", which covers the read/write lock, JavaScript, multicore, and related issues. (MongoDB is at version 1.8 as I write this.)

Source: http://ethangunderson.com/blog/two-reasons-to-not-use-mongodb/

ADDENDUM
As a couple of people have pointed out, the title of this post is pretty flamey. A bit more flamey than I really intended it to be. The underlying message of this post is that, as a developer, you should be aware of the idiosyncrasies of your potential data store, so that you can adequately make a decision to use one that fits your problem space. Don’t let the title derail that message.

1. Your application is awkwardly write heavy

In Mongo, a single mongod process can only process one write at a time, and issues a server-level read/write lock while doing so. Yup, that's right: while a write is in progress, nothing else can happen.

Since Mongo has some wicked fast writes, this normally isn’t a problem. However, if a write hangs, if your application has large batch inserts, or if you’re inserting a lot of really large documents, this could quickly become an issue. Ideally, this will eventually become more granular, down to the collection level for instance*.

Thankfully, until that happens, there are a couple of other ways to mitigate this. The first, and probably easiest, option is to set up a Replica Set and perform all reads on the slave(s). However, this doesn't stop writes from queueing up. The second option is to set up a sharded environment. This option allows writes to be split up and sent to their respective shards.
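To make the sharding mitigation concrete, here is a minimal sketch (with hypothetical names, not MongoDB's actual router logic) of how hashing a shard key spreads writes across shards, so each shard's global write lock only serializes its own subset of the traffic:

```python
import hashlib

# Stand-ins for three mongod shard servers (hypothetical names).
SHARDS = ["shard0", "shard1", "shard2"]

def pick_shard(shard_key: str) -> str:
    # Hash the shard key and map the digest onto one of the shards.
    digest = hashlib.md5(shard_key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Writes for different keys land on different shards and can proceed in parallel, instead of queueing behind a single server-wide lock.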

2. You don’t understand how Mongo handles durability

Mongo is fast, very fast. However, it achieves those speeds by doing things that are pretty out of the ordinary. These things can potentially be catastrophic if you don’t understand what’s going on.

Any developer looking to use Mongo needs to take a look at its current** stance on single server durability. To sum it up, there isn't any. Instead, developers should be using Replica Sets and sharding to achieve durability. These are things you should be looking at regardless of your data store, but it becomes all the more important to have a proper cluster when you're working with Mongo.

Another key thing to look at is the insertion path. By default, Mongo does not wait for a response when issuing a write. There is no guarantee that the write successfully updated the memory mapped file, that it was fsynced to datafiles, or that it was replicated across the cluster. Luckily, there are a couple of commands available to alleviate this.

All of the drivers implement the getLastError command, commonly known as safe mode. Safe mode will wait for a return code from the database, ensuring that the write was successful. Safe mode also has options for ensuring fsync and replication.
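The difference safe mode makes can be sketched with a toy model (no real MongoDB involved, all names hypothetical): a fire-and-forget insert returns immediately and silently drops failures, while safe mode issues the equivalent of getLastError and waits for the server's verdict before returning:

```python
# Toy model only: contrasts default fire-and-forget inserts with
# "safe mode", which checks the server's last error before returning.
class FakeServer:
    def __init__(self):
        self.docs = []
        self.last_error = None  # what getLastError would report

    def apply_write(self, doc):
        if doc.get("oversized"):        # simulate a rejected write
            self.last_error = "document too large"
        else:
            self.docs.append(doc)
            self.last_error = None

def insert(server, doc, safe=False):
    server.apply_write(doc)
    if safe and server.last_error is not None:
        # Safe mode: the failure surfaces in the application immediately.
        raise RuntimeError(server.last_error)
    # Default mode: return without waiting; failures go unnoticed here.
```

With `safe=False` the bad write is simply lost; with `safe=True` the application sees the error and can retry or report it.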

There is also a general fsync command that can be used to flush everything to datafiles. This can be configured at the server level; by default it is executed every 60 seconds, or when the kernel forces it, whichever comes first.
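The server-level knob the author alludes to is mongod's `syncdelay` option; a sketch of setting it explicitly (60 seconds is the default):

```shell
# Flush memory-mapped data to datafiles every 60 seconds (the default).
mongod --syncdelay 60
```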

In the end, your problem space will dictate whether the cost of performance is acceptable in favor of better durability.

* Apparently this is being addressed in 1.8 or 2.0
** True single server durability will be added in 1.8

Full Disclosure:
I love mongo. So much so that I'm using it in my latest venture, gathers.us, and am giving several presentations on it, one of them being at Mongo Chicago.

While these quirks don’t even come close to outweighing the benefits of using Mongo, they are things that I believe tend to bite new developers to Mongo, and should be given some attention.

Bonus material (from the MongoDB "How does concurrency work" documentation):

Read/Write Lock

mongod uses a read/write lock for many operations. Any number of concurrent read operations are allowed, but typically only one write operation (although some write operations yield and in the future more concurrency will be added). The write lock acquisition is greedy: a pending write lock acquisition will prevent further read lock acquisitions until fulfilled.
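The greedy writer-preference policy described above can be captured in a tiny admission-control sketch (an illustration of the stated policy, not mongod's actual implementation): readers share the lock freely, but once a writer is waiting, new readers are turned away until the writer gets its turn:

```python
# Toy model of a greedy writer-preference read/write lock policy:
# any number of readers may share the lock, but a pending writer
# blocks further read lock acquisitions until it is fulfilled.
def reader_may_start(active_writers: int, pending_writers: int) -> bool:
    return active_writers == 0 and pending_writers == 0

def writer_may_start(active_readers: int, active_writers: int) -> bool:
    # Writes are exclusive: nothing else may hold the lock.
    return active_readers == 0 and active_writers == 0
```

This greediness is why a single slow write can stall reads server-wide: the moment the write is queued, every new read queues behind it.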

The read/write lock is currently global, but collection-level locking is coming soon.

On Javascript

Only one thread in the mongod process executes Javascript at a time (other database operations are often possible concurrently with this).

Multicore

With read operations, it is easy for mongod 1.3+ to saturate all cores. However, because of the read/write lock above, write operations will not yet fully utilize all cores. This will be improved in the future.