php - MongoDB for realtime ajax stuff?


Howdy Stackoverflow people!

So I've been digging into some of these fancy schemaless databases, MongoDB, CouchDB etc. I'm still not sure whether they're up to real-time-ish stuff though, so I thought I'd ask whether anyone has any practical experience with them.

Think of a very dynamic, super-ajaxified webapp that requests different types of data every 5-20 seconds, and whose backend is nothing but PHP or Java. In a case like that it would obviously get heavy on MySQL or a similar DB (with many users), so: would MongoDB/CouchDB run it without breaking a sweat, even with some super-ultra-complex sessions, and without needing clustering/caching solutions etc.? Yes, that's basically my question. If you think the answer is no.. then yes, I know there are plenty of other solutions for this, NodeJS/WebSockets/antigravity/worm-hole super tech, but I'm very interested in MongoDB and the NoSQL thing ATM, and more specifically whether they can handle this kind of load.

Let's say we have 5000 users at the same time, each firing an AJAX request every 5, 10 or 20 seconds that updates various interfaces.
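
To make it a bit more concrete, this is roughly the kind of endpoint the browser would poll every few seconds (just a sketch using the current mongodb/mongodb PHP library; the db/collection/field names are made up):

    <?php
    // poll.php - hit by the browser every 5-20 seconds via AJAX
    require 'vendor/autoload.php'; // mongodb/mongodb library

    $client     = new MongoDB\Client('mongodb://localhost:27017');
    $collection = $client->myapp->dashboards; // placeholder names

    // Fetch the latest state for this user and hand it back as JSON
    $doc = $collection->findOne(['userId' => (int) ($_GET['userId'] ?? 0)]);

    header('Content-Type: application/json');
    echo json_encode($doc ?? []);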

Shoot

"Let's say we have 5000 users at the same time, each firing an AJAX request every 5, 10 or 20 seconds that updates various interfaces."

OK, so to get this straight, you're looking at 250 to 1000 requests per second (5000 users / 20 s ≈ 250, 5000 users / 5 s = 1000)? Yes, MongoDB can handle that.

The real key for performance is going to be whether these are queries, updates or inserts.

For queries, Mongo can probably handle this load. It's really going to come down to the ratio of memory to data size: if you have a server with 1 GB of RAM and 150 GB of data, you're probably not going to get 250 queries/second (with any DB technology). But with reasonable hardware specs, Mongo can hit this speed on a single 64-bit server.
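
A quick way to sanity-check that ratio from PHP (a sketch; dbStats is a standard server command, the database name is made up):

    <?php
    require 'vendor/autoload.php';

    $client = new MongoDB\Client('mongodb://localhost:27017');

    // dbStats reports how big the data and indexes are; compare that to
    // the RAM on the box to estimate whether the working set fits in memory.
    $stats = $client->myapp->command(['dbStats' => 1])->toArray()[0];

    printf(
        "data: %.1f GB, indexes: %.1f GB\n",
        $stats['dataSize'] / 1e9,
        $stats['indexSize'] / 1e9
    );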

If you have 5,000 active users and you're constantly updating existing records, then Mongo will be really fast (on par with updating memcached on a single machine). The reason is that Mongo will likely keep the record in memory, so a user who sends an update every 5 seconds is just modifying an in-memory object.
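
In PHP terms that per-user update is an in-place modification of a document that already exists, something like this (field names are placeholders):

    <?php
    require 'vendor/autoload.php';

    $client     = new MongoDB\Client('mongodb://localhost:27017');
    $collection = $client->myapp->dashboards;

    // The document already exists, so this only rewrites a few fields of a
    // record Mongo most likely already has in memory - no new allocation,
    // and no index growth as long as the updated fields aren't indexed.
    $collection->updateOne(
        ['userId' => 42],
        ['$set' => [
            'lastSeen' => new MongoDB\BSON\UTCDateTime(),
            'status'   => 'online',
        ]]
    );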

If you're constantly inserting new records, then the limitation is really going to be one of throughput: when you write lots of new data, you also force the indexes to grow. So if you're planning on pumping in gigs of new data, you risk saturating disk throughput and you'll need to shard.
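
For contrast, the insert-heavy path appends a new document plus a new entry in every index on the collection each time, which is what eventually pushes you against the disk. Roughly (again, names made up):

    <?php
    require 'vendor/autoload.php';

    $client = new MongoDB\Client('mongodb://localhost:27017');
    $events = $client->myapp->events;

    // One-off: index the field we query by. Every insert below also has to
    // extend this index, not just append the document itself.
    $events->createIndex(['userId' => 1]);

    $events->insertOne([
        'userId'    => 42,
        'payload'   => ['clicks' => 3],
        'createdAt' => new MongoDB\BSON\UTCDateTime(),
    ]);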

So based on your questions, it sounds like you're mostly querying/updating. You'll be writing new records, but not 1000 new records/second. If that's the case, then MongoDB is probably right for you, and it will definitely get you around a lot of the caching concerns.
