Import/export leads to browser crashes if the IndexedDB data size is > 300 MB #88
I can share the schema information and the same data (up to 150 MB) upon request.
@gyanendra2058 There have been browser-specific crash issues regarding bulk deletes before (IE and Safari), but Dexie has workarounds for those. It would be marvelous to get a complete repro of the crashes. We can hopefully work around it by dividing the operations into chunks. In the worst case we might have to split it into several transactions; if so, the API caller must also avoid doing the operations from within transactions. Have you tried the noTransaction option when importing?
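For reference, here is a minimal sketch of what importing with that option could look like, assuming the dexie-export-import addon's importInto() entry point; the chunkSizeBytes value is purely illustrative:

```ts
import Dexie from 'dexie';
import { importInto } from 'dexie-export-import';

// Import an exported blob into an existing database without wrapping the
// whole import in one giant transaction (noTransaction), processing the
// blob in smaller chunks instead.
async function importLargeExport(db: Dexie, blob: Blob): Promise<void> {
  await importInto(db, blob, {
    noTransaction: true,          // avoid a single huge transaction
    chunkSizeBytes: 1024 * 1024,  // illustrative: parse ~1 MB at a time
  });
}
```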
Here are the details of the problem:
Line 532 (the purgeRecordsOlderThanGivenHrs method) is causing the issue (Chrome crashes with an "Aw, Snap!" error) while deleting the records from IDB. The size of the IDB was 2 GB at that time.
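(For context, a hypothetical reconstruction of what such a purge method typically looks like — a range query followed by Collection.delete(); the table and index names here are made up:

```ts
import Dexie from 'dexie';

// Hypothetical sketch: delete every record older than the given number of
// hours in a single Collection.delete() call. On a multi-GB table, this is
// the kind of call that triggered the crash described above.
async function purgeRecordsOlderThanGivenHrs(db: Dexie, hrs: number) {
  const cutoff = Date.now() - hrs * 60 * 60 * 1000;
  await db.table('records').where('createdAt').below(cutoff).delete();
}
```
)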
https://www.dropbox.com/s/azv4wg7clatgspj/dexie-export%202.json?dl=0 (If you are not able to view the file, please provide your email so that I can add you.)
Thank you in advance.
@dfahlander Sorry for being a pest, but did you get a chance to look into the code and the exported IDB files?
Not yet. This is open-source work I do for free. If you need support on this, we have paid support information on our dexie.org contact page. That said, I regard the large-data use case as important and hope that I or someone else will have a chance to look into it eventually.
@dfahlander I tried the bulkDelete() API (passing the primary keys) instead of delete(), and it worked like a charm. I was able to delete a table that was 3 GB in size in 1-2 seconds, and 2.5 GB within 0.5 seconds. So far so good. I will try this with bigger data sets of 6-8 GB as well and will share the results with you. That said, what is the difference between delete() and bulkDelete() that causes such drastic differences? Also, is it better to split a huge data set and perform bulkDelete() in chunks? For example, if my table has 2000 rows, should I do the bulkDelete operation 4 times with 500 keys each? Thank you
Collection.delete() performs bulkDeletes in chunks internally. The difference might be that bulkDelete does it in a single transaction. Maybe it hits some undocumented limit in Chromium on how many operations can be done in a single transaction.
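A minimal sketch of the chunked approach, assuming a hypothetical table "records" with an indexed createdAt field; when called outside a transaction, each bulkDelete() call commits as its own transaction before the next chunk starts:

```ts
import Dexie from 'dexie';

// Delete matching rows in fixed-size chunks: fetch up to `chunkSize`
// primary keys, bulkDelete them, and repeat until nothing matches. This
// keeps each transaction small instead of queueing millions of delete ops.
async function purgeInChunks(db: Dexie, cutoff: number, chunkSize = 500) {
  const records = db.table('records'); // hypothetical table name
  for (;;) {
    const keys = await records
      .where('createdAt')              // hypothetical indexed field
      .below(cutoff)
      .limit(chunkSize)
      .primaryKeys();
    if (keys.length === 0) break;
    await records.bulkDelete(keys);
  }
}
```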
Also note that the Dexie delete() API is also crashing if the size of the data to be deleted is greater than ~700 MB.
Is the delete API tested with huge data sets? Are there any benchmarks for the delete API?