Android Sqlite get last insert row id - Stack Overflow
I want to get the last inserted row ID in my Android application using this code:
String query = "SELECT * from SQLITE_SEQUENCE";
int createdUserId = Integer.parseInt(dbHelper.executeSQLQuery(query).toString());
but the problem is that it throws an exception saying it cannot convert dbHelper.executeSQLQuery(query).toString() to an integer. I'm not really good at sqlite, but I think this should return the last row id that was inserted, which will definitely be an int (at least I think so). So if this is not the right way, can someone please guide me on how to get the last row id in an Android application?
Your SQL statement will return all the row ids, not just the latest. Try something like this...
SELECT ROWID from SQLITE_SEQUENCE order by ROWID DESC limit 1
Also note that I believe selecting from SQLITE_SEQUENCE will get the latest ID from ANY table; you can also get just the IDs for a specific table by selecting ROWID on that table, i.e.
SELECT ROWID from MYTABLE order by ROWID DESC limit 1
And thanks to MisterSquonk for pointing out the next step in the comments; adding it here for ease of reference later...
The query will return a Cursor object containing the results, so to access the integer value you would do something like this (I'll substitute more common methods for your helper method, for others' sake):
String query = "SELECT ROWID from MYTABLE order by ROWID DESC limit 1";
Cursor c = db.rawQuery(query, null); // rawQuery takes (sql, selectionArgs)
long lastId = 0;
if (c != null && c.moveToFirst()) {
    lastId = c.getLong(0); // 0 is the column index; we only have one column
}
c.close();
(Note that although the SQLite docs call ROWID an Integer, it is a 64-bit integer, so in Java it should be retrieved as a long.)
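A related option, as a minimal sketch rather than part of the original answer: Android's insert() returns the new rowid directly, and SQLite's built-in last_insert_rowid() function gives the same value on the same connection. This assumes db is an open SQLiteDatabase and values is a populated ContentValues (not shown here):
// insert() returns the rowid of the newly inserted row, or -1 on error.
long rowId = db.insert("MYTABLE", null, values);
// Or query SQLite's built-in function right after the insert; it is
// scoped to this connection, so other writers don't interfere.
long lastId = db.compileStatement("SELECT last_insert_rowid()").simpleQueryForLong();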
What are the performance characteristics of sqlite with very large database files? - Stack Overflow
I know that sqlite doesn't perform well with extremely large database files even when they are supported (there used to be a comment on the sqlite website stating that if you need file sizes above 1GB you may want to consider using an enterprise RDBMS; I can't find it anymore, so it might relate to an older version of sqlite).
However, for my purposes I'd like to get an idea of how bad it really is before I consider other solutions.
I'm talking about sqlite data files in the multi-gigabyte range, from 2GB onwards.
Anyone have any experience with this? Any tips/ideas?
So I did some tests with sqlite for very large files, and came to some conclusions (at least for my specific application).
The tests involved a single sqlite file with either a single table or multiple tables. Each table had about 8 columns, almost all integers, and 4 indices.
The idea was to insert enough data until sqlite files were about 50GB.
Single Table
I tried to insert multiple rows into a sqlite file with just one table. When the file was about 7GB (sorry, I can't be specific about row counts), insertions were taking far too long. I had estimated that inserting all my data would take 24 hours or so, but it did not complete even after 48 hours.
This leads me to conclude that a single, very large sqlite table will have issues with insertions, and probably other operations as well.
I guess this is no surprise: as the table gets larger, inserting and updating all the indices takes longer.
Multiple Tables
I then tried splitting the data by time over several tables, one table per day. The data from the original single table was split across ~700 tables.
This setup had no insertion problems; inserts did not take longer as time progressed, since a new table was created for each day.
Vacuum Issues
As pointed out by i_like_caffeine, the VACUUM command becomes more of a problem the larger the sqlite file is. As more inserts/deletes are done, the fragmentation of the file on disk gets worse, so the goal is to VACUUM periodically to optimize the file and recover file space.
However, as noted in another answer, a full copy of the database is made to do a vacuum, which takes a very long time to complete. So the smaller the database, the faster this operation finishes.
Conclusions
For my specific application, I'll probably be splitting out data over several db files, one per day, to get the best of both vacuum performance and insertion/delete speed.
This complicates queries, but for me, it's a worthwhile tradeoff to be able to index this much data. An additional advantage is that I can just delete a whole db file to drop a day's worth of data (a common operation for my application).
I'd probably have to monitor table size per file as well to see when speed becomes a problem.
It's too bad that there doesn't seem to be an incremental vacuum method other than auto_vacuum. I can't use it because my goal for vacuuming is to defragment the file (file space isn't a big deal), which auto_vacuum does not do. In fact, the documentation states it may make fragmentation worse, so I have to resort to periodically doing a full vacuum on the file.
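For reference, a full vacuum is a single statement; here is a minimal sketch of running it periodically, assuming the Xerial sqlite-jdbc driver and an illustrative per-day file name:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PeriodicVacuum {
    public static void main(String[] args) throws Exception {
        // VACUUM rewrites the whole file into a defragmented copy, so on a
        // large database expect it to run for a long time, as described above.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:day.db");
             Statement st = conn.createStatement()) {
            st.execute("VACUUM");
        }
    }
}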
We are using DBs of 50 GB+ on our platform. No complaints; it works great.
Make sure you are doing everything right! Are you using prepared statements?
SQLite 3.7.3:
Transactions
Prepared statements
Apply these settings (right after you create the DB)
PRAGMA main.page_size = 4096;
PRAGMA main.cache_size = 10000;
PRAGMA main.locking_mode = EXCLUSIVE;
PRAGMA main.synchronous = NORMAL;
PRAGMA main.journal_mode = WAL;
Hope this helps others; it works great here.
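A minimal sketch of applying these settings right after opening the database, assuming the Xerial sqlite-jdbc driver (the file name is illustrative):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PragmaSetup {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:big.db");
             Statement st = conn.createStatement()) {
            // Apply each PRAGMA once, immediately after the connection opens.
            st.execute("PRAGMA main.page_size = 4096");
            st.execute("PRAGMA main.cache_size = 10000");
            st.execute("PRAGMA main.locking_mode = EXCLUSIVE");
            st.execute("PRAGMA main.synchronous = NORMAL");
            st.execute("PRAGMA main.journal_mode = WAL");
        }
    }
}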
I've created SQLite databases up to 3.5GB in size with no noticeable performance issues.
If I remember correctly, I think SQLite2 might have had some lower limits, but I don't think SQLite3 has any such issues.
According to the Limits In SQLite documentation page, the maximum size of each database page is 32K, and the maximum number of pages in a database is 1024^3. So by my math that comes out to 32K × 1024^3 = 2^15 × 2^30 = 2^45 bytes, i.e. 32 terabytes as the maximum size.
I think you'll hit your file system's limits before hitting SQLite's!
Much of the reason that it took > 48 hours to do your inserts is because of your indexes. It is far faster to:
1 - Drop all indexes
2 - Do all inserts
3 - Create the indexes again (see the sketch below)
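A minimal sketch of that drop/insert/re-index pattern, assuming the Xerial sqlite-jdbc driver; the events table, its columns, and the index name are illustrative:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class BulkInsert {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:big.db")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS events(time INTEGER, value INTEGER)");
                st.execute("DROP INDEX IF EXISTS idx_events_time"); // 1 - drop all indexes
            }
            conn.setAutoCommit(false); // 2 - do all inserts in one big transaction
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO events(time, value) VALUES (?, ?)")) {
                for (int i = 0; i < 1_000_000; i++) {
                    ps.setLong(1, i);
                    ps.setLong(2, i * 2L);
                    ps.addBatch();
                    if (i % 10_000 == 0) ps.executeBatch(); // flush periodically
                }
                ps.executeBatch();
            }
            conn.commit();
            conn.setAutoCommit(true);
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE INDEX idx_events_time ON events(time)"); // 3 - re-create indexes
            }
        }
    }
}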
Besides the usual recommendations:
Drop indexes for bulk inserts.
Batch inserts/updates in large transactions.
Tune your buffer cache/disable the journal with PRAGMAs.
Use a 64-bit machine (to be able to use lots of cache(TM)).
[added July 2014] Use a common table expression (CTE) instead of running multiple SQL queries! Requires SQLite release 3.8.3 (see the sketch below).
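The 3.8.3 feature referenced here appears to be common table expressions (the WITH clause, added in that release). A minimal sketch, assuming the Xerial sqlite-jdbc driver, replacing a loop of separate queries with one recursive query:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CteExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite::memory:");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "WITH RECURSIVE cnt(x) AS (" +
                 "  SELECT 1 UNION ALL SELECT x + 1 FROM cnt WHERE x < 5" +
                 ") SELECT x FROM cnt")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1)); // prints 1 through 5
            }
        }
    }
}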
I have learnt the following from my experience with SQLite3:
For maximum insert speed, don't use a schema with any column constraints. (Alter the table later as needed.)
Optimize your schema to store what you need. Sometimes this means breaking down tables and/or even compressing/transforming your data before inserting into the database. A great example is storing IP addresses as (long) integers.
One table per db file - to minimize lock contention. (Use ATTACH DATABASE if you want to have a single connection object; see the sketch below.)
SQLite can store different types of data in the same column (dynamic typing), use that to your advantage.
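A minimal sketch of the ATTACH DATABASE tip, assuming the Xerial sqlite-jdbc driver; the file names, schema alias, and table name are illustrative:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AttachExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:day1.db");
             Statement st = conn.createStatement()) {
            // Attach a second one-table db file to the same connection.
            st.execute("ATTACH DATABASE 'day2.db' AS day2");
            // Tables in the attached file are addressed via the schema alias.
            try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM day2.events")) {
                if (rs.next()) {
                    System.out.println("rows in day2.events: " + rs.getLong(1));
                }
            }
        }
    }
}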
Question/comment welcome. ;-)
There used to be a statement in the SQLite documentation that the practical size limit of a database file was a few dozen GB. That was mostly due to the need for SQLite to "allocate a bitmap of dirty pages" whenever you started a transaction: 256 bytes of RAM were required for each MB in the database. Inserting into a 50 GB DB file would thus require (2^8)*(2^10)*50 = 50*2^18 bytes, i.e. about 13 MB of RAM.
But as of recent versions of SQLite, this is no longer needed.
I've experienced problems with large sqlite files when using the vacuum command.
I haven't tried the auto_vacuum feature yet. If you expect to be updating and deleting data often then this is worth looking at.
I think the main complaints about sqlite scaling are:
Single-process writes.
No mirroring.
No replication.
I have a 7GB SQLite database.
To perform a particular query with an inner join takes 2.6s
In order to speed this up I tried adding indexes. Depending on which index(es) I added, sometimes the query went down to 0.1s and sometimes it went UP to as much as 7s.
I think the problem in my case was that when a column holds many duplicate values (low cardinality), adding an index on it can degrade performance :(
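One way to check whether an index actually helps such a query is EXPLAIN QUERY PLAN. A minimal sketch, assuming the Xerial sqlite-jdbc driver and an illustrative table t with a low-cardinality status column:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class PlanCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:big.db");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "EXPLAIN QUERY PLAN SELECT * FROM t WHERE status = 1")) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                // The last column is the human-readable plan step,
                // e.g. "SCAN t" vs "SEARCH t USING INDEX ...".
                System.out.println(rs.getString(md.getColumnCount()));
            }
        }
    }
}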