Slow queries 2

w9mdb
Old Man
Posts: 434
Joined: 13 Jul 2014, 12:05

Slow queries 2

Post by w9mdb »

The compaction of all QSLs into one field makes for an extremely slow query.

All the QSL information is in qsoconfirmations, which means that in order to find any particular group of QSOs...like LOTW uploads with Requested status...the entire database must be retrieved and the client has to walk through that field and all its QSL information. That pretty well defeats the purpose of having a database.

Please consider normalizing this.....or putting it in another table like it should have been, so a join query can be done and the speed will come back.
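Just to illustrate what I'm asking for (the table and column names here are only a sketch of mine, not your schema): with the confirmations in their own indexed table, the "LOTW uploads with Requested status" lookup becomes a simple join instead of a full fetch:

-- hypothetical side table: one row per QSO per confirmation service
SELECT l.qsoid, l.callsign, l.band, l.mode, l.qsodate
FROM log l
JOIN qsl_confirmations c ON c.qsoid = l.qsoid
WHERE c.service = 'LOTW'
  AND c.status = 'R'        -- R = requested
ORDER BY l.qsodate DESC;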

Mike W9MDB
IW3HMH
Site Admin
Posts: 2925
Joined: 21 Jan 2013, 14:20
Location: Quarto d'Altino - Venezia (ITA)

Re: Slow queries 2

Post by IW3HMH »

Please define "extremely slow query" and the number of QSOs in your log.
Daniele Pistollato - IW3HMH
w9mdb
Old Man
Posts: 434
Joined: 13 Jul 2014, 12:05

Re: Slow queries 2

Post by w9mdb »

The debug log doesn't show the total time...just the query time. I've got over 34,000 QSOs in the log.
I've tested the actual query by itself and it takes a little over 2.5 seconds, split evenly between query and fetch according to MySQL Workbench timing.
In order to get any QSL status with this database setup you have to fetch the entire database, and apparently the client then runs LINQ queries over that field, which is effectively a second pass over all the data in memory and nowhere near as fast as an indexed database field.

SELECT qsoid, callsign, band, mode, qsodate, address, stationcallsign, dxcc, arrlsect, qsocomplete, age, aindex, antaz, antel, antpath, antenna, arrlcheck, bandrx, callsignurl, `class`, cnty, comment, cont, contactassociations, contactedop, contestid, country, cqzone, distance, eqcall, email, forceinit, freq, freqrx, gridsquare, ituzone, kindex, lat, lon, maxbursts, msshower, myassociations, mydxcc, mylat, mylon, mycity, mycnty, mycountry, mycqzone, mygridsquare, myituzone, myname, mypostalcode, mystreet, myrig, mysig, mysiginfo, mystate, name, notes, nrbursts, nrpings, pfx, `operator`, ownercallsign, precedence, propmode, programid, programversion, qslmsg, qslvia, qsoenddate, qsorandom, qth, rxpwr, sig, siginfo, rstrcvd, rstsent, satelliteqso, satmode, satname, sfi, srx, srxstring, state, stx, stxstring, swl, txpwr, qsoconfirmations, contactreferences, myreferences from log ORDER BY log.QsoDate DESC
Then Log4OM takes almost another 2.5 seconds to process the records. I get 100% CPU usage for just under 5 seconds just trying to select the LOTW records to upload.
With a normalized database the query/fetch/display should take less than 300ms, but since you can't query or index the QSL fields it can't be done that fast.

Under Log4OM 1 the same LOTW operation doesn't even create a blip on the performance graph, and I see 227ms for the LOTW query on that side.
SELECT qsoId, address, age, AIndex, AntAz, AntEl, AntPath, ArrlSect, ArrlCheck, TheClass, CreditSubmitted, CreditGranted, TenTen, MyAntenna, Band, bandRx, log.call, comment, cont, contactedOp, contestId, country, Cqz, Distance, dxcc, eqCall, email, eqslQslrdate, eqslQslsdate, eqslQslRcvd, eqslQslSent, ForceInit, Freq, FreqRx, gridsquare, iota, iotaIslandId, Ituz, KIndex, lat, lon, MaxBursts, msShower, myLat, myLon, lotwQslrdate, lotwQslsdate, lotwQslRcvd, lotwQslSent, mode, myCity, myCnty, myCountry, MyCqZone, myGridsquare, myIota, myIotaIslandId, MyItuZone, myName, myPostalCode, myRig, mySig, mySigInfo, myState, myStreet, Name, notes, NrBursts, NrPings, pfx, theOperator, ownerCallsign, precedence, propMode, publicKey, programid, programversion, qslmsg, qslrdate, qslsdate, qslRcvd, qslRcvdVia, qslSent, qslSentVia, qslVia, qsoComplete, qsoDate, qsoDateOff, qsoRandom, qth, rig, Sfi, rstRcvd, rstSent, satMode, satName, sig, sigInfo, srx, srxString, state, stationCallsign, stx, stxString, swl, timeOff, timeOn, validatedCallsign, manualDateTime, mobile, qrp, portable, callsignUrl, homepageUrl, contest, cnty, submissionstatus, ClubLogQsoUploadDate, ClubLogQsoUploadStatus, AwardSubmitted, AwardGranted, Fists, FistsCc, HrdlogQsoUploadDate, HrdlogQsoUploadStatus, MyFists, MySotaRef, MyUsacaCounties, MyVuccGrids, QrzcomQsoUploadDate, QrzcomQsoUploadStatus, RxPwrDecimal, Skcc, SotaRef, SubMode, TxPwrDecimal, UsacaCounties, VuccGrids, QsoAwards FROM LOG WHERE (lotwqslsent = 'R' OR lotwqslsent = 'Q') ORDER BY QsoDate DESC, TimeOn DESC;


You'll have to forgive me...I'm a speed freak and when I know something should be more than 10 times faster it tends to stick out to me.
w9mdb
Old Man
Posts: 434
Joined: 13 Jul 2014, 12:05

Re: Slow queries 2

Post by w9mdb »

On another note about speed: the queries from JTAlert look like this on every decode (before JTAlert caches the info)
select `qsodate` FROM `log` WHERE (`mode` LIKE 'FT8%') AND (`band` = '10m') AND (`callsign` = 'W1HS') ORDER BY `qsoid` DESC;
And one like this
SELECT dxcc, cont, cqzone, ituzone, state, name, qth, gridsquare, band, mode, pfx, cnty, address, qsodate FROM log WHERE callsign='WA1SXK' ORDER BY qsoid ASC;

These were taking 400ms in Workbench, and I noticed the indexed fields didn't include callsign, for example, which is an important index.
I added that and the queries are now lightning fast, showing 0.000 sec and 0.015 sec in Workbench. JTAlert is now MUCH faster when displaying decodes.
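For the record, what I added was nothing fancy, just something along these lines (the exact statement and index name are my own):

-- index so the JTAlert lookups can seek by callsign instead of scanning the table
CREATE INDEX idx_log_callsign ON log (callsign);
-- a composite index on (callsign, band, mode) would cover those queries even more tightly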

Mike W9MDB
IW3HMH
Site Admin
Posts: 2925
Joined: 21 Jan 2013, 14:20
Location: Quarto d'Altino - Venezia (ITA)

Re: Slow queries 2

Post by IW3HMH »

There is still wide room for optimization.
Currently Log4OM relies on some inner structures (JSON) stored in text fields, and performs the unboxing in memory.
That gives us the flexibility to add more external services in the future simply by adding an enumeration to a class, without needing to do any work on the DB side.

Putting all of those values into individual fields would easily push the table past 120 columns. The same logic is also used for boxing/unboxing of references, and there can be a lot of those.
As I said, normalizing the database would require at least 3 more tables, with multiple queries and in-memory joining of the data by parsing it row by row.

I know it's not the best way, but it's probably the most optimized for the process when seen end-to-end :)
Maybe one day I will refactor everything, but before starting with V2 I already ran some tests with multiple tables and the results were not better, at the cost of a higher amount of disk access.
Daniele Pistollato - IW3HMH
IW3HMH
Site Admin
Posts: 2925
Joined: 21 Jan 2013, 14:20
Location: Quarto d'Altino - Venezia (ITA)

Re: Slow queries 2

Post by IW3HMH »

That's strange.
The file I'm distributing with Log4OM has this index (the PK acts as an index):

PRIMARY KEY (
band,
callsign,
mode,
qsodate
)
Daniele Pistollato - IW3HMH
w9mdb
Old Man
Posts: 434
Joined: 13 Jul 2014, 12:05

Re: Slow queries 2

Post by w9mdb »

Hmmm...I wouldn't think 3 more tables would be needed. That might be over-normalizing.
Essentially it would be the same as the JSON structure, but with that data in an indexed table that can be joined.
So adding a new service wouldn't require any code changes at all....just a database update.
At first blush that's what it sounds like to me.
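To illustrate that last point (same hypothetical table and names as the sketch above): a new confirmation service would just be new rows with a different service value, no schema change:

-- hypothetical: recording a QRZ.com upload request for QSO 12345
INSERT INTO qsl_confirmations (qsoid, service, status, sentdate)
VALUES (12345, 'QRZCOM', 'R', CURDATE());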

Mike
IW3HMH
Site Admin
Posts: 2925
Joined: 21 Jan 2013, 14:20
Location: Quarto d'Altino - Venezia (ITA)

Re: Slow queries 2

Post by IW3HMH »

But I would need multiple queries and an in-memory walk of two lists to match the data and fill each QSO class with its own data.
That is much slower than a single JSON deserialization done once, in memory.
Daniele Pistollato - IW3HMH
w9mdb
Old Man
Posts: 434
Joined: 13 Jul 2014, 12:05

Re: Slow queries 2

Post by w9mdb »

I don't see 2+ seconds of running through the JSON fields as fast.
It may not be as easy as just a 2nd table...which would require multiple QSL entries per QSO. It would link to the id field in the primary table and contain rows of QSL info for LOTW, eQSL, etc. So 8 rows per QSO....but comparatively short rows.
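Roughly something like this (just a sketch, the names are mine):

-- one short row per QSO per confirmation service
CREATE TABLE qsl_confirmations (
  qsoid    INT NOT NULL,           -- links to log.qsoid
  service  VARCHAR(16) NOT NULL,   -- 'LOTW', 'EQSL', 'QSL', 'CLUBLOG', ...
  status   CHAR(1),                -- e.g. 'R' = requested, 'Y' = confirmed
  sentdate DATE,
  rcvddate DATE,
  PRIMARY KEY (qsoid, service),
  KEY idx_service_status (service, status)
);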

You want my ADIF file for testing?

Mike
w9mdb
Old Man
Posts: 434
Joined: 13 Jul 2014, 12:05

Re: Slow queries 2

Post by w9mdb »

Also remember to add an index on callsign.

Mike