Current V2 status

General discussions about Log4OM features
IW3HMH
Site Admin
Posts: 2110
Joined: 21 Jan 2013, 15:20
Location: Quarto d'Altino - Venezia (ITA)

Current V2 status

Post by IW3HMH » 27 Nov 2017, 13:28

Hi
As you probably know, I started a "low profile" rebuild of Log4OM into a "V2", mainly to optimize some functionalities of the current V1.
Log4OM has had a lot of improvements and extensions in the last 4 years, so some functionalities have been "attached" to the main software in a "non elegant" way.
The Log4OM code is highly modular and independent. All functionalities share a background layer where every piece of information is "shared", and each function can access this information layer for its own work.
On top of this background layer there are a couple of functions (aka services) that provide functionality to the "high level" features (aka the user interface).

This layer is actually working well, but updating existing features in some cases requires large code rewrites to maintain modularity and flexibility, and sometimes this is not possible (or rather, very complicated).

This is why I started "thinking about" a V2 version, with more flexibility, and as an experimental platform for new technologies (SignalR, for example).
Many of the improvements have been made and then trashed, because they were too complicated and complex to maintain for what I wanted.
Another reason is to keep Log4OM simple. Simple in the user interface, but also simple in the code: straightforward, easy to read, to maintain, to debug, and with the MINIMUM amount of technologies that could require user intervention.

As an example, SignalR uses the TCP protocol to let different applications talk to each other (this was implemented in the initial V2, now trashed, which was made modular via several .exe files). That version worked well as a platform, but it could have required some skill to deal with firewalls, networking and TCP ports, and could have created trouble for non-skilled users (computer issues, not ham radio related things).

The current version is mainly a white form with some "base functionalities" built on a common core. Functionalities already implemented are the link with external data sources (qrz.com / hamqth / ...), an internal "scheduler" that handles all time-sensitive operations of the code with a single thread and a queue (instead of having timers started by individual features), and a fully functional cluster (this one with a working UI) that supports multiple concurrent cluster services (one of the improvements of V2, hard to implement in V1). Other functionalities are implemented but still being worked on :)
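The single-thread scheduler idea can be sketched like this. This is a minimal Python illustration of the concept only; Log4OM itself is a .NET application, and the class and method names below are hypothetical, not its real code:

```python
import heapq
import itertools

class Scheduler:
    """One queue of time-sensitive tasks drained by a single loop,
    instead of a separate timer owned by each feature."""

    def __init__(self):
        self._queue = []               # heap of (due_time, seq, callback)
        self._seq = itertools.count()  # tie-breaker for equal due times

    def schedule(self, due_time, callback):
        heapq.heappush(self._queue, (due_time, next(self._seq), callback))

    def run_until(self, now):
        """Run every task whose due time has passed, earliest first."""
        while self._queue and self._queue[0][0] <= now:
            _, _, callback = heapq.heappop(self._queue)
            callback()

# Example: two unrelated features share the same scheduler.
fired = []
sched = Scheduler()
sched.schedule(5, lambda: fired.append("cluster refresh"))
sched.schedule(2, lambda: fired.append("qrz lookup"))
sched.run_until(10)
print(fired)  # ['qrz lookup', 'cluster refresh']
```

The design point is that features only enqueue work; ordering, timing and threading live in one place instead of being duplicated per feature.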

I'm currently "stuck" on database integration.
The database layer will be "modular", so different kinds of databases may be added as "plugins". This function is quite complex, so I'm still evaluating whether such a flexible, user-controlled modular system is really needed, instead of simply shipping a "release" with support for a new database type.

The modular system is already working, but I may still decide to trash it... KISS: "Keep It Simple, Stupid" is the way.
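As an illustration only, a plugin-style database layer along the lines described above usually boils down to a small abstract contract that the core codes against. The Python names below are hypothetical; Log4OM's real implementation is .NET and may look nothing like this:

```python
from abc import ABC, abstractmethod

class LogDatabase(ABC):
    """Hypothetical plugin contract: every backend implements the same
    small surface, so the core never sees a specific SQL dialect."""

    @abstractmethod
    def load_all_qso(self): ...

    @abstractmethod
    def save_qso(self, qso): ...

class InMemoryBackend(LogDatabase):
    """Stand-in backend; a SQLite or MySQL plugin would implement
    the same two methods."""
    def __init__(self):
        self._rows = []
    def load_all_qso(self):
        return list(self._rows)
    def save_qso(self, qso):
        self._rows.append(qso)

# The core depends only on the abstract interface:
def qso_count(db: LogDatabase) -> int:
    return len(db.load_all_qso())

db = InMemoryBackend()
db.save_qso({"call": "IW3HMH", "band": "20m"})
print(qso_count(db))  # 1
```

The KISS trade-off in the post is exactly here: this indirection is cheap to write but adds a layer to test and document for every backend, versus just shipping a release that hard-codes support for one new database type.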

Another thought on database structure. As an IT professional I work with complex ORACLE and MSSQL databases, with PL/SQL and T-SQL code: databases with a lot of tables, well structured and normalized.

But the "old" Log4OM relies on a single "omnicomprehensive" data table. Why? Because performance matters.
Loading 100,000 QSOs from a single table is straightforward. Loading 100,000 QSOs from a main table, and then enriching the data from x tables containing "additional information", requires a lot of database accesses.

Imagine a main data table with one QSO per row, and a slave data table with confirmations. One QSO in the main table may have 3-4 rows in the confirmation table: QSL, EQSL, LOTW, QRZ.COM, ... so 1 QSO requires a query on the main table and a query on the "secondary" table for the same data.
Then I should "parse" the data (pretty fast in memory) and move on to the next QSO. Now imagine a system with 1 main table and 10 secondary tables. 100,000 QSOs mean 1,000,000 queries (I don't need enriched data every time, but today Log4OM actually retrieves all information from a single row in a single pass)...
Now imagine this on a tablet PC...
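The arithmetic above can be checked with a quick back-of-the-envelope sketch (plain Python, no real database; the numbers are the ones from the post):

```python
# Query counts for the two designs: one denormalized table versus
# per-QSO lookups against secondary tables (the classic N+1 pattern).
qso_count = 100_000
secondary_tables = 10

# Single table: one SELECT returns every row with all its data.
single_table_queries = 1

# Normalized design, queried naively: one main SELECT, then one
# lookup per QSO per secondary table to enrich each row.
per_qso_queries = 1 + qso_count * secondary_tables

print(single_table_queries)  # 1
print(per_qso_queries)       # 1000001
```

A JOIN collapses those per-row lookups back into one statement, which is what the follow-up post below suggests; the remaining cost is parsing the multi-row result in code.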

Well, now you know almost everything about the current V2 development. Also consider that time is the most precious resource, and the rarest. My efforts right now are 80% on maintaining and optimizing V1 (some of the V2 changes have already been reflected in V1) and 20% on V2. V1 is still active, and currently under more development than previously expected (I planned a "maintenance mode" for V1, but continuous requests and integrations are keeping it under constant rework).

When is V2 planned? I'm pretty sure V2 will not be a complete "game changer". All main functionalities are already implemented in V1, which is quite stable and hardened. V2 will not be a "completely new" piece of software, as V1 modules have already been ported and optimized in V2 (and vice versa), so there is actually no planned V2 date. It's something in progress, without pressure, without hurry. It should be done well, almost perfect (on the code side, where the user hardly sees anything), without the shortcuts and "direct wiring" that I have sometimes used to connect parts of Log4OM V1 and that are currently preventing some huge reworks. There is no planned release date for V2. There will soon be a release of V1 :)
Daniele Pistollato - IW3HMH

w9mdb
Advanced Class
Posts: 63
Joined: 13 Jul 2014, 13:05

Re: Current V2 status

Post by w9mdb » 14 Dec 2017, 15:31

As for your DB query problem, won't an SQL join work to prevent your 1M query example?
Joins are not as fast as a single table, but if it makes the code easier to maintain, a slight performance hit may be worth it.

Mike W9MDB

G4DWV - Guy
Old Man
Posts: 305
Joined: 11 Sep 2014, 17:02

Re: Current V2 status

Post by G4DWV - Guy » 16 Dec 2017, 22:12

Thanks for the update. I was disappointed to read that 80% of your (valuable) time is being spent on v1. I say that because v1 is such a fantastic program; I really cannot wait to see what you can come up with for its replacement :) .
73 de Guy G4DWV 4X1LT
You've never known happiness until you're married; but by then it is too late.

IW3HMH
Site Admin
Posts: 2110
Joined: 21 Jan 2013, 15:20
Location: Quarto d'Altino - Venezia (ITA)

Re: Current V2 status

Post by IW3HMH » 18 Jan 2018, 15:24

w9mdb wrote:
14 Dec 2017, 15:31
As for your DB query problem, won't an SQL join work to prevent your 1M query example?
Joins are not as fast as a single table, but if it makes the code easier to maintain, a slight performance hit may be worth it.

Mike W9MDB
Ciao Mike, yes, of course, if the join is straight. If you have a master row and, for example, 10 rows for awards, 5 rows for confirmations/QSLs, and 3-4 rows for other things, you will have a multiple-row join to be managed by code. Everything is possible, but Log4OM currently loads and scans 50,000 QSOs in 0.11 sec, from database to in-memory data structure. Making a join and parsing the resulting Cartesian product in software would make it slower. This is why some fields are stored as CSV inside single fields... it's faster to do a SPLIT in memory than a join on disk.
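A tiny sketch of the CSV-in-a-field approach described above (hypothetical Python with made-up column names, not Log4OM's actual schema):

```python
# One denormalized row per QSO: the confirmations live in a single
# CSV text field instead of 3-4 rows in a secondary table.
row = {
    "call": "W9MDB",
    "band": "20m",
    "confirmations": "QSL,EQSL,LOTW,QRZ.COM",
}

# "Enriching" the QSO is a split in memory, not a join on disk:
confirmations = row["confirmations"].split(",")
print(confirmations)  # ['QSL', 'EQSL', 'LOTW', 'QRZ.COM']
```

The trade-off is the usual one: the CSV field is fast to load and split but cannot be indexed or queried per-confirmation by the database, which is acceptable when, as here, the application always loads whole rows anyway.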
Daniele Pistollato - IW3HMH

