View Issue Details
|ID||Project||Category||View Status||Date Submitted||Last Update|
|0002094||3 - Current Dev List||Bug||public||2017-07-04 14:51||2019-03-10 17:54|
|Target Version||Fixed in Version||18.104.22.168|
|Summary||0002094: Remove options to reset spots in DX cluster|
|Description||These options were added in an attempt to fix a cluster disconnect problem that was due to other causes.|
But with this "option" still there, users are seeing (a) duplicate spots after reconnect and (b) incorrect sorting of spots.
Getting rid of these "options" and causing a reset to happen whenever a reconnect occurs covers all use cases.
I'll post an image of this at a later date.
|Tags||No tags attached.|
ResetSpots.PNG (68,997 bytes)
I could use a little clarification or confirmation. Here's my take:
1) We should remove the options from the UI.
2) The code should behave as if the "Reset on new connection" option is NOT enabled.
3) The code should behave as if the "Reset on automatic reconnection" option IS enabled.
Is that correct? From my read of the code, that means that when DX Cluster reconnects automatically (due to the connection dropping, and needing to retry and reconnect automatically), it will erase all the previously seen spots.
Is that what is desired? Should effort be put into fixing whatever deeper issue makes this reset necessary? It seems like a pretty invasive thing to lose a connection, regain it, then drop all the previous spots.
||reassigned for feedback / question|
You're probably right... but here's a few things to consider.
When a cluster connection is made, Logbook sends (what essentially is) a "login script". This login script tells the cluster to send back a number of things including cluster Announcements, solar weather (WWV and WCY), and DX spots. For each of these, the user can select the quantity of each.
Specific to DX spots - the user can select increments from 0 to 500.
When that quantity of spots are received, Logbook sorts them by date/time and displays them.
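The connect sequence described above can be sketched as follows. The `sh/dx` command and the 0-500 backlog range come from the notes in this issue; the `Spot` record and the helper function names are hypothetical illustrations, not Logbook's actual code.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Spot:
    """Minimal spot record for illustration."""
    time: datetime
    freq_khz: float
    dx_call: str
    spotter: str
    comment: str = ""

def login_script(spot_count: int) -> list[str]:
    """Build the spot-request part of the 'login script' sent on connect.
    spot_count is the user-selected backlog quantity (0..500 per the notes);
    announcement/WWV/WCY requests would be added alongside it."""
    if not 0 <= spot_count <= 500:
        raise ValueError("spot backlog must be between 0 and 500")
    return [f"sh/dx {spot_count}"]

def display_order(spots: list[Spot]) -> list[Spot]:
    """Sort received spots by date/time before display."""
    return sorted(spots, key=lambda s: s.time)
```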
[When users have NOT reset the spots on Connect or Reconnect, they have complained about seeing unsorted duplicate spots in the display. So...]
Clearing the spots ensures that there are no duplicate spots and that they are all sorted correctly.
I think Erik/Rick added the "Reset Spots on Reconnect" to mask a completely different problem. The code should behave as though that option IS enabled (as you suggest).
I'm pretty sure Simon created the "Reset Spots on Connect" option. The only good reason for NOT using it is if you want to bounce between different clusters and collect spots from each of them (say, a DX cluster like mine, then a skimmer node - which adds two different styles and sources). The problem is that I don't think the combined spots will be sorted, though they may not be duplicates. I'm open to either state for this one.
||Thanks -- I guess I follow that. But why not fix sorting between connections by assuring that sorting just works correctly? The local data might be sourced from multiple DX cluster servers. Why wouldn't the DX Cluster dialog be fixed to sort the data by the observation timestamp, regardless of its source? If the only reason to clear previously received data is to hide the lack of correct sorting, fixing sorting should be preferred over clearing data.|
To the question, "why not fix sorting between connections by assuring that sorting just works correctly?" The answer is: because 100% of the spots will be identical and redundant in the "Reset spots on Reconnect" scenario. In that case, all of them would need to be re-sorted, and the duplicate half discarded, just to get back to the condition the user would be in if they had used "reset spots on reconnect."
The "login script" is going to send "sh/dx" or "sh/mydx" followed by a number that describes how many spots to send. ALL of them will be duplicates when "Reset spots on Reconnect" is used. Those duplicates could be sorted... but people will complain about the need to remove the (now sorted) duplicates. It's simply an unneeded "feature." Removing the "Reset spots on Reconnect" option and defaulting that behavior to "always reset spots on reconnect" is the right thing to do. In that scenario, there are no duplicates to remove and no re-sorting to do. We've received complaints about unsorted duplicates from people who are not using "reset spots on reconnect."
The "Reset on Connect" actually does account for the use case where cluster data is sourced from multiple DX Clusters... and that's why it was included by Simon. A reasonable feature.
||This one warrants a discussion.|
I haven't been invited to participate in any conversation, so I'm not sure why this is assigned to me. In the interest of moving things forward, here's my take:
It seems like it's easy to detect if a given spot is a duplicate. Two spots are duplicates if they have the same values in the "Time", "Frequency", "DX Call", and "Spotter" fields.
We haven't agreed on a definition of "Reset spots". To me, it means that the content of the DX Cluster window -- the displayed list of spots -- is emptied. Is that a correct definition?
If so, I think the simplest course of action is to:
1) We should remove the options from the UI.
2) Change the code so that no reset -- no complete erasure of the DX cluster window -- ever automatically occurs.
3) When a spot arrives, see if it is a duplicate of a spot already known. If so, no further processing (including alarm evaluation) is done. It's not added to the list because it's already in the list.
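The proposal above - treat two spots as duplicates when "Time", "Frequency", "DX Call", and "Spotter" all match, and skip all further processing for a duplicate - can be sketched like this. The `Spot` record and class names are hypothetical; this is an illustration of the suggested approach, not Logbook's implementation.

```python
from collections import namedtuple

# Minimal spot record; the field names follow the columns named above.
Spot = namedtuple("Spot", "time freq_khz dx_call spotter comment")

def spot_key(spot: Spot) -> tuple:
    """Two spots are duplicates iff these four fields match."""
    return (spot.time, spot.freq_khz, spot.dx_call, spot.spotter)

class SpotList:
    """Displayed spot list with duplicate suppression on arrival."""
    def __init__(self):
        self.spots = []
        self._seen = set()

    def add(self, spot: Spot) -> bool:
        key = spot_key(spot)
        if key in self._seen:
            return False   # duplicate: no alarm evaluation, not displayed
        self._seen.add(key)
        self.spots.append(spot)
        return True
```

The duplicate check is a cheap set lookup, so a burst of re-sent spots after reconnection costs almost nothing compared to re-deriving display columns for every spot.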
The claim is made that "100% of the spots will be identical and redundant in the 'Reset spots on Reconnect' scenario". How can we know that all of those spots would be 100% duplicate? If time passes between disconnecting and reconnecting, why isn't it possible that new spots arrive in the interim? It might be that almost all of the initially requested spots on re-connection are spots already seen, but isn't it possible that one or more spots have arrived between the disconnection and re-connection?
Why can't the code be made to handle reconnections differently? It would be helpful to not make a long request at startup, for both the client and the server. Is it not possible to request spots since a certain time (the last seen spot)? That would cover the interim reporting case (if it exists). If the long initial list must be requested, why can't received spots simply be discarded if they're already in the client's displayed list?
Resetting the list -- clearing it and loading it again -- requires a great deal of work on the client because, for each spot, the client must compute the Country, Mode, Band, S (what's "S"?) columns again. Managing duplicates is an appealing approach compared to completely erasing the previous spots because it would avoid re-doing that expensive work.
Right. I was going to assign it to you and then plan some time to discuss it. It can be assigned to me... and we plan to discuss it.
Either way... we should have a conversation about it. (i.e., no conversation has occurred yet.)
Adding some additional notes. I'll pull together more comments later.
There are pretty much two issues at work here.
One - the problem as it stands is that cluster spots should always get refreshed when the connection is re-made, for whatever reason. The reason is that the initial "login script" is going to ask for some number of spots (sh/dx or sh/mydx xxx). If the spots aren't cleared, the real problem isn't the duplicates; it's that the combined list appears out of order. This is annoying people. They believe the spots aren't being sorted correctly. The problem isn't the sorting; the problem is that the spots aren't cleared when the connection is re-made. The existing spots don't really matter because new ones are going to replace them.
The simplest way around this is to get rid of these two options and 'always reset spots' on either "Reconnect" or "Connect." I'm open for a discussion about this.
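The "always reset" behavior proposed here can be sketched as a single code path shared by first connect and automatic reconnect. The class and method names are hypothetical illustrations; the `sh/dx` request follows the login-script description earlier in this issue.

```python
class ClusterWindow:
    """Spot display that unconditionally resets on every (re)connection,
    as proposed above: the login script re-requests the backlog anyway,
    so nothing of value is lost by clearing first."""
    def __init__(self, backlog: int = 100):
        self.backlog = backlog  # user-selected spot quantity (0..500)
        self.spots = []

    def on_connected(self) -> list[str]:
        # Same path for first connect and automatic reconnect:
        # clear the displayed list, then ask the node for fresh spots.
        self.spots.clear()
        return [f"sh/dx {self.backlog}"]
```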
Two - the problem not being addressed here... and I see it as a separate issue completely... is whether or not we provide a new feature that de-duplicates spots based on some criteria.
I love the way BandMaster does it. If we did something like that, it would be cool. But probably a major undertaking. (Bandmaster puts new spots for the same call on band & mode under the latest spot for that call. It has an expansion arrow so you can expand the previous if you like.)
The problem with getting rid of duplicate spots based on certain criteria is that people want to be able to go back to the previous comment field. This is very useful for a number of reasons. It tells a DXer whether or not the station is split... what's the split... who's the operator... how do I QSL... and so on. So if we de-duplicate spots in real-time, we would need to be careful about purging spots and losing the fidelity of previous comments.
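The BandMaster-style behavior described above - collapsing repeat spots for a call on a band/mode under the newest one while keeping earlier comments expandable - could be sketched like this. All names here are hypothetical; this only illustrates how de-duplication could avoid purging the comment history (split, QSL route, operator, etc.).

```python
from collections import defaultdict

class GroupedSpots:
    """Group repeat spots under the newest one instead of discarding
    them, so earlier comments remain available for expansion."""
    def __init__(self):
        self._groups = defaultdict(list)

    def add(self, dx_call: str, band: str, mode: str, comment: str):
        """Record a spot's comment under its (call, band, mode) group."""
        self._groups[(dx_call, band, mode)].append(comment)

    def latest(self, dx_call: str, band: str, mode: str) -> str:
        """The comment shown in the collapsed row (newest spot)."""
        return self._groups[(dx_call, band, mode)][-1]

    def history(self, dx_call: str, band: str, mode: str) -> list:
        """Earlier comments, oldest first, shown when expanded."""
        return self._groups[(dx_call, band, mode)][:-1]
```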
||We just simply need to remove both of these options to reset the spots and cause a reset under both of these conditions unconditionally. (The reason for this is that the so-called login script will regenerate the spots in both situations regardless. So let's remove them. No one requested that they be added. Having them causes problems. The person who added them was trying to hide a chronic disconnect problem... which it did not.)|
We never had the conversation about managing duplicates. After more than eight months, this issue was reassigned to me, so I must assume we're simply not interested in having that conversation.
I've removed the options, and the code will behave as if the options were both set. That is, the spot list will reset on the first connection and will reset on reconnection, as well.
The change is made with this checkin:
||Validated (problems related to duplicates and sorting are eliminated with this change)|
|2017-07-04 14:51||WA9PIE||New Issue|
|2017-07-04 14:57||WA9PIE||File Added: ResetSpots.PNG|
|2017-07-12 23:48||K7ZCZ||Note Added: 0003643|
|2017-07-12 23:48||K7ZCZ||Assigned To||=> WA9PIE|
|2017-07-12 23:48||K7ZCZ||Status||new => feedback|
|2017-07-12 23:48||K7ZCZ||Note Added: 0003644|
|2017-08-29 00:24||WA9PIE||Note Added: 0004081|
|2017-08-29 00:25||WA9PIE||Assigned To||WA9PIE => K7ZCZ|
|2017-08-29 00:57||K7ZCZ||Note Added: 0004082|
|2017-08-29 19:15||WA9PIE||Note Added: 0004083|
|2017-08-29 19:15||WA9PIE||Status||feedback => assigned|
|2017-09-18 00:14||WA9PIE||Project||3 - Current Dev List => 2 - Next Dev List (Holding Area)|
|2018-06-13 23:55||WA9PIE||Note Added: 0005293|
|2018-06-23 11:06||K7ZCZ||Status||assigned => feedback|
|2018-06-23 11:06||K7ZCZ||Note Added: 0005394|
|2018-06-23 11:06||K7ZCZ||Assigned To||K7ZCZ => WA9PIE|
|2018-06-23 17:02||WA9PIE||Note Added: 0005404|
|2018-06-27 14:18||WA9PIE||Note Added: 0005503|
|2019-03-02 01:09||WA9PIE||Note Added: 0007562|
|2019-03-02 01:09||WA9PIE||Assigned To||WA9PIE => K7ZCZ|
|2019-03-04 18:02||K7ZCZ||Note Added: 0007574|
|2019-03-04 18:02||K7ZCZ||Status||feedback => resolved|
|2019-03-04 18:02||K7ZCZ||Resolution||open => fixed|
|2019-03-04 18:02||K7ZCZ||Testing||=> Not Started|
|2019-03-05 01:45||WA9PIE||Note Added: 0007575|
|2019-03-05 01:46||WA9PIE||Project||2 - Next Dev List (Holding Area) => 3 - Current Dev List|
|2019-03-06 21:45||K7ZCZ||Fixed in Version||=> 22.214.171.124|
|2019-03-10 17:54||WA9PIE||Status||resolved => closed|
|2019-03-10 17:54||WA9PIE||Testing||Not Started => Beta Successful|
|2019-03-10 17:54||WA9PIE||Note Added: 0007660|