@Alexey Right now, the satellite will never accept orders submitted through the old endpoint, and orders stored in the database can only be submitted through the old endpoint. So right now, anyone on any version (including 1.15.*) is unable to submit orders.db orders.
When we made the change initially, we assumed that by the time we disabled the endpoint, every node would have sent their orders from the DB, but we didn’t account for the case where there are expired orders in the database. I believe that is what is happening with @Odmin’s node. This should all be fixed with 1.16, but yes, an announcement might be a good idea if node operators are running into issues involving orders.db.
Hello, I need help resolving the unsent orders situation. On Windows, the only thing I can see is the folder (C:\Storj\Storage Node\orders\unsent) with 1743 files. This help request comes because of the October payment, where I had an estimate of 18 and received only 13. I would very much appreciate help solving this problem. My node is on version v1.15.3.
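(For reference, here is a rough sketch of how one could count how many of those unsent files are already past the expiry window. The path matches the folder above; the 48-hour window and the script itself are assumptions, not an official Storj tool.)

```python
# Rough sketch: count unsent order files and how many are already older
# than the assumed order expiry window. Not an official tool.
import time
from pathlib import Path

UNSENT_DIR = Path(r"C:\Storj\Storage Node\orders\unsent")
EXPIRY_HOURS = 48  # assumed expiry window, adjust to the real one

now = time.time()
files = [f for f in UNSENT_DIR.iterdir() if f.is_file()]
stale = [f for f in files if now - f.stat().st_mtime > EXPIRY_HOURS * 3600]

print(f"{len(files)} unsent order files, {len(stale)} older than {EXPIRY_HOURS}h")
```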
If it's Storj Labs' fault I totally agree… though if it's due to nodes not running under optimal conditions, then I would think it's really the SNO's own issue…
Well, if your node is 100% online and it's receiving bandwidth both up and down, then why would the SNO not be compensated when a function in the software ("submit orders") doesn't work as intended? It's outside the SNO's control if a function that is meant to be reliable doesn't work, so SNOs should be compensated for this fault. On the other side of the coin, I would expect Storj to work with SNOs to help identify the fault, with SNOs submitting logs or other data Storj requests, etc…
Both SNOs and Storj want this to be 100% reliable, but SNOs should be compensated for a software fault.
An order should never be lost; it should sit in a queue to be sent as expected, regardless of the reason, since it is an order and is used for payments.
Well, who says the other side still wants it if it has taken too long? The node may simply have been too busy to get around to trying to send it… I dunno…
Just saying it's possible it's more of a hardware issue, since many play fast and loose with the hardware requirements.
Just saying it's possible that some SNOs will eventually run their nodes on setups or with resources that are way too limited to ensure correct operation, and thus cause issues…
In such a case I wouldn't expect Storj Labs to be held accountable. I don't know if that is what happened here, nor how to figure out whether it is or isn't…
Well, in that case I would expect affected nodes to show a warning.
Because even though it could be one SNO’s fault, they can’t guess something’s off while the node keeps running and sending/receiving data normally according to the dashboard.
Situations where an SNO discovers too late that something was wrong and affecting its payments or audit/suspension score should be avoided as much as possible.
Things keep improving release after release in the long run though, so I'm pretty positive about that. There's still room for improvement, but we're getting there.
The reason I say it's possible that it's the SNO's fault… is that, thus far, I don't think I've really had any issues that weren't hardware related in one way or another…
But my system is also a bit overbuilt for what it's doing, and it is interesting that it doesn't seem to be affected by the issues that plague so many other nodes.
Maybe it's just random chance, but I would like to think it's due to near-perfect operation.
But yeah, maybe there should be a warning in case this is a problem… Of course it's difficult to predict the problem in advance, because if one could predict the problem, then one could also prevent it from happening in most cases.
Sounds a bit like the orders get filled and then the excess just ends up getting stuck, kinda like a jammed sink; then maybe one adds a garbage grinder to it, and the next time it's the tap water filter that's full of sand…
Fixing issues and monitoring them, preventing issues from happening or alerting that they are happening, can be very difficult… because issues always arise where one doesn't expect them…
And it takes a great many failures to make a continual success; look at cars or planes… it's taken a century or more of major development and still they blow up on occasion…
Sure, it would be lovely if we could eliminate every storagenode software issue, or at least be informed when one happens, but really I wouldn't hold my breath… It might take 5-10 years before the software is near perfect, and until then the stability of the hardware/resources is most likely directly correlated with node stability.
And thus some will essentially cause the issues themselves… Of course, without the issues the software would never get better at resisting them… so there is that…
We have dozens of SNOs (out of thousands) affected by the bug with unsent orders; however, they do exist.
So yes, it is indeed configuration-related; otherwise all of us (SNOs) would be affected.
So let's let the devs figure out why it happens and what conditions provoke the bug with unsent orders, and fix it.
It's never that black and white though. The file corruption itself is likely related to the setup, perhaps to unclean shutdowns and similar things. It would be completely fair to say the node or the SNO messed up those orders, so that file would not be processed. However, this issue seems to impact all other orders as well, and SNOs only find out because none of the following orders are sent either. I would say that is a flaw in the software: it should be able to just ignore a faulty order file and process the rest, as in the sketch below. I thought that fix was included in a previous release, but it seems that was only a partial fix.
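To make the "ignore the faulty file and process the rest" idea concrete, here is a minimal sketch. It is NOT the actual storagenode code; the path and the parse/submit helpers are hypothetical placeholders, it only illustrates the behaviour I mean.

```python
# Illustrative sketch only -- not the actual storagenode implementation.
# It shows "skip the corrupt order file, keep sending the rest".
from pathlib import Path

UNSENT_DIR = Path(r"C:\Storj\Storage Node\orders\unsent")

def parse_order_file(path: Path) -> bytes:
    """Hypothetical stand-in for decoding one unsent-orders file."""
    data = path.read_bytes()
    if not data:
        raise ValueError("empty or truncated order file")
    return data

def submit_orders(payload: bytes) -> None:
    """Hypothetical stand-in for submitting the decoded orders to the satellite."""
    pass

def send_all_orders() -> None:
    for order_file in sorted(UNSENT_DIR.iterdir()):
        try:
            payload = parse_order_file(order_file)
        except (OSError, ValueError) as err:
            # A corrupt file should only cost its own orders; the loop keeps
            # going instead of blocking every later file as well.
            print(f"skipping corrupt order file {order_file.name}: {err}")
            continue
        submit_orders(payload)
```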
Is compensation for SNOs fair? Maybe, in some cases. The problem is that there isn't really a good way to prove there was any bandwidth, because the nodes lost the proof. (And many SNOs were told to remove old order files, so those can no longer be used as proof even if the satellites were to ignore the expiration of the orders.)
But of course this also hinges on how often this has happened. It always seems like a lot of cases when something impacts payout, because everyone impacted comes here to post. But with possibly 10k nodes out there, if it impacts 10-20 users the problem is still quite small. In my experience Storj Labs makes things right when it's appropriate, but I don't have the data they have to determine whether it is appropriate in this case.
I back up all old unsent order files before deleting them from the unsent folder. And what if this issue gets worse and starts affecting all SNOs? Storj Labs must compensate affected SNOs.
In the past they have used surge payouts in the following months when an issue affected many nodes (in that case it was massive overcompensation as well). If it does affect a lot of nodes, I have no doubt they will make it right. They would first have to fix the actual issue though, so give them some time with that first.
Hi folks, I wanted to post an update from me and @Alexey.
Our devs are aware of the problem with broken uploads on a low-upstream channel, but this issue has a low priority right now and I do not expect this problem to be solved any time soon. Low priority does NOT mean unimportant; it just refers to the order of the task list - we have limited resources from engineering. At this point I don't have a timeline to report. When they ping me with that info, it will be posted here.
Unfortunately, we do not have a workaround for your specific case (the known workarounds were tried and did not help). We are sorry, but in your circumstances our service will not work properly.
If you choose, you may request to close your account and have the on-file credit card info removed.