I think the issue is that the used key in the bandwidth dictionary is not being updated on the /api/sno/ route. It’s probably reading from the database and not the cache.
Similarly, for /api/sno/satellites/, the bandwidthDaily list is not updated for the latest day.
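If anyone wants to check this on their own node, here is a minimal sketch, assuming the dashboard listens on the default port 14002 and jq is installed (the exact JSON field names may differ between versions):

curl -s http://localhost:14002/api/sno/ | jq '.bandwidth'
curl -s http://localhost:14002/api/sno/satellites/ | jq '.bandwidthDaily[-1]'

If the used value in the first call lags behind the traffic visible in the logs, or the last bandwidthDaily entry isn't for today, that would match the behaviour described above.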
Thank you for managing to turn a Go application, the beauty of which lies in its static build, into an unportable one. I prefer not to install compilers on my servers for security reasons and to keep the system clean. However, due to the CGO requirements, I’m now unable to build your benchmark on my local machine and copy it to the server. Learning about Storj Labs’ development style!
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -tags netgo -ldflags '-w' ./cmd/tools/piecestore-benchmark/
# storj.io/storj/shared/dbutil/sqliteutil
shared/dbutil/sqliteutil/db.go:86:28: undefined: sqlite3.Error
shared/dbutil/sqliteutil/db.go:87:25: undefined: sqlite3.ErrConstraint
shared/dbutil/sqliteutil/migrator.go:104:24: destDB.Backup undefined (type *sqlite3.SQLiteConn has no field or method Backup)
I’m not a Go developer so I’m not quite sure what’s going on with CGO, but I spun up an Ubuntu docker container, built the binary in there, then copied the binary to a Slackware server (Unraid) and ran it without any problems.
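For reference, a rough sketch of that container-based build, assuming Docker is available and using the official golang image (pick whichever Go version the repo currently requires; the target host and path are placeholders):

# Run from the root of the storj clone. Build in a throwaway container with CGO
# enabled (the default); the golang image ships a C toolchain, which is all the
# bundled go-sqlite3 driver needs.
docker run --rm -v "$PWD":/src -w /src golang:1.22 \
    go build -o piecestore-benchmark ./cmd/tools/piecestore-benchmark/

# Copy the binary to the server. It is dynamically linked against glibc, so the
# target needs a compatible glibc (as noted above, this worked fine on Unraid).
scp piecestore-benchmark user@server:/usr/local/bin/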
@d4rk4 - sqlite is a very valuable library! There aren't many other in-process databases of comparable quality, so while using sqlite complicates the build, we think it is worth it.
You people are exploiting new storage node operators by sending garbage data for the last three months. I have a 13.1 TB storage node with ID 1QGFX2rNUSoK4yi5AdG5yf1Jc3KS97vpjoKEorgNoWBMvuzRY2.
For the last three months my stored data has stayed flat at 1.5-1.6 TB, even though the node consumes over 1.5 TB of bandwidth month after month. For the first month and a half I had around 1.9 TB stored; now the node holds 1.6 TB out of 13.1 TB. My second node, on a different IP, has the same issue. I am thinking of stopping the node at the end of this month. Storj has given no proper explanation, only generic replies like "check your logs" and "update your node". Since I run the node on Windows, it updates automatically. I am not running the node to earn 3 to 4 dollars a month. As far as I know, Storj is struggling to attract new customers the way it used to, and even the community-maintained earnings spreadsheet is far from reality.
It’s not that complicated. If you have extra resources available – run the node. If you don’t — don’t.
Storj does not owe anyone an explanation of how they use the resources node operators provide. You get paid according to the published and agreed-upon schedule.
Ingress of massive amounts of data with a very short TTL is possible and normal. You agreed to that, so I'm not sure what the concern is.
That's the kind of answer I was looking for, since Storj doesn't have a proper one. I run other nodes that are more profitable than earning pennies via Storj. If Storj doesn't have an answer, that's fine. I don't have time to waste in a community forum; where there is profit, there is interest, and plenty of people have time to spend in social forums.
It’s a learning experience and just fun for most of the more active people on the forum.
You seem to have been unlucky in that your nodes mostly held test data. They will fill up with more permanent data. It just takes a while. Also, the recent testing is because big customers want to onboard. I’d say it’s worth sticking around to see how that would end up. But just to be clear, customer usage is growing exponentially and has been for a while. So your statements in your previous message are incorrect. This mass deletion of data is likely a one time thing. But it’s up to you if you want to stay.
I'm very impressed with my nodes' performance after the recent improvements in the storagenode software.
Syno with 1 GB RAM running 2 nodes, each with 4 TB of data plus almost 1.5 TB of trash; I deleted the databases and started the used-space filewalker to recreate them. The only improvements I could make myself were moving the databases to a USB flash drive and using ext4 with noatime for the drives and the stick (a rough sketch of that setup is at the end of this post).
In 4 days, both filewalkers finished, at the same time as the retain processes and trash cleanup.
After 5 days, my databases were back, and the 3 TB of trash had shrunk to 500 GB.
The activity of the drives, after all these services finished, is around 4%.
So the improvements are awesome.
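For anyone who wants to replicate the database move and the noatime mounts, a rough sketch; the device names, mount points and paths below are placeholders, and storage2.database-dir is, to the best of my knowledge, the config key for relocating the databases, so double-check it against your own config.yaml:

# mount the data drive and the USB stick with noatime (example devices and mount points)
mount -o noatime /dev/sdb1 /mnt/node1
mount -o noatime /dev/sdc1 /mnt/usb-dbs

# config.yaml: point the node's SQLite databases at the stick, copy the existing
# *.db files over while the node is stopped, then start it again
storage2.database-dir: /mnt/usb-dbs/node1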
Hi everyone! I recently merged a few changes that I expect will improve performance of the piecestore benchmark (and the storagenode eventually). One of my benchmarks with an HDD showed an improvement of over 30%, but I’m curious what folks from the community with their setups will see.
To run the new benchmark, you’ll need to:
git clone https://github.com/storj/storj
cd storj
go install ./cmd/tools/piecestore-benchmark/
cd /mnt/hdd
mkdir benchmark
cd benchmark
piecestore-benchmark -pieces-to-upload 100000
Then, to get a baseline measurement (the performance before the patches), repeat the same steps with the pre-patch commit checked out. If you still have the clone and the benchmark directory from the run above, do this in a separate location so the two runs don't collide:
git clone https://github.com/storj/storj
cd storj
git checkout ae5dc146a3d33a2c5f2a62ade7e9293c8801b751
go install ./cmd/tools/piecestore-benchmark/
cd /mnt/hdd
mkdir benchmark
cd benchmark
piecestore-benchmark -pieces-to-upload 100000
(you can lower the -pieces-to-upload parameter if you have a slower node)
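One note on the steps above: go install puts the binary in $(go env GOPATH)/bin (usually ~/go/bin), so either add that directory to your PATH or call the binary by its full path:

export PATH="$(go env GOPATH)/bin:$PATH"
# or invoke it directly:
"$(go env GOPATH)/bin/piecestore-benchmark" -pieces-to-upload 100000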
Thanks in advance to everyone who tests, as this might influence further development. Thank you all!
=== START OF INFORMATION SECTION ===
Device Model: ST20000NM007D-3DJ103
Firmware Version: SN01
User Capacity: 20,000,588,955,648 bytes [20.0 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
SATA Version is: SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
FS: ZFS
/media/disk017/bench# zfs get all -s local zp017
NAME   PROPERTY        VALUE           SOURCE
zp017  recordsize      128K            local
zp017  mountpoint      /media/disk017  local
zp017  compression     lz4             local
zp017  atime           off             local
zp017  xattr           sa              local
zp017  primarycache    metadata        local
zp017  secondarycache  metadata        local
zp017  sync            disabled        local
zp017  dnodesize       auto            local