History has it that after 2000 years, they were still arguing… with no victory in sight for any of them.
You are using plural here. Given your next statement:
…which other file systems by your definition are also very good?
Old: EXT4, XFS, NTFS
New: ZFS, btrfs
Simplify mode OFF.
Out of that bunch, only NTFS is old, the others are roughly the same age.
I think he was trying to say:
Old = Classic, Traditional
New = Advanced, Modern
Still haven’t received an answer on how to convert hashstore to memtable.
P.S.: btrfs is not compatible with reliability :)
Nope. Don’t buy this just for Storj. Buy it for your other workloads and users, and then Storj can run seamlessly.
I don’t understand this desire to optimize for crappy setups. Why?
Obviously, never buy anything made for consumers. Don’t even accept it as a gift. Not SSDs, not HDDs, not motherboards, not processors, or SATA cables, or power supplies. Never. Of course, everything made for consumers is a heap of overpriced garbage, made from parts that failed reliability requirements for proper qualification. Are you aware of binning? Where do you think the lowest bins of anything go?
This is excessive even for a consumer device. Check what sector size you used, because shitty consumer SSDs report a 512-byte block size while they are actually 4K; this results in massive write amplification.
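If you want to check this on Linux, here’s a minimal sketch that compares the logical sector size the drive advertises with its physical one via sysfs (“sda” is a placeholder device name):

```python
# Minimal sketch, Linux-only: a 512-byte logical size on a 4K-physical drive
# ("512e") means sub-4K writes become read-modify-write cycles, i.e. write
# amplification. "sda" below is a placeholder device name.
# Note: drives that also lie about physical_block_size won't be caught this way.
from pathlib import Path

def sector_sizes(dev: str = "sda") -> tuple[int, int]:
    q = Path("/sys/block") / dev / "queue"
    logical = int((q / "logical_block_size").read_text())
    physical = int((q / "physical_block_size").read_text())
    return logical, physical

logical, physical = sector_sizes("sda")
print(f"logical={logical} physical={physical}")
if logical < physical:
    print("512e drive: align partitions and use a 4K filesystem block size "
          "(e.g. mkfs.ext4 -b 4096) to avoid read-modify-write cycles.")
```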
I understand. It was optimized for low-TTL data, according to Storj, specifically for Select, where people don’t run it on potatoes, so reliability concerns are reduced. This, however, is a scenario not applicable to the public network, where people insist on running on whatever they found in the garage.
“This requires knowing what you are doing, and hence it’s worse” is not a valid argument on a technical forum.
These ones:
I can’t find the source right away, but there are interviews floating around the web from conferences where Theodore Ts’o (principal developer of ext4) stated that although ext4 has improved features, it is not a major advance; it uses old technology and is a stop-gap. Ts’o said that Btrfs is the better direction because “it offers improvements in scalability, reliability, and ease of management”. Btrfs also has “a number of the same design ideas that reiser3/4 had”. In 2020, Btrfs was selected as the default file system for Fedora 33 for desktop variants.
I found this tangential source here: Panelists ponder the kernel at Linux Collaboration Summit - Ars Technica
Anyway, ext4 is ancient history; there is no point in adjusting software for ill-fitting old filesystems when alternatives are readily available.
I must be lucky.
I have like 10 nodes, all on EXT4, all on spinning rust, all of them on piecestore; the only non-standard thing I did was move the DBs to SSD.
I just leave them to do what they want to do. All of them have success rates above 98%.
I have over 80 nodes now; the file systems in use are NTFS, EXT4, and ZFS. So far I am running only 3 hashstore nodes for testing, one per file system. None of my filestore nodes are having file-system-related problems.
I will not convert all nodes to hashstore, and I will not start new nodes with hashstore until we have the repair tool. I don’t want to lose entire nodes because of a single file error.
Cruelly, btrfs is hot garbage for Storj. Or at least the worst of the three I’ve tried (ZFS with cache, ext4, and btrfs).
In appreciation of Zeebo’s work.
I also compensate: instead of buying the latest and greatest cell phones and such, I just run the latest and greatest software there is.
EXT4 is a file system that works on almost anything, works well on low-end systems, and is safe and reliable.
ZFS is an enterprise-grade file system that can be tuned for many different workloads, built for reliability, integrity, performance, and scalability, and it is still evolving. Yes, it requires more resources to perform, but your data is a lot safer.
Loss of speed from fragmentation has to be one of the oldest fallacies in the industry. Defragmenting a 10 MB voice-coil hard drive made SFA difference on an XT, other than wasting a day or two watching it defragment.
Anyone running nodes with hashstore and piecestore?
Do you see any difference in last month’s stored average?
I see very low performance from hashstore.
A node with hashstore and 8 GB of RAM gained 200 GB last month, but the piecestore nodes with 18 GB of RAM gained 700 GB.
Usually, RAM wasn’t such a drawback with piecestore, so I can only blame hashstore. I compared nodes of similar age, so the deletes are similar too.
I have the same results: two computers, one with high hashstore volume and the other almost 100% piecestore. The second computer gained storage, but the hashstore rig lost storage. I can confirm the data with 100% certainty for the last 3 days (last month’s data on my side is not reliable because I was migrating nodes).
Is there any possibility of turning off hashstore and migrating back to piecestore? I switched off migrate_chore two months ago, after my first node got corrupted by a single power failure, and I’m praying for the old data in hashstore to slowly get deleted.
I currently have 2 hashstore nodes and they gained 770 and 690 GB in stored space last month. I forget when I updated them, but they’re both on the latest version with the updated trash defaults.
Our beloved Denmark gained 16 TB from May 1 to June 8. With 104 nodes, that’s around 150 GB per node in a bit over a month, or about 120 GB a month per node. He is full hashstore with over 600 TB.
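For reference, a quick sanity check of that arithmetic (a sketch; I’m taking the span as the 38 days from May 1 through June 8):

```python
# Back-of-the-envelope check of the per-node numbers above.
total_gb = 16_000   # 16 TB gained across the fleet
nodes = 104
days = 38           # May 1 through June 8

per_node = total_gb / nodes        # ~154 GB per node over the span
per_month = per_node * 30 / days   # ~121 GB per node per month
print(f"{per_node:.0f} GB per node, ~{per_month:.0f} GB/month")
```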
Gaining or losing data has nothing to do with the way data is stored and everything to do with performance and uptime of the node.
Yeah, you’re right. I just want to say that hashstore is either not performant or has serious problems.
How did you arrive at this conclusion?
Maybe I’m wrong.
My second rig is running piecestore and gained about 120 GB in the last 3 days, but my first rig lost 1 TB. The first rig is running with 100 TB of hashstore in total, and the second rig has only 7 TB of hashstore.
It’s very possible I’m wrong, because the first rig has old nodes (from 2022-2023) and the second rig’s nodes are from 2025.