New decentralized storage competitor from Germany? Impossible Cloud

https://www.impossiblecloud.com/

Can anyone figure out what tech they are using? Is it based on Storj’s open source code?

It seems they use their own technology, but the nodes must be trusted.

Maybe this serves a bit as a wake-up call for Storj.
From what I read, their offering sounds better suited for the European and especially the German market: there is a localized website, GDPR compliance is stated, and working with certified datacenters might help to onboard customers.

And it does not sound like just ‘some’ internet project: they incorporated a startup and have already received millions in funding. They even want to go to the US market:

With its growing employee team, the company intends to incorporate in the United States, build an elastic network of enterprise-grade storage hubs, and expand the capabilities of its platform.

As they are working with datacenters, there is no way for a single SNO to store data for them. So the only hope is that Storj will be able to compete with them.
Or merge with them. It could be beneficial in many respects to have a non-US company managing the node side of the operation.

In my opinion Storj is far ahead of the competition; the demand for a new professional SNO tier would be a positive catalyst for growth.

Long-term SNOs with professional skills and high-quality nodes, offered as a higher tier.

If SNOs start to compete for a high tier, we will end up like Filecoin, with a hell of a lot of RAM and NVMe instead of HDDs.

If NVMe SSDs were slightly cheaper I wouldn’t use anything else… HDDs are ridiculously slow in the modern world.

I have one 900 GB SSD node. It is young, only started on June 3rd, and it holds just 175 GB.
I do not see any big difference or benefit, except that it draws 2.5 W instead of the 8-9 W of big HDDs. It also loses races. I made it as a test.

I wasn’t really thinking about Storj, just in general… solid state fails in a more predictable way, and it will get cheaper because chips can be manufactured more cheaply than HDDs.

I just don’t see HDDs actually being the choice in the near future, and if SSD per-GB prices were the same as HDDs’, would you really want an HDD…

I don’t think the wattage argument is very valid either; most of my NVMe SSDs use the same or more watts than a new HDD. The older SATA SSDs do use less…

And in the future I’m sure NVMe SSDs will require fewer watts per TB, as manufacturing shrinks the chip dies.

This is highly controversial, at least in my mind: HDDs are well understood and predictable. Magnetic recording has existed for decades and has not changed much. When they fail, they fail predictably.

SSDs are a whole other realm. Packing a constellation of symbols into every cell drastically reduces reliability and introduces non-obvious dependencies: every extra bit per cell doubles the number of voltage levels the cell has to distinguish, so going from SLC (2 levels) to QLC (16 levels) leaves far thinner margins. We started with SLC; now we are at what, QLC? More? Rewriting whole blocks does the same at a larger scale.

Drive manufacturers jumped on the same bandwagon (see SMR), and everyone avoids those drives like the plague, and not just because of performance: the HDD simplicity is gone, and writing new data can destroy existing, unrelated data. That is a horrible “feature” for a storage device to have, yet in the case of SSDs it was accepted as an inevitable evil from day one.

Anecdote: I have seen a lot of articles claiming that SSDs fail “gracefully”, reverting to a read-only mode. Well, apparently not always. Last week the SSD in my 2012 Mac Pro failed: it returned “io error” on reads of a myriad of unrelated files. Some of those files were written recently, some four years ago, but the machine was always on, so the cell refresh cycle should have caught it. And yet, here we are.
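
For what it’s worth, when an SSD starts throwing read errors like that, smartmontools can sometimes show whether the controller itself noticed anything. A minimal check, with /dev/sda as a placeholder device name:

```
# Full SMART report; attribute names vary by vendor (device name is a placeholder).
sudo smartctl -a /dev/sda

# Things worth looking at on a suspect SSD:
#   wear level / "Percentage Used"   - how much of the rated endurance is consumed
#   reallocated or retired blocks    - cells the controller has already given up on
#   uncorrectable / media errors     - reads the ECC could not fix
sudo smartctl -l error /dev/sda   # the drive's own error log
```

Though a controller failing this ungracefully may well report nothing useful here, which rather reinforces the point.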

Yes, SSDs have much (3-4 orders of magnitude) lower latency, but that’s pretty much it. In all other respects HDDs are preferable (they also don’t “wear out”).

So the furthest I would go is to let them handle the small random IO (special devices in ZFS, cache devices in other filesystems, etc.) but entrust the bulk of the data to linear magnetic recording.
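
Concretely, for the ZFS case that could look like the sketch below; the pool name “tank”, the device names, and the 32K cutoff are all just placeholder choices:

```
# Add a mirrored pair of SSDs as a special vdev for metadata (names are hypothetical).
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Also route small blocks (up to 32K) to the SSDs; larger, bulk data stays on the HDDs.
zfs set special_small_blocks=32K tank

# Alternatively, use an SSD as a plain read cache (L2ARC), which is safe to lose:
zpool add tank cache /dev/nvme2n1
```

The special vdev is mirrored on purpose: unlike a cache device, losing it loses the whole pool, which matters given the failure modes described above.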

One might argue that filesystems should be periodically scrubbed and repaired regardless, since any media degrades over time, so what’s the difference in how each device fails? I think SSDs add another layer of uncertainty and/or correlation in what is affected when they fail, and the assumptions ZFS makes about drive behavior should be reviewed and adapted to the schizophrenic nature of SSD failures before we can replace HDDs with them.
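
On the scrubbing point: a periodic scrub is cheap insurance either way. A sketch as a cron entry, again assuming a pool named “tank”:

```
# /etc/cron.d/zfs-scrub (hypothetical): scrub monthly, surface problems weekly.
0 3 1 * * root /sbin/zpool scrub tank
0 3 * * 0 root /sbin/zpool status -x tank
```

That catches silent media degradation, but it does not remove the correlated-failure uncertainty described above.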

But I’m not a mass storage engineer; it’s just what I have been thinking.
