Peeps, I just got an idea, and possibly a stupid question:
if RAM is scarce, and SSDs kinda are too, can’t a RAID of many HDDs just serve as RAM?
I know RAM is super fast, but still, if you don’t have a leg, a prosthetic will do.
Has anyone tried it, maybe to run those gigantic RAM-consuming AI models?
(If Storj can be mounted as a local disk, maybe RAM is the next step, just thinking out loud…)
“Grok code fast 1” said:
“No, even a high-performance RAID of HDDs can’t effectively replace RAM as a “prosthetic.” Virtual memory (swap space) on HDD RAID can extend RAM, but access speeds are ~100,000x slower (milliseconds vs. nanoseconds), causing constant thrashing and unworkable performance for memory-intensive tasks like LLMs.”
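That ~100,000x figure holds up as back-of-envelope arithmetic. A quick sketch, using assumed order-of-magnitude numbers (~100 ns for a DRAM access, ~10 ms for an HDD seek), not benchmarks:

```python
# Back-of-envelope latency comparison (assumed, order-of-magnitude numbers).
ram_access_ns = 100      # typical DRAM access: ~100 nanoseconds
hdd_seek_ms = 10         # typical HDD random seek: ~10 milliseconds

hdd_seek_ns = hdd_seek_ms * 1_000_000   # convert ms -> ns
slowdown = hdd_seek_ns / ram_access_ns

print(f"One HDD seek is roughly {slowdown:,.0f}x slower than one RAM access")
# -> roughly 100,000x, matching the quoted figure
```

And that’s per access; a thrashing workload does this millions of times.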
Nothing to do with Storj… and also nothing to do with RAM scarcity. The same factories as always are cranking out memory as-fast-as-they-can: and selling every byte.
Every day there’s more RAM in the world than the day before. It’s not scarce: it has simply been expensive lately.
Yup, a swap file is not a replacement for RAM. It is an emergency crutch for when you do run out of RAM, and it prevents the OS from crashing. It is not meant to be part of normal operation. A properly designed system will never use the swap space. Like it said, it is there for the very rare occasions when you run out of RAM, just to keep your system from failing.
There is a typo, I fixed it for you:
“Swap is there to prevent your system from crashing (by not letting it run out of memory, hence an OOM (Out-Of-Memory) error), and since it’s a lot slower than regular RAM, it’s not there to improve performance, since read/write access to swap is a lot slower.”
Edit: my lights just dimmed, since someone put an LLM to work on “the system doesn’t crash, ZoMg!!!111oneeleven the OS just starts killing random processes until enough memory is available again! You are so wrong on that!!!111oneeleven”. I know.
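For anyone who wants to see what their own box is actually doing with swap, Linux exposes all of this directly (standard procps/util-linux commands; paths assume Linux):

```shell
# Show total RAM and swap, and how much swap is actually in use
free -h

# List active swap devices/files, if any
swapon --show

# How eagerly the kernel swaps (0-200 on recent kernels; default is 60)
cat /proc/sys/vm/swappiness
```

If `free` shows swap in use on an idle desktop, that’s usually just cold, unused pages parked there, not a sign of trouble.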
So, to put words in your mouth… “disk is so slow that loading code from disk to RAM is a terrible idea”.
Except this is exactly what is done when you run a program. Even if you have no swap.
A long-running system will swap out some unused memory, and then that freed memory is available for real usage. This is what improves performance.
Swap over Storj… hmmm, I suppose it could work, but it would probably die in some horrible lock-up, like running lvm/raid on NFS shares does.
You totally can! … however using HDDs as virtual RAM is not a great idea.
If you had, say, an Optane drive, you could get some of the way there. Electronics Wizardry made a great video about it - and many other ways of extending RAM - just a few days ago. It’s here:
Optane is also expensive; the only really cheap parts are the 16GB M.2 modules, but in my opinion they are far too small to be usable for anything great. A few years ago, the 64 and 128GB modules were on fire sale. Those have all been sold.
No, that’s not what improves performance, that’s what keeps your process still running. Otherwise the OS would have to kill your program to make space for the new code you are trying to load into RAM. Since you opened that program yesterday and haven’t used it since, it’s the best candidate for “something that I can shove out of the way quickly”.
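That “opened yesterday, untouched since, first out the door” policy is essentially least-recently-used eviction. A toy sketch of the idea (my own simplified model, not the kernel’s actual page-reclaim logic):

```python
from collections import OrderedDict

class ToyPageCache:
    """Toy LRU model of which apps' pages stay in RAM and which get swapped out."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # app -> resident; least recently used first
        self.swapped_out = []

    def touch(self, app):
        if app in self.pages:
            self.pages.move_to_end(app)   # recently used: protected from eviction
        else:
            if len(self.pages) >= self.capacity:
                victim, _ = self.pages.popitem(last=False)  # evict least recent
                self.swapped_out.append(victim)
            self.pages[app] = True

cache = ToyPageCache(capacity=2)
cache.touch("editor")     # opened a while ago
cache.touch("browser")    # opened yesterday, never touched again
cache.touch("editor")     # still in active use
cache.touch("game")       # RAM is full: the idle app's pages go to swap
print(cache.swapped_out)  # -> ['browser']
```

The active app never notices; only the idle one pays the cost, and only if you ever go back to it.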
Please repeat the following until it’s crystal clear: Swapping out to optane/disk/raid/zfs-over-iscsi/nfs-over-avian-carriers will never, (repeat the following twice:) never improve performance. Every nanosecond you add to the latency of getting that data is extra latency you didn’t have before. Since your CPU is stuck there waiting for that data in order to proceed with processing, every nanosecond of delay = a CPU wasting time doing nothing.
Is it faster to get a small chunk of data from swap and keep the process running, or kill the process and restart it every time you run out of RAM?
That’s oversimplified to the point of being misleading. Effective memory management preserves some RAM for immediate use by active or new processes - which can mean sending inactive data to disk to keep that spare space free. That’s done so those active/new processes don’t block - and that’s where you get the performance gains. That’s where users feel it.
Sure, dragging memory back in… eventually… takes time. But by the time that happens you’ve reaped all the benefits of having more RAM available for processes actually doing something. The swapped app doesn’t get faster: but others do.
But I agree there’s nuance: you can’t survive on a tiny bit of memory and hope a massive swap file will save you: nobody will be happy.
What’s misleading about that specifically? That adding extra latency will never be faster than actual RAM?
You guys are arguing that not running out of gas in your car “improves performance”. No, it doesn’t; it just keeps your car going. Adding better/cleaner fuel “improves performance” (i.e. better efficiency/more power produced).
It’s because you’re talking about one specific swapped process/app and saying that “can never improve performance”. But systems are running hundreds/thousands of processes, and you can definitely improve the performance of the entire system (or at least the portions felt by users during interactive use) by prioritizing one task over another. Such as swapping an idle app so an active app isn’t stumbling over memory allocations.
Basically you’re looking at performance so incredibly narrowly - at a scope no regular user would - and telling them something “can never improve performance”. When… in regular use… in Windows/Mac/Linux, it absolutely does. Humans can feel it. Programs can measure it. Which is why every OS does it.
So you can’t buy a new phone or computer with AI integration, because the parts are going to the AI giants that sell that AI to the phone and PC buyers… who can’t buy the AI because there are no parts for their new devices.
Since you only need a gig of RAM to load a kernel + userland + a website, why don’t you run your computer on 1GB of RAM and a 24TB swap? Why stop there though? Just run 512MB of RAM and mount a Storj bucket on that puppy!
On second thought, why not just set up a RAID0 across multiple tape drives? Sure, the tape controllers are a bit expensive, but think of all the money you’ll save by using cheap TB tapes. You can even label them with sequential numbers, and program something to ask you for a specific tape. Quick! Get an LLM to work on writing that code for you!
You’ll never run out of memory ever again. You only need a few hundred MB to run the program you are now using. And if the driver for your wifi card ends up being swapped to the tapes, don’t worry, it will be loaded back into RAM, as soon as the program you are currently using wants to send/receive data on the network.
Infinite loop. Don’t you worry, all of your programs will continue running as fast, or even faster (since there will always be free RAM) than before.
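To put numbers on why this fails for the “gigantic RAM-consuming AI” use case from the original question: even granting the HDD RAID perfect sequential throughput (assumed, illustrative figures below, not benchmarks), just streaming one model’s weights through swap once takes minutes, and real swap access is random, not sequential:

```python
# Assumed, illustrative numbers -- not benchmarks.
model_size_gb = 70       # weights of a mid-sized LLM, in GB
hdd_raid_mb_s = 600      # optimistic sequential throughput of a 4-HDD RAID0

seconds_per_pass = (model_size_gb * 1000) / hdd_raid_mb_s
print(f"One full sequential pass over the weights: ~{seconds_per_pass:.0f} s")
# Inference touches (nearly) all weights for every generated token,
# so each token would cost on the order of minutes -- best case.
```

And that’s the best case; random 4KB swap reads on spinning rust would be orders of magnitude slower still.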