Windows lovers: Don't mix win7 and win10 disks!

or bye bye filewalker!

It's the difference between finishing a filewalker run in 1 hour and in 6 days…

It turns out that it matters whether the drive was formatted under Windows 7 or under 10. (The disks are SATA Ultrastars, good and working.)

Supposedly both are NTFS, but it turns out they are NOT the same.
I realized that a drive formatted under Win7 is too slow under a Win10 OS, once it has been filled with data.

Is it possible to upgrade a full NTFS Win7 drive to the Windows 10 NTFS standard without moving the content?

I determined this:

Used with a Win10 OS:

  1. A 4TB drive formatted under Win7 cannot be tamed; no optimization helped. The speed of operations on small files is slow (on bare metal): ~330KB/s while counting the number of files (right click, then Properties in Windows); it looks like this: tens of files per second arrive on screen.

  2. The same disk, reformatted under Win10 and with the same files copied
    back onto it: now counts at 12MB/s (about 2 thousand files per second).

This is only counting under Windows (right click on the folder, then Properties),
but it turned out that programs (storagenode.exe) show the same slow performance on files from a disk formatted under Win7, when run on a computer with a Win10 OS. All this with GPT, obviously (not MBR).

Of course, this Win7-formatted 4TB drive, connected to a Win7 OS, works as fast as in point 2, and only under Win10 it no longer does! (But even there, on Win7, the disk performance can break, so Win10 or Server 2016/2019 are better.)
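A more reproducible way to compare the two disks than watching the Properties dialog is to time a full recursive file count. A minimal sketch, assuming example drive letters D: and E: (not from the post):

```shell
:: Time a full recursive file count on each disk, from a cmd prompt,
:: using PowerShell's Measure-Command. -Force includes hidden files,
:: roughly matching what the Properties dialog counts.
powershell -Command "Measure-Command { (Get-ChildItem -Path D:\ -Recurse -File -Force | Measure-Object).Count }"
powershell -Command "Measure-Command { (Get-ChildItem -Path E:\ -Recurse -File -Force | Measure-Object).Count }"
```

Comparing the elapsed times on the Win7-formatted and Win10-formatted disks gives a number instead of eyeballing the counter.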

Now I have 16TB drives, full of files; what to do now… One way is to robocopy to the other 16TB drive, which is already formatted under Win10. Robocopy crawls at 10MB/s.
That's about 2-3 weeks of copying.

Because robocopy from 4TB (Win7 formatted) to 16TB (Win10 formatted) was going at 10MB/s.
But when I reformatted this 4TB under Win10, robocopy from 16TB (Win10 formatted) to 4TB (Win10 formatted) goes at 30-50MB/s!
Same computer, same drives, same files.
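For reference, a migration copy of the kind described above might look like this. The drive letters and log path are examples, not from the post; the flags are standard robocopy options:

```shell
:: Mirror the source tree onto the freshly formatted disk.
:: /MIR   mirror the directory tree (copies and deletes to match)
:: /MT:8  copy with 8 threads, which helps a lot with many small files
:: /R:1 /W:1  one retry, one-second wait, so a bad file doesn't stall the run
:: /NP /LOG:...  no per-file progress on the console; log to a file instead
robocopy D:\storj E:\storj /MIR /MT:8 /R:1 /W:1 /NP /LOG:C:\robocopy.log
```

The /MT multithreading can noticeably raise small-file throughput compared to the default single-stream copy.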

Question: is it possible to somehow upgrade such a full disk to the Windows 10 NTFS standard, without moving the files around?

P.s.
And it's not about the "Enable Fast Startup" option.
The disks were scanned with chkdsk /x before the copy tests; no errors.
They were simply initialized under Win7 and now need to be used under Win10, and they are full, and what to do…

I did a few days of testing; I thought it was the VM's fault, the chipset, or the CPU, but no.
I formatted the disks on a service unit with a Win7 OS,
which I use to run hardware tests… I couldn't have known it would be the difference between finishing a filewalker in 1 hour and in 6 days…

Just a guess… Could this be because of different settings of the NtfsDisable8dot3NameCreation flag?
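That flag can be read (and set) with fsutil; a sketch, with an example drive letter:

```shell
:: Show the global 8dot3 short-name creation policy:
:: 0 = always create, 1 = never create, 2 = per volume, 3 = system volume only
fsutil 8dot3name query

:: Show the setting for one specific volume
fsutil 8dot3name query d:
```

Querying both disks would show whether the Win7-formatted and Win10-formatted volumes actually differ here.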

The answer is: because you copied to a second disk, you defragmented it at the same time. The next copy works much faster, because the disk head doesn't need to seek for pieces any more; it reads the files one after another.


Yes, I thought about that too. You need to migrate from 4TB to 16TB anyway, and as a bonus you get a free defrag. But one thing:
I still think that defragmentation won't help with what is most important: the speed of blob folder processing on an HDD started under Win7, if you want to run it under Win10. Because I observed under a Win7 OS that merely analyzing the disk unlocked the disk's desired speed, and no defragmentation was done at all. Just the quick (few minutes) "Analyze" in the built-in Windows Optimize Drives tool. And then the disk started to act fast. But that was only true for the 4TB disks; it wasn't working on the 16TB ones, I don't know why. Anyway, even where it worked for the 4TBs, the effect was gone after a restart, so I had to re-run the "Analyze" in the Windows defragmentation tool. I also optimized a 16TB drive's MFT with UltraDefrag 7.14, but that had no effect. Sometimes after a restart, even "Analyze" on the 4TB drives didn't work, so I pressed the second button, "Optimize", but that also gave no effect. You can imagine how confused I was.

So I'm saying: after I backed up all the 4TB files onto a 16TB, even if I just Shift+Deleted all the files from the 4TB instead of formatting, that alone would not bring back the speed; I had to format the disk under Win10 so it behaves.

@pangolin possibly; if all I need is to disable this, then you've won a cookie.
I would like someone to confirm what exactly to do in this situation; I'm not feeling strong on my own here. Is it safe for existing Storj files etc.?

EDIT:
Hmm, I don't know; I just did a test:

So: 4TB disk (Win10 format, 8dot3 disabled) to 16TB disk (Win10 format, 8dot3 enabled): robocopy writes at 50-90MB/s,
reads from the source stable at avg. 65MB/s.

So it looks like it doesn't matter that it has 8dot3 creation enabled. Let's disable that anyway:

Disabled it on d:,
and robocopy starts again (/MIR o:\storj d:\test3) and again writes at 45-90MB/s with reads stable at 65MB/s; it seems not to matter whether 8dot3 is disabled.
There must be more to NTFS formatting under 10 than that.
Oh God, I really hoped there could be some in-place rebuild, without backing up and formatting everything…

If you run "Analyze", it caches the MFT in the process for some time, so it works faster.


My MFT for a 16TB disk is 66GB; no wonder it could not cache it.
But for the 4TB disks it's also some 10GB, and there's no free RAM to cache that either.
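The MFT size of a volume can be checked with fsutil; a sketch, assuming an example drive letter:

```shell
:: Dump NTFS metadata for a volume. The "Mft Valid Data Length" line
:: shows (roughly) how large the MFT is, which is what would need to
:: fit in RAM for metadata operations to be fast.
fsutil fsinfo ntfsinfo d:
```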

This would work in the reverse direction too. You could format it under Win7, move the files there, and you would get a speedup too, maybe even better.
Because you basically did a full defragmentation when you copied/moved the files to an empty disk.
The only big difference between these OSes is their storage defaults. Win10 will likely enable deduplication and encryption features by default (not sure about this though; perhaps it can be changed in your version of Win10), and it also disables/enables 8dot3 as mentioned by @pangolin, things which were absent or complicated to enable/disable under Win7.

Hello Alex.

Yes, but it looks to me like the benefit was not all because of that. I don't have evidence for that, because I would need to repeat the operation, but this time not formatting the 4TB after backing up its files, but Shift+Deleting the whole disk instead, which would take some time; then robocopy onto the empty but unformatted disk, to see the isolated effect of just the defragmented file layout. I have another 4TB disk like that, so I can technically do it for science.

Now I discovered that on the 2nd 16TB disk, full of Storj, disabling the creation of 8dot3 names actually unlocked the robocopy write speed by like 5 times!:

Oh snap, and stripping the existing Storj files does the same for reads! Whoaaa:

Dear Diary: Details of unlocking write/read speed for a 16TB disk

Day 1710 of being a Windows SNO:

robocopy from 4TB (Win10 formatted) to 1_3_16TB (Win7 formatted, full of Storj and slow, with 8dot3names enabled)
is stuttering, because of the max write speed of 1_3_16TB, I see.
The read is at a full 50-60MB/s and stops to wait for the write,
which Task Manager shows me is at max 13-15MB/s.
Oops, after a minute the write speed suddenly unlocked and is now 14-108MB/s and not stuttering; it spikes up and down, but the flow is constant.
I wonder if the read speed also unlocked? When I checked, it was slow on Storj's files on 1_3_16TB.
Let's see.
Yeap, reads from 1_3_16TB are very slow.
Let's try a robocopy from it to the now-fast 4TB.

Yeah, the read is at 6-9MB/s, and so is the write.
Let's run "fsutil 8dot3name set g: 1" now (so, disable creation) and see.
Quite OMG: the 4TB now writes whatever 1_3_16TB is able to throw at it, at an avg. of 65MB/s; the write is spiky, 24-100MB/s, but constant, so it keeps up!

So disabling the creation of 8dot3 names worked to unlock the write speed for new files, under a Win10 OS, on a disk that was formatted under Win7!

It probably did not help the read speed of what was already written; let's see:

Yeah, it did not. But we can try stripping the folder we are reading, and see:

“fsutil 8dot3name strip /s /v G:\Storj\blobs\qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa”

And it's stripping. I'm not sure about the speed; Task Manager shows KB/s, but that's not a file copying speed, so I can't really compare. The output in the CMD window moves as fast as the files did under robocopy, so maybe it's fast enough that I can pause it for a moment and check whether reading the folder under Windows Properties has changed for the better. Let's wait and see.

After 30 minutes it got through 1/4 of the subfolders in that blob; I just can't wait to check.
A "br" folder, Windows says, contains 154MB, and the strip went through it in ~4s (I counted in my head), so maybe the real speed is around 38MB/s; that would be not so bad.

Let's pause it then and see.
OMG yes, it counts files really fast now, up to the point where the folder is stripped; past that it counts slowly again. So yeah, stripping 8dot3 names works here on 1_3_16TB to unlock the read speed for small Storj files!

So I guess I don't have to format these disks, just strip them!

And this disk is not defragmented; it's fragmented like crazy, rather. And yet here the read speed is great!
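To summarize, the sequence that worked here, assuming the Storj volume is G: as in the commands above, is a sketch like this:

```shell
:: 1. Stop creating 8dot3 short names for new files on this volume
::    (1 = disable creation on g:)
fsutil 8dot3name set g: 1

:: 2. Remove the short names that already exist; /s recurses into
::    subdirectories, /v prints details. This does not rename the files
::    themselves, it only drops the extra 8dot3 directory entries.
fsutil 8dot3name strip /s /v G:\Storj\blobs
```

As far as I know, strip checks the registry for references to short names and skips files whose short names appear there, so check its summary output when it finishes.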

Please do it; then we would have proof that Win7 :heart_eyes: is better.

It was great in its time. It really was. Along with XP, it ranks among my top versions of Windows.
But it has died.
It is dead.
Let. It. Go. :smile:


I'm joking. I prefer to use Linux in my daily work. However, this exact server does not want to work under Linux properly, so I am forced to use Windows.

But anyway, the platform doesn't matter too much, if you can keep it running.

Though stripping the Storj folder worked for the speed-up, I don't know why the slow performance is back after the disk is disconnected and reconnected, or at a restart. This does not happen with a disk formatted under Windows 10; those remember to stay fast with small files under a Win10 OS. So I'm back at the beginning.

really?

only format to make it permanent?

Do you suggest we test something which is not even in beta?
Disclaimer: we do not test filesystems, especially the exotic ones (less than 1% of world usage) and especially any network ones (and we have no plans to test them, sorry!).

You're testing without a filesystem? Now all those bugs make sense. :sweat_smile:

Pangolin, no? Some misunderstanding: OS is short for "operating system".
Alex, no, I'm looking for help! Maybe someone knows; I want to avoid formatting 10 x 16TB disks, maybe there is a command-line way… Not sure Windows is exotic.

Besides that: it looks like the new major traffic pattern, with the majority of files having a 30-day TTL, helps here. Because I noticed that new files written after disabling 8dot3 are fast to read from now on, so my hope is the filewalker will be fast too. It seems to be a problem only with the existing Storj files, which Windows somehow doesn't want to make fully fast even after stripping, because any dismount cancels all the effort; but new files seem to keep the performance and don't fear any restart. So after 30 days most files on the disk will have been replaced, without 8dot3 names, and should be fast. Maybe I don't need formatting, just waiting.

Maybe the 4TB's MFT was fragmented? The Windows tools do not defragment the MFT; only UltraDefrag does.


No, I mean that we usually do not test filesystems; we test our code. And I think the builders are working on Linux hosts, so it would be hard to test particular native code with a native filesystem.

I can only recommend performing a defragmentation. All other methods would require moving the data off the disk and putting it back (which is a defragmentation too, in the end).
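The built-in defragmenter can also be driven from the command line; a sketch, with an example drive letter, run from an elevated prompt:

```shell
:: Classic full defragmentation with verbose statistics
:: (/D = traditional defrag, /V = verbose output)
defrag d: /D /V

:: Or let Windows choose the right optimization for the media type
defrag d: /O /V
```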

I did try 5 different defragmentation programs.
I defragmented the whole storj/blobs/qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa folder, 150GB in size.
And despite that, the folder behaves slowly.
So defragmentation does not help.
Also the whole disk had a complete MFT optimization by UltraDefrag; that did not help either.
Once again: the "qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa" folder got 2.5h of 8dot3name stripping, and that gave a read speed boost before the defragmentation, but the effect was lost to a disk disconnection (e.g. a restart) and has not come back since. So defragmentation gives little in terms of small-file reads. I see there's a lot more to the fsutil commands, but that is a task for a professional service man with experience in the field. I'm out; I'm going to try asking some local technicians tomorrow what to do.

Also, I robocopy /MIR'd from that folder to a Win10-formatted disk and back, and guess what: it's 20% fragmented (~2000 files out of ~10000), but the read speed is great!
For me this means it's a matter of some per-file rules, or how the files are linked (I don't know; hard links, or other details from the depths of NTFS), in how Windows sees the files; probably because new files are now written without this BS 8dot3 name. That's why I praise TTL; we shall see after 30 days whether my nodes are suddenly faster in small-file operations!

This is exactly what is needed: a full defragmentation. All defrag programs try to achieve the same thing in place, not always successfully, but at least they should defragment the free space (consolidating used clusters so the free space becomes contiguous).