Providing options as an argument vs editing a config file

The downside is complexity. I don’t need to push or pull files for a 2 line change.
Usually I just open the file (in nano :)), change it, save it. Most of the time this works; when it doesn’t, I quickly undo the last change and try to figure out why it didn’t work. If I need to do a bigger edit that may not be as fast to undo, or I am not sure if the change is going to work, then I either make a backup or a test copy.

But with source control I probably would have to do something like this: edit the file on one server, push the change to the git server (also writing a comment on how I added a semicolon), then pull it on the “use” server that I need. If I have two “use” servers and the files in both need to be slightly different, I get even more complexity.
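Roughly, the round trip I imagine would look something like this (the file name, remote and branch are made up for the example):

# on the server where I made the edit
nano backup.conf                  # the 2-line change
git commit -am "Add missing semicolon"
git push origin main

# on the "use" server that actually needs the change
git pull origin main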

And honestly, if there was a situation where my change broke something, but it only became apparent a month later, I could probably debug the breakage faster (treating it as some new bug) than I could by going through the change history trying to figure out if that change was the one which broke it. Either I still remember the change, or if I don’t, then just debugging it would be faster.

This has not happened to me. My stuff either crashes immediately, works OK, or seems to work OK but I get calls within about 15 minutes that it actually doesn’t.
Probably the longest time I have to wait for answers is when I am tweaking a backup script. I change it, run it, but it may run for a few hours or longer.

My stuff is very simple. One of the more complicated things I did was modifying the code of a Storj v2 node so it would use files for storage instead of some kind of a database (which was really slow once it got big). The complexity was because it was a language I did not know (node.js) and the code was split into multiple files. Still: do the change, fire up a test environment, try to upload a file, download a file, delete a file, and if it works, update the node and replace the 2-3 files I edited. It actually worked quite well.

That’s cool, but still no scrollbar to quickly go back up 1-10k lines (I don’t know how many in advance). From what I see in the screenshots, tmux is usually used to display multiple windows side by side, though I guess it probably can be used to display one window/tab at a time with some kind of title. It may be useful if I cannot easily use my GUI client to connect to that server. Still, a GUI client has tabs, windows that can overlap (this is quite important to me) and a scrollbar :slight_smile:

I like using the mouse. I guess I got used to it, since pretty much the whole time I have used a PC it was running Windows and I used DOS only to play some games, but even then, Norton Commander supported the mouse :slight_smile:

I don’t know, but something not being “a thing” or being “unsupported” does not stop me from using it. There is an alternative that runs on Linux, though I do not remember what it’s called as I don’t use it on Linux. I can use bash or php there.

EDIT: GUI ssh vs CLI ssh. I use CLI ssh a lot - in scripts, connecting from one server to another. It is useful because I can do zfs send | ssh x zfs recv. However, I do not really need that in the initial connection from my PC to the server. The most I need is to upload or download files and some ssh clients also have scp or zmodem support for that.
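For reference, the zfs pipeline I mean looks roughly like this (pool, dataset and host names are just examples):

# snapshot the dataset, then stream it over ssh to the other machine
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | ssh backupserver zfs recv backuppool/data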

git commit -a --amend
git push --force-with-lease

or use Gerrit:

I usually use Visual Studio Code, because you can connect it to any server via ssh and edit remote files in comfort (syntax highlighting, suggestions, linters, etc.), you can connect it to a docker container or a pod in a k8s cluster, to wsl2, and so on, and it has a lot of extensions and integrations, including GUI ways to use source control systems. And of course you can open a terminal right there.
By the way, you can open VS Code on GitHub in your browser - just press . (dot)
I use vim too, but much less often.

No servers involved. Git is local. You just do what you do, and commit. Locally.

It’s also a good idea to document what you’ve changed, so you already have the text for the commit comment ready anyway. Using git therefore costs you nothing.

It’s just a matter of policy: if you always do it this way, no extra thinking is involved.
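A minimal local-only sketch (the file name and commit messages are just examples):

# one-time setup in the directory with the scripts
git init
git add backup.sh
git commit -m "Initial working version"

# after every edit, the note you would write anyway becomes the commit message
git commit -am "Skip snapshots older than 30 days"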

And tracking down a regression is always much faster than debugging from scratch: just binary-search through the commit history. It can even be done automatically.
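The binary search is built into git; roughly like this (the known-good tag is just an example):

git bisect start
git bisect bad                 # the version I have now is broken
git bisect good v1.0           # this older commit/tag was known to work
# git checks out a commit halfway in between; test it and answer
# "git bisect good" or "git bisect bad" until the first bad commit is found
git bisect reset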

It’s a slippery slope. At some point, maintaining the old stuff and working around its limitations outweighs the cost of migrating to new stuff. Delphi is dead for all intents and purposes. Clinging to it is a classic sunk cost fallacy. There are modern tools that solve modern problems much better and more efficiently.

Why would you choose to use Norton Commander and then use the mouse there? Explorer is optimized for the mouse; console tools are optimized for keystrokes.

I used Dos Navigator, and then FAR manager. It’s the best thing ever. All to get away from the need to use the mouse, from day one. :)

I think FAR manager is still being developed; the last time I checked was a few years ago.

Dos navigator actually had the best Tetris implementation I have ever seen! Oh the amount of time that was sunk there… good times.

Edit: yay!!! It’s still alive! https://www.farmanager.com/

Interesting. The only time I have seen git used was with some central server. But then again, it was used with a big program and not a 100 line script.

I think you are a bit overestimating the complexity of what I do :). I probably approach writing a script the way I approach repairing a tape deck. After I do the repairs, I check if it works properly and that’s it. If it fails some time in the future, it may not be because of my previous repairs, but because something else failed, so instead of retracing my steps, I just find the source of the problem and fix it.
I don’t remember ever having to undo a change that I made more than a few days earlier. I do it, and the customer either becomes happy (problem fixed, yay), still complains (not fixed yet) or starts complaining about something new (I probably broke it, or maybe the problem was always there and the previous problem was just masking it).

How can it be done automatically? I mean, OK, let’s say I have all the previous versions saved and have comments on what was changed. I still would have to try each one to see if it works. Though, I guess, in my case it would be 2 versions saved if I saved only those versions that actually worked (at least initially).

Can I make a working 32-bit Windows program with it? Yes.
It has been a while since I used it, because now I just use bash to do the same thing and do not need the program to run on Windows or have a GUI. But if I needed a program that ran on Windows or had a GUI, well, I would use it, I guess.

Just like I can call people and write text messages using a 15 year old phone.

Because Windows does not start for some reason or because I want to play a DOS game.
I never knew about Dos Navigator or FAR manager. At the time I did not have an internet connection and used to think that “microsoft.com” written in a magazine was some kind of program, because it had the “.com” extension.
I didn’t always use the mouse with NC, but sometimes I did. The dialogs (format a floppy etc) were similar to comparable Windows dialogs and were quite OK to use with the mouse.


Via script, bisecting revisions to find at which revision it works vs. not. I do that quite often; granted, you would need to be able to automate the “does it work” checks. (I understand you are not doing anything complex with this, but I still like overkill solutions.)
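Something like this, assuming a small made-up check.sh that exits 0 when things work and non-zero when they don’t:

git bisect start HEAD v1.0     # bad revision first, then a known-good one
git bisect run ./check.sh      # git walks the history and reports the first failing commit
git bisect reset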

Can you still run 32-bit apps on modern Windows?? I seriously don’t know, but if so – I’m surprised and kudos to Microsoft for maintaining backwards compatibility for so long!

I like that approach. I do the same with everything but the phone, ironically. My car is 12 years old, the display – 10 years old, etc., but the phone and the laptop I upgrade every year, like clockwork. Apple stuff has such good resale value that I effectively “rent” the equipment at a fraction of the cost.

DOSBox?

I actually don’t remember having a mouse in DOS times. Maybe I’m misremembering. On 286 and 386 hardware it was just DOS, and precious RAM would not be wasted on some unnecessary mouse driver.

It’s more fun on old hardware, but I was mainly talking about the time when Windows 95 was the latest OS. Some DOS games had mouse support and when the driver was loaded, I could use the mouse with NC as well.
Initially I did not have an internet connection, and when I got one later, I still could not effectively look up information, because my English skills were not that great.

Most of the time I can’t do it, or writing such a test would take much longer than writing the script in the first place.
I know that the way I do things would be really bad with anything complex. However, for simple things, it is much faster (for me). It’s the same as how digging a 1 m wide and 1 m deep hole with a shovel is faster (and cheaper) than getting an excavator.

Yes; you can’t run 16-bit programs on 64-bit Windows, but 32-bit stuff works. This is the reason why Windows has “Program Files” and “Program Files (x86)” and so on. One of the good things about Windows is the backward compatibility.

I tried a touchscreen phone and did not like it. I also sometimes have to use an Android phone or an Android TV (or STB) (because of an IPTV app) and usually when I need to do so, I don’t like it.
Car - 40 years old, I also use tape decks (R2R and cassette) a lot, both for playing music and recording new tapes and I also use VCRs, though not as much as I used to. My normal size (17") laptop is also something like 6-7 years old, but still good enough, except for playing new games. I also have a few mini laptops (6", 8") that are newer.


6" laptops :rofl:
How can you work on that? I’ve seen someone struggling with an Excel table on a 13", but 6"? WOW!

Easy. 10-11 years ago I used a pocket PC (3.5", iPAQ 1940, resistive screen and stylus, you know) to fix my server at work while I was on the beach (GPRS via Bluetooth).
My friends took a photo of me while I was working like that…

And some people claim to be able to work using a phone.

The mini laptop has a great feature in that it fits in a pocket, so I can carry it anywhere. Unlike a phone, it has a normal Linux OS (Kubuntu in my case) and a somewhat usable keypad. It also has a serial port (useful to connect to the console of some switch) and an ethernet port.

Would I write a program or even a 100-line script with it? Not if I can avoid it. But it is useful for some things, especially if there is no table at that location. Using a big laptop by holding it in one hand and typing (or using the touchpad) with the other is much worse than holding the small laptop with both hands and typing with my thumbs.

A long time ago I had a Psion 5 PDA that I used a lot. The mini laptops are basically the same thing, just with modern insides.

LOL! What a nice little thing! It has a touchpad too. I think CLI Linux makes sense, but not Windows, with all those windows :wink:
Maybe if you shrink the interface to 25-50%, but the visibility is limited by the resolution.

I use Kubuntu, with the GUI. Linux with just CLI is not very useful to me since I tend to have multiple windows/sessions open and it does not seem that something like tmux allows the windows to overlap.

It’s not something I would want to use the whole day, but for a short time it’s convenient as I don’t have to carry the big laptop. For example - replace a broken SSD in a server, then use this PC to connect to that server and run the zpool replace command.

The resolution is quite good actually. Unlike some of the other small laptops I have, here the DPI is not too high. With high-DPI displays, if something does not work with UI scaling (say, a Windows program running under Wine), then I have to look at the display from very close up.