![](https://programming.dev/pictrs/image/028151d2-3692-416d-a8eb-9d3d4cc18b41.png)
To be fair, there should be some heuristics to boost the priority of anything that has received input from the hardware (a button click, for example). The no-care-latency jobs can be delayed indefinitely.
Why can Windows do it when Linux can’t?
Windows lies to you. The only way they avoid this problem is by reserving some CPU bandwidth for the UI beforehand, which explains the 1-2% worse y-cruncher results on Windows.
I agree that UI should always take priority. I shouldn’t have to do anything to guarantee this.
I have a `HZ_1000`, tickless kernel with `nohz_full` set up. This all has a throughput/bandwidth cost (about 2%) in exchange for better responsiveness by default.
But this is not enough, because the short-burst UI tasks need near-zero wake-up latency… By the time the task scheduler has done its re-balancing, the UI task is already sleeping/halted again, and this cycle repeats. So nice levels/priorities don’t work very well for UI tasks. The only way a UI task can run immediately is if it can preempt something, or if the system has a somewhat idle CPU to put it on.
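For experimenting with this, one blunt workaround (my addition, not from the comment above) is to give a latency-sensitive task a real-time scheduling policy, so it preempts normal tasks the moment it wakes. `chrt` from util-linux does this; the task name below is hypothetical:

```shell
# List the scheduling policies and priority ranges this kernel supports
# (runs unprivileged):
chrt -m

# Hypothetical example: run a UI-ish task under SCHED_FIFO priority 10 so it
# preempts SCHED_OTHER tasks on wake-up (requires root / CAP_SYS_NICE):
# sudo chrt -f 10 ./some-ui-task
```

Note that a runaway SCHED_FIFO task can starve the whole system, which is why the kernel gates it behind a capability.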
The kernel doesn’t know any better which tasks are like this. The ongoing `EEVDF` and `sched_ext` scheduler projects attempt to improve the situation (`EEVDF` should allow specifying the desired latency, while `sched_ext` will likely allow tuning the latency automatically).
No, I definitely want it to use as many resources as it can get.
```
taskset -c 0 nice -n+5 bash -c 'while :; do :; done' &
taskset -c 0 nice -n+0 bash -c 'while :; do :; done'
```

Observe the CPU usage of the `nice +5` job: it’s ~1/10 of the `nice +0` job. End one of the tasks and the remaining one jumps back to 100%.
Nice’ing doesn’t limit the maximum allowed CPU bandwidth of a task; it only matters when there is contention for that bandwidth, like when running two tasks on the same CPU thread. To me, this sounds like exactly what you want: run at full tilt when there is no contention.
“The kernel runs out of time to solve the NP-complete scheduling problem in time.”
More responsiveness requires more context-switching, which then subtracts from the available total CPU bandwidth. There is a point where the task scheduler and CPUs get so overloaded that a non-RT kernel can no longer guarantee timed events.
So, web browsing is basically poison for the task scheduler under high load. Unless you reserve some CPU bandwidth (with cgroups, etc.) beforehand for the foreground task.
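As a sketch of that kind of reservation (my example, not the commenter’s setup; assumes cgroup v2 under systemd, whose `CPUWeight`/`CPUQuota` resource-control properties map to the cgroup `cpu.weight`/`cpu.max` knobs):

```shell
# Give the foreground app twice the default CPU weight (default is 100),
# so it wins contention against batch jobs but is never capped when idle:
systemd-run --user --scope -p CPUWeight=200 firefox

# Or hard-cap a background build to half of one CPU:
systemd-run --user --scope -p CPUQuota=50% make -j16
```

Weight-based control is usually preferable to a hard quota for this, since it only kicks in when there actually is contention.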
Since SMT threads also aren’t real cores (about 0.4-0.7 of an actual core), putting 16 tasks on a 16-thread/8-core machine is only going to slow down the execution of all the other tasks on the shared cores. I usually leave one CPU thread for “housekeeping” if I need to do something else. If I don’t, some random task is going to be very pleased by not having to share a core. That “spare” CPU thread will be running literally everything else, so it may get saturated by the kernel tasks alone.
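A minimal sketch of that “leave one thread for housekeeping” habit (my own example; assumes util-linux’s `taskset` and at least two CPU threads):

```shell
# Pin a heavy job to every CPU thread except 0, keeping thread 0 free
# for the kernel, interrupts, and whatever else needs to stay responsive:
nproc_total=$(nproc)
taskset -c 1-$((nproc_total - 1)) echo "job pinned to CPUs 1-$((nproc_total - 1))"

# Real use would look like: taskset -c 1-15 make -j15
```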
`nice +5` is more of a suggestion: “please run this task with worse latency on a contended CPU”.
(I think I should benchmark `make -j15` vs. `make -j16` to see what the difference is.)
I once helped a person with their computer. They complained that they couldn’t save their photos. Well, their OneDrive was filled to the brim with crap, while the local 1 TB disk was empty because they had zero idea how storage and folders work. I had to explain to her that there was literally 1000x more fast disk space available, so please don’t save into OneDrive.
Holy fuck I have not seen this fresh wtf garbage in ages. Thanks.
I once said that the current “AI” is just an Excel spreadsheet with a few billion rows, from which all of the answers get interpolated…
Especially now that they’ve started releasing tritiated water into the ocean for the next 20-50 years or whatever the fuck the plan is supposed to be.
The tritiated water is no more concentrated than what other power plants around the world release (the latter may be surprising to learn). In addition, tritium has a half-life of only 12.3 years and is diluted in a literal sea, which is an extremely good radiation shield.
So it is btrfs snapshot time again, making it a bootable backup before `pacman -Syu`?
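A minimal sketch of that routine (my own, hedged: it assumes `/` is a btrfs subvolume and a `/.snapshots` directory already exists; tools like `snapper` or `timeshift` automate this properly, and making the snapshot actually bootable needs something like grub-btrfs on top):

```shell
# Take a read-only snapshot of the root subvolume, then upgrade.
# If the upgrade breaks, the snapshot can be restored from a live USB.
sudo btrfs subvolume snapshot -r / "/.snapshots/pre-upgrade-$(date +%F)"
sudo pacman -Syu
```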
I remember only a single time when an Arch upgrade truly fucked up the system: `libreadline.so` was broken, so bash didn’t work. :D
I always have a second bootable system in case the main system is unable to boot… So I can at least troubleshoot the main system.
What pressuring? They’re scared shitless to the point of hallucinating threats, because they think we have a primeval grudge against them.
If they would magically co-operate with us and drop their shit, we would be more than happy to resume trade with them.
Our sorrow of the year is the death of Martti Ahtisaari, a Nobel Peace Prize laureate. May his legacy be honored; his wisdom would be in great need today.
Sauli Niinistö will soon peacefully leave us as president and join the same history books. Sad we can’t have another Kekkonen (depends on who you ask). I hope the next president will have a stone-cold head in this heated world. :)
Quantum computing is going to make it possible to solve problems that normal computers simply cannot.
Most of these are optimization problems like “compute the best solution to the traveling salesman problem” or “find a molecule that binds to this receptor”.
On normal computers, solving such problems “perfectly” takes an exponential amount of computing time vs. the size of the problem.
Quantum computers are going to chop down that exponential thing a little, so we can see the results before the sun burns out. The reason QCs are theoretically able to do this is that each added qubit improves the machine’s performance exponentially.
However, the qubit state is so fragile that we need hundreds of them to make a single “stable” logical qubit that can do operations repeatedly. What the quantum computer uses as a qubit (photons, superconducting wires) is irrelevant as long as the system can do useful work.
Because of the fragility, the results are gathered using thousands of runs on the quantum machine and measured statistically.
We are not quite there yet for solving any usefully sized problems.
Finland’s blank NATO papers were kept in a safe (30 years figuratively?) and as soon as the war(s) started to cause us harm, they were pulled out of the safe and ratified.
From the news at the time of NATO ratification: “Look in the mirror” - Sauli Niinistö
From the news of the last two weeks: Now the eastern border is pretty much closed for the foreseeable future.
My armchair stance: the Soviets angered nearly 4 million Finns in the 1940s, who had only pitchforks and cows, and the result was 126,875 dead and 188,671 wounded Soviets. [*](https://fi.wikipedia.org/wiki/Talvisota) Now there is a nation of 5,600,000+ grumpy Finns with access to modern weaponry and bitter memories of the past…
I don’t know what the Russian leaders are hallucinating, trying to anger us even more? :P
`nohz_full` confusingly also helps with power usage… If the CPU doesn’t have anything to run, there’s no point waking it up with a scheduler-tick IPI… but there’s also no point running the scheduler tick if a core is pegged by a single task. With `nohz_full` the kernel overhead basically ceases to exist for a task while it is running. (Though the overhead just moves to the non-`nohz_full` CPU cores.)
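To check whether a running kernel actually has this set up (my addition; the sysfs path is standard on kernels built with `CONFIG_NO_HZ_FULL`):

```shell
# Print the CPUs isolated by nohz_full, if the kernel supports it at all:
cat /sys/devices/system/cpu/nohz_full 2>/dev/null || echo "no nohz_full support"

# The boot-time parameters (look for nohz_full=... here):
cat /proc/cmdline
```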