Anki for vocab
Textbook for grammar
Immersion for everything else
Also, input and output are two different skills.
Found the spreadsheet https://goo.gl/z8nt3A
And the source: https://www.hardwareluxx.de/community/threads/die-sparsamsten-systeme-30w-idle.1007101/
Still, you can calculate how much you would save from a 2 W power reduction before selling this one and buying a different NAS.
You can reduce the disk spin-down timeout after the last access to 5-15 min for better power saving.
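As a quick back-of-the-envelope sketch of that 2 W savings calculation (the electricity price below is an assumption; plug in your own tariff):

```python
# Yearly savings from a 2 W reduction in idle power draw.
watts_saved = 2.0
hours_per_year = 24 * 365
kwh_saved = watts_saved * hours_per_year / 1000  # ~17.5 kWh/year

price_per_kwh = 0.40  # EUR/kWh, assumed tariff -- use your own
savings = kwh_saved * price_per_kwh

print(f"{kwh_saved:.1f} kWh/year -> {savings:.2f} EUR/year")
```

A few euros a year, so it rarely pays for swapping hardware on its own.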
Maybe you are looking at the wrong thing. The idle states of the CPU and motherboard controllers matter more than spun-down HDDs.
I saw a spreadsheet somewhere listing many CPU + motherboard combinations with their idle power consumption, for ultra-low-energy NAS optimisation.
A modem translates fiber / DSL signals into twisted-pair (Ethernet).
An access point translates twisted pair into WiFi.
I think you are looking for an all-in-one router.
For AI/ML workloads, VRAM is king.
As you are starting out, something older with lots of VRAM would serve you better than something faster with less VRAM at the same price.
The 4060 Ti is a good baseline to compare against, as it has a 16 GB variant.
The “minimum” VRAM for ML is around 10 GB, and the more the better. Less VRAM can be usable, but with sacrifices in speed and quality.
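To see why ~10 GB is a reasonable floor, here is a rough rule-of-thumb sketch for how much VRAM a model needs; the 20% overhead factor (KV cache, activations) is a loose assumption, not a precise figure:

```python
# Rough VRAM estimate: parameter count x bytes per parameter,
# plus an assumed ~20% overhead for KV cache and activations.
def vram_needed_gb(params_billion: float, bytes_per_param: float,
                   overhead: float = 1.2) -> float:
    return params_billion * bytes_per_param * overhead

# A 7B-parameter model as an example:
fp16 = vram_needed_gb(7, 2.0)  # ~17 GB: tight even on a 16 GB card
q4 = vram_needed_gb(7, 0.5)    # ~4 GB: fits easily in 10 GB

print(f"fp16: {fp16:.1f} GB, 4-bit quantized: {q4:.1f} GB")
```

This is why quantized models are so popular on consumer cards: dropping from fp16 to 4-bit cuts the footprint roughly fourfold.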
If you still like that stuff in a couple of months, you could sell the GPU you bought and swap it for a 4090.
For AMD, support is confusing: there is no official ROCm support for mid-range GPUs on Linux, but some people report that it works anyway.
There is the new ZLUDA, which enables running CUDA workloads on ROCm:
https://www.xda-developers.com/nvidia-cuda-amd-zluda/
I don’t have enough info to recommend AMD cards.
A fellow had just been hired as the new CEO of a large high tech corporation. The CEO who was stepping down met with him privately and presented him with three numbered envelopes. “Open these if you run up against a problem you don’t think you can solve,” he said.
Well, things went along pretty smoothly, but six months later, sales took a downturn and he was really catching a lot of heat. About at his wit’s end, he remembered the envelopes. He went to his drawer and took out the first envelope. The message read, “Blame your predecessor.”
The new CEO called a press conference and tactfully laid the blame at the feet of the previous CEO. Satisfied with his comments, the press (and Wall Street) responded positively; sales began to pick up, and the problem was soon behind him.
About a year later, the company was again experiencing a slight dip in sales, combined with serious product problems. Having learned from his previous experience, the CEO quickly opened the second envelope. The message read, “Reorganize.” This he did, and the company quickly rebounded.
After several consecutive profitable quarters, the company once again fell on difficult times. The CEO went to his office, closed the door and opened the third envelope.
The message said, “Prepare three envelopes.”
Stolen from reddit
Lemmy.world made an announcement a while ago that they won’t allow creating content over VPNs and Tor, due to the ongoing CSAM spam.
https://rocm.docs.amd.com/projects/radeon/en/latest/docs/compatibility.html
https://rocblas.readthedocs.io/en/rocm-6.0.0/about/compatibility/linux-support.html
Yes, on four consumer-grade cards.
If I want a mid-range GPU with compute on Linux, my only option is NVIDIA.
There is still no official ROCm support for those cards on Linux, but this is still good to hear.
When AI and data center hardware stops being profitable.
I think compute per watt and idle power consumption matter more than raw maximum compute power.
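To illustrate why idle draw and efficiency dominate for a box that only works a couple of hours a day, here is a sketch with made-up numbers (all wattages and usage hours below are illustrative assumptions, not measurements of real cards):

```python
# Compare yearly energy use of two hypothetical GPUs in a machine
# that runs at load only a few hours per day. All numbers are made up.
def yearly_kwh(idle_w: float, load_w: float, load_hours_per_day: float) -> float:
    idle_hours = 24 - load_hours_per_day
    return (idle_w * idle_hours + load_w * load_hours_per_day) * 365 / 1000

# Fast but power-hungry card vs slower, efficient card (assumed figures):
big = yearly_kwh(idle_w=30, load_w=350, load_hours_per_day=2)
efficient = yearly_kwh(idle_w=8, load_w=180, load_hours_per_day=2)

print(f"hungry card: {big:.0f} kWh/yr, efficient card: {efficient:.0f} kWh/yr")
```

With these assumed numbers the efficient card uses well under half the energy per year, and most of the gap comes from the 22 hours a day spent idling, not from the time at full load.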