• 1 Post
  • 16 Comments
Joined 1 year ago
Cake day: June 20th, 2023




  • You may not have considered the Intel Arc GPUs. They launched in rough shape on Windows and have been slowly improving, though I'm unsure about their current state on Linux. At some point the cards were quite bad, noticeably worse than the NVidia experience, despite the libre driver stack.

    I would say the “best” depends on your goals here. I generally encourage AMD over NVidia, but the difference is quite small. If you’re already going with CachyOS, then you’re well beyond the skill level needed to navigate the tiny additional complexity of an NVidia card. Just buy whatever is the best bang for your buck for your use case.

    As for Mali, recent kernel and Mesa versions have made significant inroads. I do believe we’ll have pretty good support for Mali by the time the Qualcomm ARM laptops become available for Linux.




  • I think it’s worth thinking about this in a technical sense, not just in a political or capitalist sense: yes, car companies want self-driving cars, but self-driving cars are immensely dangerous, and there’s no evidence they will make roads safer. As such, legislation should push very hard to stop them.

    Also, the same technology used for self-driving is used for AEB. This actually makes self-driving more likely: since car companies have to pay for all that equipment anyway, they may as well try to shoehorn in self-driving. On top of this, I have no confidence that the odds of an error in the system (e.g. a dirty sensor, confused software) are any lower than the odds of the system correctly braking when it needs to.

    This means someone can get into a situation where:

    • they are in a car, on a road, with nothing of interest in front of them;
    • the software determines there is an imminent crash;
    • the car brakes hard (even at 90 mph), perhaps losing traction depending on road conditions;
    • they are hit from behind, or hit an object;
    • the driver is liable even though they never actually pressed the brakes.
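
    The base-rate worry behind this can be made concrete with a toy expected-value sketch. All the rates below are hypothetical, chosen purely for illustration, not real AEB statistics:

```python
# Illustrative only: hypothetical rates, not real AEB data.
# Compares expected phantom-brake (false-positive) events against
# genuine emergencies over a year of driving.

miles_per_year = 12_000            # assumed annual mileage
p_false_brake_per_mile = 1e-5      # hypothetical false-positive rate
p_emergency_per_mile = 1e-6        # hypothetical rate of genuinely needing AEB

expected_false = miles_per_year * p_false_brake_per_mile
expected_real = miles_per_year * p_emergency_per_mile

print(f"expected phantom-brake events/year: {expected_false:.2f}")
print(f"expected genuine emergencies/year:  {expected_real:.3f}")
# With these made-up numbers, phantom braking would outnumber real
# emergencies 10 to 1, even though both per-mile rates are tiny.
```

    The point isn’t the specific numbers; it’s that when genuine emergencies are rare, even a very low false-positive rate can dominate what drivers actually experience.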

    This is unacceptable on its face. Yes, cars are dangerous, and yes, we need to make them safer, but we should use better policies like slower speeds, safer roads, and a transition to smaller, lighter-weight cars, not this AI automation bullshit.



  • This is probably true even in the philosophy sense. Instead of a single lever, each of us gets a lever that might change something, might do nothing, or might do something unrelated. Everyone’s responsibility for the decision gets dithered, which sort of rewrites the trolley problem. How does that change the philosophy? No idea.
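
    One way to see the dithering is a toy model (my own assumption, not from the comment): suppose each pull independently has some chance of actually affecting the outcome. Then any one person's marginal impact shrinks as more people hold levers:

```python
# Toy "dithered lever" model, purely illustrative.
# Assumption: each pull independently has a p_effective chance of
# actually changing the outcome; the outcome changes if at least
# one effective pull lands.

p_effective = 0.3  # hypothetical chance a single pull does something

def p_changed(n_pullers: int) -> float:
    """Probability the outcome changes when n people pull their levers."""
    return 1 - (1 - p_effective) ** n_pullers

for n in (1, 5, 20):
    # Your marginal contribution: how much pulling yourself moves the odds.
    marginal = p_changed(n) - p_changed(n - 1)
    print(f"n={n:2d}: P(outcome changed)={p_changed(n):.3f}, "
          f"one person's marginal effect={marginal:.3f}")
```

    Under these assumptions, each individual's causal contribution to the collective outcome drops toward zero as the crowd grows, which is exactly the diffusion of responsibility the comment gestures at.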