A phone CPU challenging a top-of-the-line desktop GPU is crazy.
Desktop CPU, with a 170W TDP.
Granted, the comparison is an extremely specific synthetic benchmark, but still, I agree: utterly wild.
It doesn’t really challenge the desktop CPU in multithreaded tests, where the 170 W are actually relevant.
The test also includes AI tasks; the Apple chip seems to spend around 20% of its die area on that, while the desktop CPU has none.
It’s been like this with the Apple A-series chips for years.
Every time, I have to demonstrate to my friends how my M2 MacBook Pro blows my Ryzen 5950X desktop out of the water for my professional line of work.
I can’t quite see what x86/x64 chips are good for anymore, other than gaming, nostalgia, and spec boasting.
I have a 5950X computer and a Mac mini with some form of M2.
I render video on the M2 machine because I have that sweet perpetual Final Cut Pro license, but then I copy the output to the 5950X machine and recompress it with ffmpeg, which is roughly an order of magnitude faster than doing the compression on the M2 (rough example command below).
I have some other tasks I’ve given both computers, and when the 5950X actually gets to use all its cores, it blows the M2 out of the water.
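To give a sense of what the recompress step looks like: something along these lines, a plain software (CPU) encode that saturates all 32 threads. The file names and quality settings here are illustrative, not my exact flags.

    # software HEVC encode on the CPU; libx265 scales across all cores
    ffmpeg -i render_from_fcp.mov \
        -c:v libx265 -preset slow -crf 20 \
        -c:a copy \
        recompressed.mkv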
Is it possible you’re using your desktop’s GPU for ffmpeg encoding, and not the CPU, by chance?
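You can usually tell from the encoder named after -c:v (or in ffmpeg's console output): libx264/libx265 are software encoders that run on the CPU, while names like h264_nvenc/hevc_nvenc (NVIDIA), h264_amf (AMD), or hevc_videotoolbox (Apple) use a hardware encoder. An illustrative comparison, assuming an NVIDIA card is present for the second command:

    # CPU (software) encode
    ffmpeg -i in.mov -c:v libx264 -crf 18 -c:a copy cpu.mp4

    # GPU (NVENC hardware) encode; requires an NVIDIA card
    ffmpeg -i in.mov -c:v h264_nvenc -cq 19 -c:a copy gpu.mp4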