It’s the other way around: an Apple Silicon Mac can run an Intel binary through Rosetta (I think there are almost no exceptions at this point). It’s Intel Macs that can’t run ARM-specific binaries.
I thought a few days ago that my “new” laptop (M2 Pro MBP) is now almost 2 years old. The damn thing still feels new.
I really dislike Apple but the Apple Silicon processors are so worth it to me. The performance-battery life combination is ridiculously good.
Also because, speaking as someone who has studied multiple languages, German is hard and English is Easy with a capital E.
No genders for nouns (German has three), no declensions, no conjugations other than “add an s for third person singular”, somewhat permissive grammar…
It has its quirks, and pronunciation is the biggest one, but nowhere near German (or Russian!) declensions, Japanese kanji, etc.
Out of the wannabe-Esperanto languages, English is in my opinion the easiest one, so I’m thankful it’s become the technical lingua franca.
It’s UE in Spanish, from Unión Europea. (The letters aren’t doubled because it’s a single Union; there’s no plural like in “States”.)
Sometimes people in Spain do use the English acronyms for both the EU and the USA, but I don’t think I’ve seen it often. Both UE and EEUU are more common from what I’ve seen, and people rarely say these out loud anyway; it’s almost exclusively a written-language problem.
I’m talking about running them on the GPU, which favours the GPU even when the comparison is between an AMD Epyc and a mediocre GPU.
If you want to run a large version of DeepSeek R1 locally, with many quantized models being over 50 GB, I think the cheapest Nvidia GPU that fits the bill is an A100, which you might find used for around $6K.
For well under that price you can get a whole Mac Studio with those 192 GB the first poster in this thread mentioned.
I’m not saying this is for everyone; it’s certainly not for me. But I don’t think we can dismiss that there is a real niche where Apple has a genuine value proposition.
My old flatmate has a PhD in NLP and used to work in research, and he’d have gotten soooo much use out of >100 GB of RAM accessible to the GPU.
If it’s for AI, loading huge models is something you can do with Macs but not easily any other way.
I’m not saying many people have a use case for them at all, but if you want to run 60 GB models locally, a whole 192 GB Mac Studio costs less than the Nvidia GPU alone that you’d need to run them.
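To make the arithmetic behind that claim concrete, here’s a rough sketch of the back-of-envelope math; the function name and the 1.2× overhead factor are my own assumptions, not an official formula:

```python
# Back-of-envelope estimate of the memory a quantized model needs:
# weights take (parameters * bits_per_weight / 8) bytes, plus a fudge
# factor for KV cache, activations, and runtime overhead.

def model_memory_gb(params_billions: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Approximate GB needed to run a model of the given size."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# e.g. a hypothetical 70B-parameter model at 4-bit quantization:
print(round(model_memory_gb(70, 4), 1))  # ~42 GB: over any consumer GPU's VRAM,
                                         # comfortably inside 192 GB unified memory
```

The point being: once the weights alone pass ~50 GB, you’re out of consumer-GPU territory entirely, and the comparison becomes “datacenter GPU vs. Mac with big unified memory”.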
So the lack of Apple-branded AI slop is slowing down sales for iPhones but not for Macs?
Edit for clarity: I’m aware Sequoia “has” Apple Intelligence, but in a borderline featureless state, so it’s as good (or as bad) as not having anything.
Over the past 5 years, I’ve installed Ubuntu about 30 times on different computers. Not once has an install on an SSD taken me more than an hour; it typically takes me 30 minutes or less, except for rare occasions where I’ve messed something up.