Apple M1
This week Apple announced the first set of products using its new M1 chip.
The new MacBook Pro 13”, MacBook Air, and Mac Mini have replaced their Intel chipsets with the new Apple M1 silicon. While the event itself contained some concrete information, a lot of the numbers presented were relative (3x, 5x, etc.) and muddled in marketing fluff.
That makes a direct comparison between current Intel silicon and Apple’s difficult. And maybe, just maybe, that’s a good thing.
Benchmarks
AnandTech has published a deep dive analysis of what we might expect from the M1 based on the A14 chip currently in the iPhone 12 and 2020 iPad Air. This report is excellent. It’s detailed, deeply technical, and provides reasonable comparisons.
These show that the M1 could very well be the fastest chip out there in its class (as determined by power consumption, product positioning, and more).
It’s also essentially meaningless for most users.
Computers (in any form) are more than just the speeds and feeds of one specific piece of hardware inside them. What truly matters is not only the actual performance but the perceived performance of the device by its user.
This is where things get complicated.
I say that based on experience and the fact that Apple’s marketing machine—one of the best and best-funded such machines—failed to draw clear and simple guidelines for customers selecting a new device.
The GHz is Dead
For a very, very long time, computers have been judged on one thing: speed.
Most directly, this translated to the clock speed of the primary hardware element: the CPU (central processing unit). When I started with computers (uh oh, going to seem real 👴🏼 right now), a very fast home computer clocked in at 4 MHz. The fastest computer in the world ran at 125 MHz.
I currently have significantly more computing power & capabilities running at that same 125 MHz on my wrist, inside a year-old Fitbit with a week-long battery life.
Moore’s law has carried us a long way indeed.
For decades, the main claim of performance has been the clock speed of the CPU. This provided the market with what it clamoured for: one number to compare.
This product with a 2 GHz CPU is better than this one with a 1.5 GHz CPU. The implication is that it’s 33% faster. It probably wasn’t, but that didn’t really matter…marketing and the psychology of product positioning are fascinating.
But over the past few years, this system has started to break down.
Clock speeds have remained remarkably stable for the past decade. Depending on the intended use of the chip—mobile chips are typically designed to run slower to reduce power consumption—speeds usually range between 1.5 and 3 GHz.
Beyond clock speed, the density of these chips (more processing cores) has been increasing significantly. Advancements in GPUs (graphics processing units) and other subsystems have kept the inexorable march towards “faster” going.
But that’s a lot harder to boil down to one simple number.
Well Rounded
The M1 is a very well-rounded chip designed for today’s computing problems.
Modern CPUs and GPUs are designed around “cores”. A core is essentially a logical processing unit. This mimics what we used to achieve only by building systems with multiple CPUs in them. As manufacturing technology has improved, we have been able to add more density to similarly sized chips. This density comes in the form of additional cores, allowing the processor industry to continue innovating (as discussed above).
The M1 contains 4 high-performance cores (codenamed Firestorm) and 4 high-efficiency cores (codenamed Icestorm). The system determines which of these 8 computing cores is best suited to handle a given task.
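As a minimal sketch of how that steering works for developers (assuming Grand Central Dispatch, with hypothetical queue labels), quality-of-service hints tell the scheduler which kind of core a task deserves:

```swift
import Dispatch

// High-QoS work is eligible for the high-performance (Firestorm) cores;
// background-QoS work is typically kept on the efficiency (Icestorm) cores.
let interactive = DispatchQueue(label: "ui.render", qos: .userInteractive)
let housekeeping = DispatchQueue(label: "index.photos", qos: .background)

let group = DispatchGroup()
interactive.async(group: group) {
    print("Latency-sensitive work")   // the scheduler favours the P cores
}
housekeeping.async(group: group) {
    print("Deferrable work")          // the scheduler favours the E cores
}
group.wait()  // keep the script alive until both blocks finish
```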
There is additional dedicated hardware for audio and video tasks. Graphics processing (2D and 3D) gets an additional 8 custom cores. Finally, machine learning tasks (mainly running already-trained models) have 16 custom hardware cores of their own: the Neural Engine.
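Here’s a similarly minimal Core ML sketch (the model file name is hypothetical) of how software opts into that dedicated hardware; requesting all compute units lets the framework dispatch work to the Neural Engine:

```swift
import Foundation
import CoreML

// Allow every available compute unit: CPU, GPU, and (on Apple silicon)
// the 16-core Neural Engine.
let config = MLModelConfiguration()
config.computeUnits = .all

// Load a compiled model (hypothetical file) with that configuration;
// Core ML decides, layer by layer, where the work actually runs.
let modelURL = URL(fileURLWithPath: "ImageClassifier.mlmodelc")
let model = try MLModel(contentsOf: modelURL, configuration: config)
print(model.modelDescription)
```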
Contrast this with Intel’s or AMD’s offerings. They have multiple cores in their chips, but those cores are all similar. There’s no optimization beyond “in use/idle”. For speciality tasks, the chips rely on other systems for performance boosts.
This not only takes time to get the data to those subsystems but it creates a wide variance in available performance. The overall system performance depends on other components beyond just the primary chip.
…and I say chip because “CPU” is no longer sufficient.
What Apple delivers with the M1 is truly a system on a chip, or SoC. It isn’t the first: SoCs are commonly used in all manner of electronics, and larger computing devices like laptops, desktops, and servers have been moving this way for a while now.
To properly compare the M1 to the internals of other computing devices, you really need to combine the CPU, GPU, RAM, I/O controllers, and other custom processors.
The M1 does this literally in a “package on package” (PoP) design. Think LEGO for chipsets. They take the components they need in order to create the system they want.
This is why I believe the entire M1 lineup is the same chip with all the same features enabled. When Apple decides to differentiate its hardware, we’ll see the M2 with a completely different configuration aimed either at the mid-range lineup (iMacs and MacBook Pro 16”) or the pro lineup (iMac Pro and Mac Pro).
The modular, LEGO-style approach allows a common computing platform that will reduce costs and let the team use other hardware to differentiate the offer to customers.
Back To Benchmarks
Which brings us back to benchmarks.
The idea behind commonly used computing benchmarks is to provide a more relatable comparison point. Sadly, these benchmarks have fallen into a trap too.
Some of these benchmarks are based on “real world” scenarios. Things like video rendering (typically very taxing on a system), file transfers, and office suite usage are common foundations for these tests.
There are also synthetic tests that are designed to sum up varying types of tasks into one more relatable number. The challenge here is that this number only has meaning compared to other systems running the same test and it’s not necessarily a reflection of the performance a user would experience.
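To make that concrete, here’s a toy “benchmark” sketch: the score it prints is meaningless on its own and only gains meaning when the exact same loop runs on another machine.

```swift
import Foundation

// A deliberately artificial workload: the score is just iterations per
// second of this exact loop, comparable only across machines running it.
let iterations = 10_000_000
let start = Date()
var accumulator = 0.0
for i in 1...iterations {
    accumulator += sin(Double(i))
}
let elapsed = Date().timeIntervalSince(start)
print("Toy score: \(Int(Double(iterations) / elapsed)) loops/sec")
print("(checksum: \(accumulator))")  // stops the compiler optimizing the loop away
```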
Worse, both of these types of benchmarks can be gamed. While they try to solve a real problem, they only add to it.
We need a better comparison tool.
Marketing Fail
I would love to claim that I’ve solved this problem but I haven’t. I’m ok with that because no one has.
Apple’s November 2020 event proves it. Their claims were all centered on relative performance gains, with nothing concrete that could be used as a comparison.
This comes to light in the differences between the three new products announced. They all share the same internal chipset—the shiny new Apple M1—but have different performance claims.
I’m confident that the chip inside each of these is the same physical hardware, configured in the same way. I think Apple is smart enough to stay away from the confusing chipset schemes other manufacturers have in their lineups.
Because Apple controls the entire ecosystem of hardware and software, the use of the M1 can be differentiated in other ways.
Each of these systems is going to deliver similar performance overall but will shine in specific scenarios and use cases. I truly believe it’s as simple as this:
- The MacBook Air with M1 is the everyday system. It’s light, has killer battery life, and is inexpensive (for Apple levels of inexpensive 😉). It won’t shine in any particular scenario, but most users will never notice.
- The MacBook Pro 13” with M1 aims to get the most performance you can from the M1 in a portable form factor. Its active cooling (a/k/a a fan) and bigger battery will let it drive the chip harder for longer.
- The Mac Mini is the “ultimate expression” of what the M1 is capable of. With the larger “thermal envelope” (a/k/a the biggest fan) and constant power, the Mac Mini will let the M1 run all of its cores almost constantly.
What appeared to be the biggest challenge is actually the easiest: deciding between the MacBook Air with M1 and the MacBook Pro 13” with M1. Again, this one is actually simple. Pick the MacBook Air unless you need longer battery life, a brighter screen, and a bit more performance at the cost of cash and weight.
The Mac Mini is actually the more challenging decision. The power in the M1 means that for most workloads, this Mac Mini should outperform not only the previous Mac Minis but also a good chunk of the iMac lineup.
If you’re willing to pair your own monitor and speakers with it, this is a potential steal given the pricing and the M1’s ability to run iOS and iPadOS apps on macOS Big Sur.
But Will It Work?
Even with all of the challenges positioning the M1 in the market and trying to appease the technical community, we haven’t yet discussed the real challenge for the M1.
Will today’s software work well on it?
CPUs run instructions in a specific format. For years and years and years, the standard for that format on laptops/desktops/servers has been the x86 instruction set.
Intel created this standard and continues to use it to this day. AMD licenses it in order to have a route to market for their processors.
While ARM-based designs—which include the M1—have been gaining popularity in the mobile space for the past 10 years, they have faltered in other areas.
Technically, Windows 10 can run on ARM, but there are a host of issues with it, mainly around software compatibility.
If the software you’re trying to run isn’t built for the CPU instruction set of your computer, it won’t run without help. That help typically comes in the form of a translation layer.
This is essentially one CPU architecture pretending to be another. Unlike virtualization—where the guest code still runs on a CPU of the same type—this translation carries a massive performance cost.
This translation layer is in place in macOS Big Sur under the name “Rosetta 2” and Apple claims that “some of the most graphically demanding apps perform better under Rosetta 2”.
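If you’re curious whether a given process is actually being translated, Apple documents a sysctl flag for exactly that; a minimal sketch of the check:

```swift
import Darwin

// Apple exposes "sysctl.proc_translated": it reads 1 when the current
// process is x86_64 code being translated by Rosetta 2, and 0 when native.
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    guard sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == 0 else {
        return false  // the key doesn't exist, e.g. on an Intel Mac
    }
    return translated == 1
}

print(isRunningUnderRosetta() ? "Translated by Rosetta 2" : "Running natively")
```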
How is that possible? It all comes back to performance.
If the M1 is 3x more performant than a last-generation Intel system, that provides a lot of headroom to sacrifice in order to gain compatibility!
More importantly, Apple has been laying the groundwork for this type of transition for years. Their APIs—programming interfaces for technology in your system—will automatically take advantage of the new M1 hardware.
Their build tools—Xcode—already build ARM binaries on Intel Macs for the A-series and S-series chips in other Apple devices.
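For most developers the work is mechanical; a sketch of the architecture check Swift provides, with Xcode packaging both slices into a single universal binary:

```swift
// Swift's conditional compilation selects code per architecture; Xcode
// builds both slices into one "universal" binary that runs on either Mac.
#if arch(arm64)
print("Running the Apple silicon (arm64) slice")
#elseif arch(x86_64)
print("Running the Intel (x86_64) slice")
#endif
```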
There are pundits casting suspicion that this transition will be like the disastrous one from PowerPC to Intel 15 years ago. It won’t be.
This transition should be seamless, with the worst-case scenario for performance being a neutral state between generations. The best-case scenario is that fully native apps that leverage key macOS APIs will see a massive boost.
Set For Success
Anyone who primarily uses Apple software—iMovie, GarageBand, Pages, Keynote, Numbers, Final Cut Pro X, Logic Pro X, etc.—will undoubtedly see a huge performance boost right out of the gate.
Apps that are macOS native and not cross-compiled will also see big gains in the short term. It will take a little while for some of the third-party cross-platform development tools to catch up with the new architecture and its potential optimizations, but—again—we’ll be waiting on those for new performance gains…not to make up performance we’ve lost.
Not to mention the fact that you’ll open up a new aspect of your computing through iOS and iPadOS apps. macOS gaming alone is going to seismically change overnight with this new architecture.
Will Reality Crash In?
All of this is speculative. The benchmarks and technical speculation may or may not be accurate. Even if the benchmarks are accurate—and I suspect they are—until we see these systems in the real world, we won’t know for sure what the impact to the user experience is.
It’s my belief that this will be a substantial leap forward for performance.
The hardware is amazing. The design choices the Apple Silicon team has made are reasonable, and the performance of the iPhone and iPad has consistently blown away all competitors in actual use…if not in benchmarks too.
Combine this with the tight end-to-end integration in the Apple ecosystem: macOS is optimized for the M1, the hardware is designed for common and popular use cases, and the apps are compiled with tools designed to provide top-end performance.
Keep in mind that Apple has pulled all of this off with a significant reduction in power consumption. The increased battery life of the two MacBook offerings is proof enough of that.
To get up to a 43% increase in battery life (14 hrs to 20 hrs of video playback on the MacBook Pro 13”) alongside a claimed 2.8x jump in CPU performance is nothing short of mind-blowing.
I can’t wait to get my hands on one of these machines. I’ll either be laughing as I enjoy massive performance increases and capabilities or googling recipes for humble pie.
Results Rolling In…
As reviewers test out the new M1 systems, the reviews are starting to roll in. The initial performance testing (via benchmarks) is looking very, very good.
Now that the M1 is in the wild, AnandTech has a deeper dive based on their hands-on testing. The results are spectacular.
John Gruber has an excellent perspective on his top-tier blog, Daring Fireball. Read that one, then check out the video reviews from Marques Brownlee, Dave Lee, and Rene Ritchie, and finally the side-by-side comparison from MacRumors.