CPU Showdown: Apple Silicon M1 vs Intel i9 - A Deep Dive into the Future of Processing Power 🚀💻

Demystifying the Inner Workings of a CPU: The Epic Battle Between Apple Silicon M1 and Intel i9 Explained 🤔
Image by author via Dalle 2

Discover the Technical Marvels and Limitations of Modern CPUs and Find Out Which One Reigns Supreme in This Ultimate Showdown of Processing Power 💻


The central processing unit, or CPU, is like the engine in your car or the brain in your skull: a very fancy calculator that runs the applications on your computer. When you write software in a language like JavaScript or Python, you’re actually writing a set of instructions that will eventually be executed as machine code by the CPU, a carefully crafted piece of metal and silicon containing billions of tiny transistors, on-off switches that represent ones and zeros.

In the heart of a computer, the Central Processing Unit (CPU) orchestrates a symphony of transistors, the fundamental building blocks of digital circuits. These transistors, when grouped together, form the logic gates that are the maestros of mathematical computation. Take the AND gate, for instance. It’s like a strict gatekeeper who only opens the door when both of its binary visitors carry a ‘true’ pass. If even one of them fails to present this pass, the gate remains firmly shut, outputting a ‘false’.
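To make that concrete, here is a tiny sketch of an AND gate in JavaScript. It is a toy model of the truth-table behaviour, not how the gate is actually built in silicon:

```javascript
// Toy model of an AND gate: outputs 1 only when both inputs are 1.
const AND = (a, b) => a & b;

// Print the full truth table for the gate.
for (const a of [0, 1]) {
  for (const b of [0, 1]) {
    console.log(`AND(${a}, ${b}) = ${AND(a, b)}`);
  }
}
```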

But the magic doesn’t stop there. By weaving together a tapestry of these basic logic gates, the CPU can tackle even the most intricate computational conundrums. It’s a testament to the power of simplicity, where a handful of fundamental elements gives rise to a world of complexity. Modern chips contain billions of transistors, which can be flipped on and off billions of times per second. The state of the CPU is synchronised by an oscillator known as the clock generator.
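To see how a handful of gates gives rise to arithmetic, here is a hedged sketch of a half adder, the smallest circuit that adds two one-bit numbers: an XOR gate produces the sum bit and an AND gate produces the carry. The `halfAdder` function name is mine, purely for illustration:

```javascript
// Basic gates modelled as bit functions.
const AND = (a, b) => a & b;
const XOR = (a, b) => a ^ b;

// A half adder: adds two one-bit numbers.
// sum   = a XOR b  (1 when exactly one input is 1)
// carry = a AND b  (1 only when both inputs are 1)
function halfAdder(a, b) {
  return { sum: XOR(a, b), carry: AND(a, b) };
}

console.log(halfAdder(1, 1)); // { sum: 0, carry: 1 } -> binary 10, i.e. 2
console.log(halfAdder(1, 0)); // { sum: 1, carry: 0 } -> 1
```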

 


Photo by Erik Mclean on Unsplash

 

In general, the more times the clock can pulse per second, the faster the CPU can compute; clock speed is normally measured in gigahertz. Gamers will sometimes overclock their CPUs to gain more performance at the cost of higher temperatures and a shorter life expectancy. Now, in order to run applications, the CPU interacts with the system memory, or RAM, in a series of four steps known as the machine cycle, or instruction cycle. ✅

Our journey begins with the first step: the fetch phase.


Picture a software programme as a treasure map, with each instruction being a clue, all stored within the vast expanse of Random Access Memory (RAM). The Central Processing Unit (CPU), our intrepid explorer, is equipped with special tools known as registers. These registers act like temporary storage lockers, holding the memory address that the CPU wants to explore next.

In this initial phase, the CPU, like a seasoned treasure hunter, fetches the next clue from the map, stored at the address currently held in its programme counter. This clue is then safely stored in the instruction register, ready to guide the CPU on its next adventure. This is the essence of the fetch phase, the first step in the intricate dance of the instruction cycle, setting the stage for the computational magic that follows.

 

The programme counter starts at zero, and that address is copied to the memory address register. Then the control unit sends out a signal to copy the data from that address into the instruction register, at which point the CPU needs to figure out what to do with this instruction in the decoding phase. The control unit parses the actual bits of the instruction. Most importantly, the opcode encodes the operation, like add or subtract, and the operand is the address in memory on which to perform that operation.
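Here is a rough sketch of the fetch and decode steps in JavaScript, using an invented 8-bit instruction format (upper four bits for the opcode, lower four for the operand address). It is not any real instruction set, just an illustration of the register dance described above:

```javascript
// Toy RAM: the word at address 0 is an instruction, the word at address 5 is data.
// Invented encoding: upper 4 bits = opcode, lower 4 bits = operand address.
const RAM = [0x15, 0x00, 0x00, 0x00, 0x00, 42];

let programCounter = 0;        // address of the next instruction to fetch
let memoryAddressRegister = 0; // the address currently being read
let instructionRegister = 0;   // the raw instruction word fetched from RAM

// Fetch: copy the program counter into the memory address register,
// then load the word at that address into the instruction register.
memoryAddressRegister = programCounter;
instructionRegister = RAM[memoryAddressRegister];
programCounter += 1;

// Decode: split the raw bits into an opcode and an operand address.
const opcode = instructionRegister >> 4;   // 1, meaning "add" in this toy encoding
const operand = instructionRegister & 0xF; // 5, the address where the data lives

console.log({ opcode, operand, data: RAM[operand] }); // { opcode: 1, operand: 5, data: 42 }
```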

The grand finale of this computational ballet is the execute stage. Here, the CPU, like a skilled conductor, takes the decoded information and transforms it into a symphony of electrical signals. These signals are then dispatched to the relevant sections of the CPU, each playing their part in this intricate performance.

 

One of the key performers is the Arithmetic Logic Unit, or ALU. This is the virtuoso of the CPU, capable of performing complex mathematical compositions on the data. Once the ALU has completed its performance, the results are stored back in the RAM, altering the state of the programme like a dynamic, ever-evolving score. This entire cycle, from the initial fetch to the final execution, is repeated at a breathtaking pace, billions of times every second. Modern chips, like a well-rehearsed orchestra, employ multiple CPU cores to perform multiple computations in harmony, all in parallel.
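Pulling the three phases together, here is a hedged, self-contained sketch of the whole machine cycle in JavaScript, again with an invented opcode set (0 = HALT, 1 = ADD, 2 = SUB) rather than any real CPU’s instructions; the tiny `alu` function stands in for the Arithmetic Logic Unit:

```javascript
// Invented encoding: upper 4 bits = opcode, lower 4 bits = operand address.
// Opcodes: 0 = HALT, 1 = ADD operand into the accumulator, 2 = SUB operand from it.
const RAM = [0x15, 0x16, 0x26, 0x00, 0x00, 40, 2, 0];

// A tiny ALU: given an opcode and two values, it produces the arithmetic result.
function alu(opcode, a, b) {
  switch (opcode) {
    case 1: return a + b; // ADD
    case 2: return a - b; // SUB
    default: throw new Error(`Unknown opcode ${opcode}`);
  }
}

let programCounter = 0;
let accumulator = 0;

while (true) {
  // Fetch the next instruction and advance the program counter.
  const instructionRegister = RAM[programCounter];
  programCounter += 1;

  // Decode it into an opcode and an operand address.
  const opcode = instructionRegister >> 4;
  const operand = instructionRegister & 0xF;
  if (opcode === 0) break; // HALT ends the programme

  // Execute: the ALU computes, and the result updates the programme's state.
  accumulator = alu(opcode, accumulator, RAM[operand]);
}

console.log(accumulator); // 0 + 40 + 2 - 2 = 40
```

Real CPUs do exactly this, only with far richer instruction sets, pipelines, and caches, billions of times every second.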

This is the essence of how a CPU operates behind the scenes. However, if you’re a developer, there’s a whole universe of knowledge about processor architectures waiting to be explored. It’s like understanding the nuances of each instrument in an orchestra, essential for creating a harmonious symphony of code [1][2][3][4].

 

Resources:
[1] https://en.wikipedia.org/wiki/Instruction_cycle
[2] https://www.codecademy.com/article/the-instruction-cycle
[3] http://codingatschool.weebly.com/the-fetch-execute-cycle.html
[4] https://www.sciencedirect.com/topics/computer-science/instruction-fetch
[5] https://adacomputerscience.org/concepts/arch_fe_cycle

 



Now let’s break down some more advanced concepts: what Apple Silicon actually is, how to run performance benchmarks on your own machine, how processor architectures differ, and how those differences affect our productivity as developers. Let’s dig into why the new Apple Silicon machines have been kicking up a storm and changing the industry, software development included.

 

Spoiler alert: the M1 machines have been beating the Intel machines in pretty much every build test I’ve thrown at them except a couple. But it’s not all roses, so let’s talk about some downsides later, too.

 

So why is it so darn fast?

The first point I want to discuss is the physical difference between the new Apple Silicon approach and the old Intel and AMD designs. This is what Apple Silicon and your refrigerator at home have in common.

All right, imagine for a minute that you want to make a turkey and cheese sandwich. You go over to the refrigerator, and in one place you have the turkey breast, the cheese, the mayonnaise, and the mustard. All those ingredients are right there in one place. You don’t have to run around the house to gather them, and you don’t have to drive to lots of stores to pick them up, which saves you lots of time and energy.

This kind of efficiency can be found in the new Apple Silicon chips, which aren’t just new processors. Apple Silicon is a collection of many chips housed inside one silicon package, a design known as a system on a chip, or SoC. It’s essentially an entire computer on one chip: the main CPU, the GPU, the I/O controller, and the machine learning engine are all colocated. So when the task is to make an electronic sandwich, so to speak, that is, to do some work involving all of these different components, a system on a chip is going to be a lot more efficient in terms of energy usage, drawing only a tiny bit of power, while at the same time being faster than a typical machine that keeps all the components separate.

Intel-based machines have a CPU that’s a single chip; the memory is located somewhere else on the motherboard, and the I/O is somewhere else again. The individual components might even be more powerful than the ones currently available in the latest Apple Silicon machines, but that comes at a cost, since these powerful components are like supermarkets that each carry different sandwich ingredients.

As a result, when you want a sandwich on an Intel machine, you’ll have to drive all over town because one store will have the turkey, another store will have the cheese, and yet another one will have the mayonnaise.

Imagine each of these stores in a bustling marketplace, renowned for its exceptional produce and the finest ingredients. However, just like these stores, every component within a computer, especially the CPU, requires a significant amount of energy to operate. This energy demand can come at the cost of overall efficiency.

While each component excels in its function, the collective power consumption can be substantial. It’s a delicate balance between performance and power efficiency, akin to running a series of high-end gourmet kitchens—each producing exquisite dishes but also consuming a great deal of energy. The challenge lies in optimizing this energy use without compromising the quality of the output. And since all you really want is to make a simple sandwich, you’re wasting a ton of time and energy picking up the ingredients from all the different stores.

 

Of course, some might say the drawback with the system-on-chip design is that, at least for now, with the current selection of Apple Macs, you won’t be able to upgrade or change any of the components. You get what’s on the menu, and that’s it, but that’s not news to most people who are familiar with the Apple ecosystem. ✅

 


Photo by Samule Sun on Unsplash

Once you accept that trade-off, you might even find that the benefits of a more efficient design, with all the components on one chip, outweigh the cons while still delivering better performance than the alternative.


All right, that’s enough talk about food, so how does this all affect real-world development workflows❓

So I’ve been doing a bunch of developer-focused tests on the latest Apple Silicon machines, as well as comparing them to other machines like Intel Macs and Windows PCs. And in general, the new design has been showing really great promise for my own workflows as a developer.

Now, there are lots of technology stacks that developers use, of course, and I’ve been running builds in a few of them, but let me start by sharing some of the results I’ve seen with Node.js and JavaScript tests. After that, I’ll also discuss the tech stacks that see the biggest gains on the new machines and the stacks that lag behind the most right now.

I started off by trying out some existing JavaScript tests, first in the browser and then in Node. The browser test consisted of running Speedometer 2.0, a browser benchmark that measures the responsiveness of web applications. It uses demo web apps to simulate user actions, such as adding to-do items. You visit the benchmark in your browser of choice (I tried Chrome and Safari for this) and execute the automated test, which runs through a collection of applications built with some of the more popular UI frameworks, like Angular, React, Ember, even vanilla JavaScript and jQuery, and a whole bunch more.

When it was finished, I found that the M1 completed significantly more iterations. Safari had the best results, even going off the scale, and Chrome did pretty well too. I also ran some JavaScript benchmarks in a Node environment, using a fairly CPU-intensive algorithm called fannkuch-redux that was implemented in JavaScript for The Benchmarks Game, a website that collects algorithm implementations and compares how they run in different languages.
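If you’d like to run a quick CPU-bound micro-benchmark of your own in Node, a minimal timing harness looks something like this; the naive Fibonacci function is just a stand-in workload, not the fannkuch-redux code from The Benchmarks Game:

```javascript
// Minimal Node.js timing harness using the built-in perf_hooks timer.
const { performance } = require('perf_hooks');

// Deliberately CPU-hungry stand-in workload (exponential-time Fibonacci).
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

const runs = 5;
for (let i = 1; i <= runs; i++) {
  const start = performance.now();
  fib(32);
  const elapsed = performance.now() - start;
  console.log(`run ${i}: ${elapsed.toFixed(1)} ms on ${process.arch}`);
}
```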

 

While my 16-inch MacBook Pro with the Intel Core i9 processor did beat the MacBook Air with the M1 chip, it really didn’t do so by a lot.

When you consider the price differences between the machines and the fact that the M1 stayed cool throughout the test and that the battery hardly took a hit on the M1, you might be wondering whether the extra few seconds saved while running this benchmark on the Intel i9 are really worth the money.

 

So running benchmarks is often very telling, but it doesn’t necessarily line up with real-world scenarios. That’s why I also like to conduct my own tests, whether using my own projects or other open-source projects that are out there.

I ran a build of the official NativeScript plugin repository, which is a project based on Nx workspaces. If you’re not familiar with Nx, it’s tooling that helps you scale large projects across JavaScript and other tech stacks.

In my test, a build that took about three minutes on each machine differed by only tens of seconds, with the M1 MacBook Air beating the Intel MacBook Pro two out of three times.
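If you want to reproduce this kind of comparison on your own repository, one simple approach is to time the build command from a small Node script; the `npm run build` command below is a placeholder for whatever your project actually builds with:

```javascript
// Time an arbitrary build command and report the wall-clock duration.
const { execSync } = require('child_process');
const { performance } = require('perf_hooks');

// Placeholder: substitute your own project's build command here.
const buildCommand = 'npm run build';

const start = performance.now();
execSync(buildCommand, { stdio: 'inherit' }); // stream the build output to the console
const seconds = (performance.now() - start) / 1000;

console.log(`Build finished in ${seconds.toFixed(1)} s on ${process.arch}`);
```

Running the same script on each machine gives you a like-for-like wall-clock comparison for your own codebase rather than a synthetic benchmark.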


Now let’s talk about which developer stacks benefit the most from the new Apple silicon chips at this time. 👊

For JavaScript developers, the benefits are already visible. But if you are building mobile apps for iOS or compiling C++ code, that’s where you’ll see a 40% to 50% improvement in build times. I ran a few Xcode and Swift builds, ran some C++ algorithms, and built OpenCV and WebKit.

In all those tests, the M1 came out on top❗️

 

So what dev stacks have benefited the least❓

First, the baseline: in my own testing, any builds that involve running natively built software or building with native tooling (native meaning compiled for the Apple Silicon architecture) have absolutely destroyed Intel in speed and battery performance. ✅

Even when running software via Apple’s Rosetta 2, the translation layer that allows x64 and x86 programs built for Intel and AMD chips to run on the new Apple hardware, some packages still ran better than they did on Intel, which is just amazing. But there are workflows that aren’t ready yet, in my opinion.
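A quick way to check whether your Node process (or anything it launches) is running natively on Apple Silicon or being translated by Rosetta 2 is to look at `process.arch` and the macOS `sysctl.proc_translated` flag; this is a sketch that assumes you’re on macOS:

```javascript
// Detect whether the current Node process is running natively on Apple Silicon
// or under Rosetta 2 translation (macOS only).
const { execSync } = require('child_process');

console.log(`Reported architecture: ${process.arch}`); // 'arm64' when native, 'x64' under Rosetta

try {
  // sysctl.proc_translated is 1 for a Rosetta-translated process, 0 for a native one.
  const translated = execSync('sysctl -n sysctl.proc_translated').toString().trim();
  console.log(translated === '1' ? 'Running under Rosetta 2' : 'Running natively');
} catch {
  // The key does not exist on Intel Macs (or on non-macOS systems).
  console.log('Rosetta translation flag not available on this machine');
}
```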

If you are an Android developer, Android Studio and the official Android emulators work on Apple Silicon, but they currently run through Rosetta, and while Rosetta is generally pretty good at running x86-targeted code on Arm chips, it’s not enough for CPU-hungry Android workflows, and I found the results barely usable at the moment. ❌

The latest version of .NET isn’t yet fully supported on Arm. Simple console applications ran just fine for me, but web workflows like ASP.NET Core don’t work at all yet. ❌

 

If you need to develop .NET applications, I suggest using a PC running Windows. If you’re thinking of using a virtual Windows machine instead, I’ve tested this as well: Parallels is currently the only vendor that supports creating a virtual Windows environment on the M1 chip, but the Arm build of the Windows guest operating system is still quite immature and needs a lot of work. ❌

Also, Visual Studio is unfortunately not compatible with Arm at all and, in my testing, was not stable even when using Windows’ built-in translation of x64 software to run on Arm hardware.

Apple silicon - image concept

For game developers who use Unity, I’m pleased to say that it works surprisingly well on Apple Silicon via Rosetta. It’s not as performant as running natively on x86 hardware, but by the time you read this article, Unity might have shipped a version natively compatible with the M1, which I know they’re working hard to get out as soon as possible. ✅

 

Overall, Apple Silicon has really given a boost to many workflows for developers and other professionals, and eventually it will do the same for gaming. With the M1 tests, we’re seeing these improvements on just the entry-level machines. The upcoming second- and third-generation Apple Silicon machines will be even more performant, and I can’t wait to see that. 👀

I think it’s really going to help us as developers move to the next level, and I think it’s just going to lift up the entire industry. 🙌

 

Thanks for reading and Happy Easter!! 🥚🐇

Photo by Waranya Mooldee on Unsplash
Photo by author

All images are provided by the author or via Unsplash, Wikimedia & Dalle 2 ✅

Apple | CPU | Intel | Chips | Apple silicon | MacBook PRO

 

Apple Silicon M1 vs Intel i9 - by elitelux.club
