Today, Pixelmator has announced that it has agreed to be acquired by Apple. From the brief posting:
Today we have some important news to share: the Pixelmator Team plans to join Apple.
We’ve been inspired by Apple since day one, crafting our products with the same razor-sharp focus on design, ease of use, and performance. And looking back, it’s crazy what a small group of dedicated people have been able to achieve over the years from all the way in Vilnius, Lithuania. Now, we’ll have the ability to reach an even wider audience and make an even bigger impact on the lives of creative people around the world.
Regarding any immediate changes, the post states:
Pixelmator has signed an agreement to be acquired by Apple, subject to regulatory approval. There will be no material changes to the Pixelmator Pro, Pixelmator for iOS, and Photomator apps at this time. Stay tuned for exciting updates to come.
My Thoughts
This could be huge in many respects. I see two likely outcomes. The first is that once the deal closes, many of Pixelmator's features could be incorporated into Apple's own Photos app. Furthermore, I could see Apple utilizing Pixelmator as a means of testing early Apple Intelligence features, particularly within the Photomator app, given that the purpose of that app is to let you edit your photos non-destructively. With this approach, Apple could test new AI features faster before incorporating them into the main Photos app.
The second outcome is a bit different. There are other companies, particularly Adobe, which have artificial intelligence photo enhancement tools already incorporated into their products. Apple likely needs something that can compete. While Apple could absolutely build something, it would take some time. It would be faster to acquire an existing product, and Pixelmator is likely that product.
I can honestly see Pixelmator and Photomator quickly becoming the new “Image Playgrounds” apps. It would undoubtedly be an undertaking to incorporate Apple’s image generation tools into Pixelmator and/or Photomator, but that would be far less of an expense than building out an entirely new app. I could then easily see Apple providing these two apps for free with basic features, while keeping the Pixelmator and/or Photomator subscriptions as the basis for more advanced photo features powered by Apple Intelligence.
Undoubtedly, it will be interesting to see how Apple incorporates the apps into their own product suite, or what they end up doing with Pixelmator in the long run.
Today Apple has unveiled the final new release related to the Mac, this time the MacBook Pro. As expected, the new MacBook Pros have the M4, M4 Pro, and the newly unveiled M4 Max.
Display and Camera
At the top of the display is the notch and within the notch is the camera. There is a new 12 Megapixel Center Stage camera. Center Stage is intended to keep you and everyone else around you in frame as much as possible. This camera also supports Desk View, so you can display what is happening on your physical desktop while in a FaceTime call.
The display on the MacBook Pro is a Liquid Retina XDR display. It has always come with a glossy finish, but that now changes. There is now a Nano Texture option. Much like the other Nano Texture displays, this is designed to reduce glare in bright light situations. This will cost an extra $150, but if you are frequently in areas with bright light, it might be worth looking at.
M4, M4 Pro, and M4 Max
The MacBook Pros are powered by Apple Silicon and can be configured with three different processors, the M4, the M4 Pro, and the M4 Max. There are a few configuration options for each model.
M4
The M4 comes in a 10-core CPU and 10-core GPU model. This can be configured with 16GB, 24GB, or 32GB of memory. The base model comes with 512GB of storage, and this can be configured with either 1TB or 2TB of storage. The maximum memory bandwidth for the M4 is 120 gigabytes per second.
According to Apple, the MacBook Pro with M4 delivers:
- Up to 7x faster image processing in Affinity Photo when compared to the 13‑inch MacBook Pro with Core i7, and up to 1.8x faster when compared to the 13-inch MacBook Pro with M1.
- Up to 10.9x faster 3D rendering in Blender when compared to the 13‑inch MacBook Pro with Core i7, and up to 3.4x faster when compared to the 13‑inch MacBook Pro with M1.
- Up to 9.8x faster scene edit detection in Adobe Premiere Pro when compared to the 13‑inch MacBook Pro with Core i7, and up to 1.7x faster when compared to the 13‑inch MacBook Pro with M1.
M4 Pro
The M4 Pro comes in two variants. The first is a 12-core CPU with a 16-core GPU, and the second is a 14-core CPU with a 20-core GPU. Both models come with 24GB of unified memory and can be configured with 48GB. The M4 Pro models come with 512GB of storage and can be configured with 1TB, 2TB, or 4TB of storage. The maximum memory bandwidth for the M4 Pro is 273 gigabytes per second.
According to Apple, the MacBook Pro with M4 Pro delivers:
- Up to 4x faster scene rendering performance with Maxon Redshift when compared to the 16-inch MacBook Pro with Core i9, and up to 3x faster when compared to the 16-inch MacBook Pro with M1 Pro.
- Up to 5x faster simulation of dynamical systems in MathWorks MATLAB when compared to the 16-inch MacBook Pro with Core i9, and up to 2.2x faster when compared to the 16-inch MacBook Pro with M1 Pro.
- Up to 23.8x faster basecalling for DNA sequencing in Oxford Nanopore MinKNOW when compared to the 16-inch MacBook Pro with Core i9, and up to 1.8x faster when compared to the 16-inch MacBook Pro with M1 Pro.
M4 Max
The M4 Max is a brand-new chip, unveiled today. Much like the M4 Pro, the M4 Max comes in two variants. The first is a 14-core CPU with a 32-core GPU. This can only be configured with 36GB of unified memory. This memory has a maximum bandwidth of 410 gigabytes per second, which is nearly 3.5x the memory bandwidth of the M4 and 1.5x that of the M4 Pro.
The second variant is a 16-core CPU with a 40-core GPU. This starts at 48GB of unified memory, but can be configured with 96GB or 128GB. The memory bandwidth in this model is 546 gigabytes per second, which is 4.5x that of the M4, 2x that of the M4 Pro, and 1.33x that of the 14-core M4 Max.
Both M4 Max variants come with 1TB of storage, but can be configured for 2TB, 4TB, or even 8TB of storage, depending on needs.
And the MacBook Pro with M4 Max enables:
- Up to 7.8x faster scene rendering performance with Maxon Redshift when compared to the 16-inch MacBook Pro with Intel Core i9, and up to 3.5x faster when compared to the 16-inch MacBook Pro with M1 Max.
- Up to 4.6x faster build performance when compiling code in Xcode when compared to the 16‑inch MacBook Pro with Intel Core i9, and up to 2.2x faster when compared to the 16‑inch MacBook Pro with M1 Max.
- Up to 30.8x faster video processing performance in Topaz Video AI when compared to the 16‑inch MacBook Pro with Intel Core i9, and up to 1.6x faster when compared to the 16-inch MacBook Pro with M1 Max.
Connectivity and Ports
Similar to the M4 Mac mini, the ports differ between the M4 and the M4 Pro, not in number, but in the USB-C ports themselves. The M4 comes with three Thunderbolt 4 ports, delivering up to 40 gigabits per second, while the M4 Pro and M4 Max models come equipped with three Thunderbolt 5 ports, delivering up to 120 gigabits per second. This is the same setup as the Mac mini with M4 and M4 Pro.
The number of displays supported varies depending on the chip. The M4 and M4 Pro can support up to two external displays at up to 6K at 60Hz over Thunderbolt, or one display at up to 6K at 60Hz over Thunderbolt plus one display at up to 4K at 144Hz over HDMI. The HDMI port can alternatively drive one display at 8K resolution at 60Hz, or one display at 4K at 240Hz.
The M4 Max can drive up to four external displays: three at up to 6K at 60Hz over Thunderbolt, plus one at up to 4K at 144Hz over HDMI. Alternatively, you can have two external displays at up to 6K resolution at 60Hz over Thunderbolt, along with one external display at up to 8K resolution at 60Hz, or one display at up to 4K at 240Hz, over the HDMI port.
Along with the Thunderbolt ports, you also get an SDXC card reader, a dedicated HDMI port, and a 3.5mm headphone jack.
The Wi-Fi in all models is Wi-Fi 6E and support for Bluetooth 5.3 is also included.
Pricing and Availability
The M4 MacBook Pro comes in the same two sizes of 14-inch and 16-inch. The pricing differs for each model and chip. For the 14-inch you can get an M4 model starting at $1599. The M4 Pro model starts at $1999, and the M4 Max starts at $3199.
The 16-inch starts at $2499 for the M4 Pro with 14-Core CPU, 20-Core GPU, 24GB of unified memory, and 512GB of storage. The 16-inch M4 Max version starts at $3499 for a 14-core CPU with a 32-Core GPU, 36GB of unified memory, and 1TB of storage.
All of the M4-line of MacBook Pros are available to order today and will be available starting November 8th.
Closing Thoughts
The MacBook Pros continue to be the workhorses of Apple's laptop line. Many users do a ton of work on these devices, and with the M4 processors they should be able to accomplish even more than before. The new M4 Max adds even more horsepower to the laptops and is a welcome upgrade. The lineup is a bit strange, but for today's modern Apple, it makes sense because it is not too dissimilar to the iPhone Pro line of devices. If you have an Intel-based MacBook Pro, now would be a great time to upgrade.
Today Apple has unveiled a new Mac mini that has the M4. This is not just a spec bump, but it includes a couple of new features, chief amongst them is a new form factor.
Form Factor
The Mac mini was introduced in 2005 as a smaller version of the Mac, hence the name. It was 6.5 inches wide, 6.5 inches deep, and 2 inches tall. This remained the form factor until the unibody redesign, which eventually eliminated the internal disc drive. The unibody Mac mini was physically larger at 7.7 inches wide, 7.7 inches deep, and only 1.4 inches tall. All Mac minis introduced since then have had the exact same physical footprint, including the M1 and M2 Mac minis. This all changes with the M4.
In 2022 Apple introduced a whole new machine, the Mac Studio. This took some of the design elements from the Mac mini but expanded them. The M1 and M2 Mac Studios were 7.7 inches wide and 7.7 inches deep, but significantly taller at 3.7 inches.
The M4 Mac mini takes some design cues from the Apple TV. The M4 Mac mini is 5 inches wide, has a 5 inch depth, and is only 2 inches tall. This means that it is smaller than the previous Mac mini, but still a bit larger than an Apple TV. Before we dive into the ports, let us look at the processor.
M4 and M4 Pro
The Mac mini has come with a variety of processors over the years. The previous Mac mini was available in both M2 and M2 Pro variants, and the same continues for the M4 Mac mini, with the M4 and M4 Pro. The M4 consists of a 10-core CPU, with 4 performance cores and 6 efficiency cores, and a 10-core GPU. According to Apple, the M4 Mac mini is significantly faster than the M1 Mac mini. Specifically:
When compared to the Mac mini with M1, Mac mini with M4:
- Performs spreadsheet calculations up to 1.7x faster in Microsoft Excel.
- Transcribes with on-device AI speech-to-text up to 2x faster in MacWhisper.
- Merges panoramic images up to 4.9x faster in Adobe Lightroom Classic.
The M4 Pro has two configurations: a 12-core CPU with 8 performance cores and 4 efficiency cores, paired with a 16-core GPU; or a 14-core CPU with 10 performance cores and 4 efficiency cores, paired with a 20-core GPU. From Apple’s press release:
When compared to the Mac mini with M2 Pro, Mac mini with M4 Pro:
- Applies up to 1.8x more audio effect plugins in a Logic Pro project.
- Renders motion graphics to RAM up to 2x faster in Motion.
- Completes 3D renders up to 2.9x faster in Blender.
All M4 and M4 Pro models have a 16-core Neural Engine for machine learning and Apple Intelligence tasks.
Ports
The M4 Mac mini has a total of 7 ports: an Ethernet jack, an HDMI port, and 5 USB-C ports. Of these, two are on the front, much like the Mac Studio, and three are on the back. The two on the front are USB-C ports with USB 3 speeds of up to 10 gigabits per second. The three ports on the back are Thunderbolt/USB 4 ports. On the M4 models, these are Thunderbolt 4 ports, which can deliver data at up to 40 gigabits per second. On the M4 Pro models, they are Thunderbolt 5 ports, which can deliver a whopping 120 gigabits per second; the USB 4 portion can deliver up to 40 gigabits per second.
The difference in Thunderbolt ports does mean that there is a difference in DisplayPort compatibility. The Thunderbolt 4 ports support DisplayPort 1.4 while the Thunderbolt 5 ports support DisplayPort 2.1. The HDMI port on either model can support one display with 8K resolution at 60Hz, or 4K resolution at 240Hz.
By default the Ethernet port is a gigabit port, but you can opt for a 10-gigabit port for $100 more. The Mac mini has long had a headphone jack, and it is still present on all models of the M4 Mac mini.
Pricing and Availability
The M4 Mac mini starts at $599 for 16GB of unified memory and 256GB of storage. You can configure the M4 models with 24GB or 32GB of memory, and up to 2TB of storage.
The M4 Pro Mac mini starts at $1399 for a 12-core CPU and 16-core GPU, 24GB of unified memory, 512GB of storage. You can configure the M4 Pro Mac mini with 48GB or 64GB of unified memory, and 1TB, 2TB, 4TB, or 8TB of storage.
The M4 Mac mini is available for pre-order today and will be available for delivery and in store on Friday November 8th.
Closing Thoughts
While other Macs received redesigns specifically to take advantage of the lower power usage of Apple Silicon, the Mac mini was not one of them. It has finally received its redesign. The smaller form factor takes cues from both the Mac Studio and the Apple TV. The M4 and M4 Pro should be great upgrades for anyone who has an Intel Mac, and even if you are upgrading from an M1, it will still be a solid update.
Today Apple unveiled a new iMac, one powered by the M4. While it might seem like a small update from the M3, there are a number of improvements, including the M4, ports, and colors, just to name a few items.
M4
The 24-inch iMac is powered by the M4 chip. This comes in two processor configurations, an 8-core CPU with 8-Core GPU model, and a 10-Core CPU with 10-Core GPU model. According to Apple, the M4 iMac is up to 1.7x faster for daily productivity and up to 2.1x faster for graphics editing and gaming; at least when you compare it to the M1 iMac.
Display
The size of the iMac has not changed, but there is a new option: a nano-texture display. This is similar to the nano-texture displays on the iPad Pro and the Apple Studio Display. The option costs $200 more and is only available on the 10-core CPU models.
Beyond this, there is a new 12-megapixel Center Stage camera. This should provide even better quality, and it also supports Desk View, the ability to show your desk while in a video call; the previous iMac did not offer this functionality.
Colors
The 24-inch iMac has come in a variety of colors. The available colors have been updated. There are seven options:
Silver
Blue
Purple
Pink
Orange
Yellow
Green
Unlike the previous model, all of the colors are available with any processor choice. There is a difference depending on the model, though, and that is with the ports. To go with the new colors, there are new color-matched accessories, including the Magic Keyboard with Touch ID, Magic Trackpad, and Magic Mouse. These all now have USB-C cables instead of the previous Lightning. Beyond the port change, the design and port locations have not changed at all.
Ports and Connectivity
Depending on the processor, you will get either two or four ports. The 8-core CPU model has two Thunderbolt/USB 4 ports, while the 10-core CPU models have four Thunderbolt 4 ports. All of the iMacs have Wi-Fi 6E and Bluetooth 5.3. The four Thunderbolt 4 ports mean that you can have up to two 6K external displays, which is an improvement over the M3 model, which only supported one external 6K monitor.
Pricing
There are actually four different configuration options available. These starting configuration options are:
8-Core CPU with 8-Core GPU, 16GB of unified memory, and 256GB of storage - $1299
10-Core CPU with 10-core GPU, 16 GB of unified memory, and 256GB of storage - $1499
10-Core CPU with 10-core GPU, 16 GB of unified memory, and 512GB of storage - $1699
10-Core CPU with 10-core GPU, 24 GB of unified memory, and 256GB of storage - $1899
You can configure the 10-Core models with up to 32GB of unified memory and up to 2TB of storage. The 10-Core models also come with Ethernet, whereas the 8-core model is Wi-Fi only, but you can add Ethernet to that model for $30.
Closing Thoughts
You can pre-order the new iMac today and they will be available starting on Friday, November 8th. If you are looking for a new iMac, now is the time to upgrade, particularly if you have an Intel machine, or want to upgrade from an M1 iMac.
Technology is consistently entertaining new crazes. Some examples include blockchain, subscription juicers, netbooks, 3D televisions, hyperloop, and "hoverboards", just to name a handful of examples. All of these were going to be "the next big thing", but none of these have panned out as the inventors intended.
There has been a term bandied about that people think may be the end-all for computers. That term is "Artificial Intelligence", or "AI". The term "AI" can mean a variety of different things, depending on whom you ask. However, when most people use the term AI, what they are expecting is a fully conscious and sentient entity that can think, act, and rationalize as a human would. This is called "Artificial General Intelligence". Today's technology is nowhere close to making this a reality, and it is not yet known whether artificial intelligence will ever live up to these ultimate expectations.
The term "Artificial Intelligence" can garner a number of thoughts, and depending on who you ask, these can range from intrigue, worry, elation, or even skepticism. Humans have long wanted to create a machine that can think like a human, and this has been depicted in media for a long time. Frankenstein is an example where a machine is made into a human and then is able to come to life . Another great example is Rosie from the 1960s cartoon The Jetsons. In case you are not aware, The Jetsons is a fictional animated tv show that depicts the far future where there are flying cars, and one of the characters, Rosie, is an robot that can perform many household tasks, like cleaning and cooking.
We, as a society, have come a long way toward creating modern "artificial intelligence", but we are still nowhere near creating a robot that approaches being human. Today's artificial intelligence falls into a number of categories in terms of its capabilities, but it is still a long way off from the idealistic depiction that many expect artificial intelligence to be.
Artificial intelligence comes in a variety of forms, including automated cleaning robots, automated driving, text generation, image generation, and even code completion. Many companies are attempting to create mainstream artificial intelligence, but none has fully succeeded, as far as we know.
Apple is one of those companies, but they are taking a different approach with their service called Apple Intelligence. Apple Intelligence is Apple's take on artificial intelligence. Apple Intelligence differs in a number of ways from standard "artificial intelligence". This includes the use of on-device models, private cloud computing, and personal context. Before we delve into each of those, let us look at artificial intelligence, including a history.
Artificial Intelligence
Artificial intelligence is not a new concept. You may think of it as a modern thing, but in fact it harkens back to World War II and Alan Turing. Turing is known for creating a machine that could crack the German Enigma codes. In 1950, Turing released a paper that became the basis of what is known as the "Turing Test": a test of whether a machine can exhibit intelligent behavior indistinguishable from that of a human.
There have been a number of enhancements to artificial intelligence in recent years, and many of the concepts that have been used for a while have come into more common usage. Before we dive into some aspects of artificial intelligence, let us look at how humans learn.
How Human Brains Operate
In order to be able to attempt to recreate the human brain in a robot, we first need to understand how a human brain works. While we have progressed significantly in this, we are still extremely far from fully understanding how a human brain functions, let alone even attempting to control one.
Even though we do not know everything about the brain, there is quite a bit of information that we do know. Human brains are great at spotting patterns, and the way that this is done is by taking in large amounts of data, parsing that data, and then identifying a pattern. A great example of this is when people look at clouds. Clouds come in a variety of shapes and sizes, and many people attempt to find recognizable objects within the clouds. Someone is able to accomplish this by taking their existing knowledge, looking at the cloud, determining if there is a pattern, and if there is one, identifying the object.
When a human brain is attempting to identify an object, what it is doing is going through all of the objects (animals, plants, people, shapes, etc.) that they are aware of, quickly filtering them, and seeing if there is a match.
The human brain is a giant set of chemical and electrical synapses that connect to produce consciousness. The brain is commonly called a neural network due to the network of neural pathways. According to researchers, humans are able to update their knowledge. In a technical sense, what is happening is that the weights of the synaptic connections that are the basis of our neural network brain are updated. As we go through life, our previous experiences will shape our approach to things. Beyond this, it can also affect how we feel about things in a given moment, again, based upon our previous experiences.
This approach is similar to how artificial intelligence operates. Let us look at that next.
How Artificial Intelligence Works
The current way that artificial intelligence works is by letting you specify an input, or prompt, and having the model create an output. The output can be text, images, speech, or even just a decision. Modern artificial intelligence is built on what is called a neural network.
A Neural Network is a machine learning algorithm that is designed to make a decision. The manner in which this is done is by processing data through various nodes. Nodes generally belong to a single layer, and for each neural network, there are at least two layers: an input layer and an output layer.
Each node within a neural network is composed of three things: weights, a threshold (also called a bias), and an output. Data goes into the node, the weights and threshold are applied, and an output is created. A node's ability to actually come to a determination is based on training, or what a human might call knowledge.
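To make this concrete, here is a minimal sketch of a single node in Python. The inputs, weights, bias, and step activation are made-up values chosen purely for illustration, not taken from any real model.

```python
# A minimal, illustrative sketch of a single neural-network node.
# The weights, bias, and step activation below are made-up example values.

def node_output(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias (the node's threshold).
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Simple step activation: the node "fires" only if the total crosses zero.
    return 1 if total > 0 else 0

# Three inputs feeding one node.
print(node_output([0.5, 0.2, 0.9], weights=[0.4, -0.3, 0.8], bias=-0.5))  # -> 1
```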
Training
Humans have a variety of ways of learning something that can include family, friends, media, books, TV shows, audio, and just exploring. Neural Networks cannot be trained this way. Instead, neural networks need to be given a ton of data in order to be able to learn.
Each node within a neural network produces an output and sends it to another node, which produces its own output, and the process continues until a result is determined. Each time a result is produced, a positive or negative correlation is recorded. Much like a human, the more positive connections that are made, the better, and eventually the positive correlation between an answer and the result will push out the negative connections. Once it has made enough positive correlations (gotten the right answer enough times), the network is considered trained.
There are actually two types of training: Supervised Learning and Reinforcement Learning.
Supervised learning is the idea of feeding labeled data to a training model so that it can learn the rules and provide the proper output. Typically, this is done using one of two methods: classification or regression. Classification is pretty simple to understand. Let us say that you have 1000 pictures: 500 of dogs and 500 of cats. You provide the training model with each photo individually and tell it the type of pet in each image.
Reinforcement learning is similar, but with a key difference. In this scenario, let us say you have the same 1000 pictures, again 500 dogs and 500 cats. Instead of telling the model what is what, you let it determine the similarities between the items, and as it continues to get them right, that reinforces what it already knows.
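As a rough illustration of the supervised classification example above, here is a tiny perceptron-style training loop in Python. The "pictures" are stand-ins (two made-up numeric features each), but the flow is the same: show the model labeled examples and nudge the weights whenever it gets one wrong.

```python
# Toy supervised learning: classify "cat" (0) vs. "dog" (1) from two made-up
# numeric features per picture (stand-ins for real image data).
training_data = [
    ([0.2, 0.9], 0),  # cat-like example
    ([0.8, 0.1], 1),  # dog-like example
    ([0.3, 0.8], 0),
    ([0.9, 0.2], 1),
]

weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1

def predict(features):
    total = sum(x * w for x, w in zip(features, weights)) + bias
    return 1 if total > 0 else 0

# Show the labeled examples repeatedly and correct the weights on mistakes.
for _ in range(20):
    for features, label in training_data:
        error = label - predict(features)  # 0 when right, +1/-1 when wrong
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error

print(predict([0.85, 0.15]))  # a new "picture"; expected output: 1 (dog)
```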
Inference
Inference, in the context of artificial intelligence, is the process of applying a trained model to a set of data. The best way to test a model is to provide it with brand-new data and see whether it can infer the correct result.
Inference works by taking the new data as input and applying the weights, also known as parameters, that are stored in the model.
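Continuing the toy example, inference is just the forward pass: run brand-new data through the stored parameters. The parameter values below are made up for illustration.

```python
# Inference: apply a trained model's stored parameters to brand-new data.
# The parameter values below are made up for illustration.
trained_model = {"weights": [0.05, -0.07], "bias": 0.0}

def infer(features, model):
    total = sum(x * w for x, w in zip(features, model["weights"])) + model["bias"]
    return "dog" if total > 0 else "cat"

# Data the model has never seen before.
print(infer([0.7, 0.3], trained_model))  # -> "dog"
```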
Inference is not free; it has a cost, particularly when it comes to energy usage. This is where optimizations can be useful. As an example, Apple utilizes the Neural Engine as much as possible for its on-device inference, because the Neural Engine is optimized to perform inference tasks while minimizing the amount of energy needed.
Artificial Intelligence Use Cases
No tool is inherently good or inherently bad; the tool is just a tool. It is how it is used that determines whether a given use is positive or negative. Artificial intelligence is no different, and it has a wide range of possible use cases. Current artificial intelligence is capable of helping detect cancer, synthesize new drugs, detect brain signals in amputees, and much more. These are all health-related, because that is where many artificial intelligence models are thriving at the moment, but that is not all that is possible.
Not all artificial intelligence usage is positive. There are many who will want to make what are called "deep fakes". A deep fake is a way of taking someone and either placing them in a situation where they never were, or even making them say something that they never said. This is not new, not by a long shot; since the inception of photography there has been manipulation. It is designed to influence someone into thinking a particular way, and as you might guess, it can have detrimental effects because it distorts reality. While there are those who want to use this for nefarious purposes, there can be some positive use cases for this type of technology.
Back in 2013, country music artist Randy Travis suffered a stroke and, as a result, now has aphasia, which, according to the Mayo Clinic, is "a disorder that affects how you communicate." This effectively left him unable to perform. However, in May of 2024, a brand-new Randy Travis song was released, created with the help of two proprietary AI models. This was done with full permission from Randy Travis himself, so there is no issue there.
Let us look at a couple of different approaches used, including Large Language Models and Image Generators.
Large Language Models
Large language models, or LLMs, are those that are able to generate language that a human would understand. To quote IBM:
"In a nutshell, LLMs are designed to understand and generate text like a human, in addition to other forms of content, based on the vast amount of data used to train them. They have the ability to infer from context, generate coherent and contextually relevant responses, translate to languages other than English, summarize text, answer questions (general conversation and FAQs), and even assist in creative writing or code generation tasks." - Source: IBM.
LLMs can be used for generating, rewriting, or even changing the tone of text. The reason this is possible is that most languages have pretty rigid rules, and it is not a complex task to calculate the probability of the next word in a sentence.
The way that an LLM is trained is by consuming vast amounts of text. It recognizes patterns in this data and can then generate text based upon what it has learned.
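As a toy illustration of the "probability of the next word" idea, the Python sketch below counts which word follows which in a handful of made-up sentences and then predicts the most likely next word. Real LLMs use neural networks trained on enormous corpora rather than simple counts, but the underlying intuition is similar.

```python
from collections import Counter, defaultdict

# A tiny, made-up "training corpus". Real models train on vastly more text.
corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the cat sat quietly",
    "the dog sat on the rug",
]

# Count how often each word follows each other word (a simple bigram model).
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def most_likely_next(word):
    # Pick the word most often observed to come next.
    return next_word_counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> "cat"
print(most_likely_next("cat"))  # -> "sat"
```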
Image Generation
One of the uses of modern artificial intelligence is the ability to create images. Similar to LLMs, there are image generation models that have been trained on a massive number of images. This data has been used to train the models which are used for the actual image generation. Depending on the model, you may be able to generate various types of images, ranging from cartoons to completely realistic ones.
Image generation models commonly use a technique called Generative Adversarial Networks, or GANs. A GAN works by using two different algorithms, the generator and the discriminator, which work in tandem. The generator outputs a bunch of random pixels as an image and sends it over to the discriminator. The discriminator, which has knowledge of millions of pictures of the type of thing you are trying to generate, provides a result that is basically a "yes" or "no". If it is a "no", the generator tries again, and again.
This back and forth is what is called an "adversarial loop" and this loop continues until the generator is able to generate something that the discriminator will say matches the intended type of image.
The training process is quite interesting. It starts with an image and purposely introduces noise into the image, again and again, repeating the process a large number of times. This noisy data becomes the basis for what the generator learns.
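The sketch below is a heavily simplified, purely illustrative version of that adversarial loop. Real GANs are neural networks trained with gradients; here the "generator" and "discriminator" are toy stand-ins (the "real" images are just numbers near 5.0) so the back-and-forth is easy to follow.

```python
import random

# A heavily simplified, illustrative take on the generator/discriminator loop.
# Real GANs are neural networks trained with gradients; here the "real images"
# are just numbers near 5.0, and the generator is a single adjustable value.
real_samples = [random.gauss(5.0, 0.2) for _ in range(1000)]
real_mean = sum(real_samples) / len(real_samples)

def discriminator(sample):
    # Says "looks real" only if the sample resembles the real data it knows.
    return abs(sample - real_mean) < 0.1

generator_value = 0.0  # the generator starts out producing nonsense
rounds = 0
while not discriminator(generator_value):
    # The generator tries a slightly different output each round...
    candidate = generator_value + random.uniform(-0.5, 0.5)
    # ...and keeps it only if it gets closer to fooling the discriminator.
    if abs(candidate - real_mean) < abs(generator_value - real_mean):
        generator_value = candidate
    rounds += 1

print(f"Fooled the discriminator after {rounds} rounds: {generator_value:.2f}")
```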
All of this is a good base for looking at what Apple has in store for its own artificial intelligence technologies, so let us look at that now.
Apple and Artificial Intelligence
You might think that Apple is late to the artificial intelligence realm, but in fact, Apple has been working with artificial intelligence for many years; it has just been called something else. Some of the areas where Apple has been using artificial intelligence have been with Photos, Siri, Messages, and even auto-correct.
Apple Intelligence
As mentioned above, Apple Intelligence is Apple's take on artificial intelligence. It differs from standard artificial intelligence in that Apple Intelligence is designed to work on YOUR information, not on general knowledge. The primary benefit of working on your data is that your data can remain private. This is done using on-device models.
On-Device Requests
A vast majority of Apple Intelligence requests will be performed on your device. There are a number of examples of this, including things like:
"Find me pictures of [someone] while in London."
"When is Mom's flight landing?"
Apple has been doing a lot of research into machine learning models that can run on-device. This has meant that the models need to maintain the same quality while being usable on devices with limited amounts of memory. Limited, of course, is relative; we are not talking 1GB of RAM, but more like 8GB.
The reason that Apple wants to do as much of the processing as possible on your device is twofold. The first is response time. By having the device handle requests, responses can be almost instantaneous, which is also beneficial for those times when you may not have connectivity. Sending every request to the cloud would introduce some delay, even with a direct connection and incredibly fast connection speeds.
The second reason is privacy. Privacy is a big part of Apple's core beliefs. When using your own device and processing the request on the device, that means that nobody else will get access to your data, not even Apple. Instead, only you will have access to your data, which is great for your own peace of mind.
Even though as much as possible will be done on your own devices, there may be instances when your device is not able to handle your request locally. Instead, it may need to be sent to the cloud. This can be needed for larger models that require additional memory or processing. If so, the request is automatically sent to Apple's Private Cloud Compute platform. Let us look at that next.
Private Cloud Compute
Nobody wants their data to get out of their control, yet it does happen from time to time. Apple takes data privacy seriously and has done a lot to help keep people's data private. This is in contrast to other artificial intelligence companies, who have no compunction about taking user data and using it to train their machine learning models.
Apple has been working on reducing the size and memory requirements of many machine learning models. They have accomplished quite a bit, but some machine learning models are still simply too large, requiring more memory than these devices have. In these instances, it may be necessary to use the cloud to handle requests.
Apple has 1.2 billion users, and while not all of the users will utilize Apple Intelligence immediately, Apple still needs to scale up Apple Intelligence to support all users who will be using it. In order to make this happen, Apple could just order as many servers as they want, plug them in, and make it all work. However, that has its own set of tradeoffs. Instead, Apple has opted to utilize their own hardware, create their own servers, and make things as seamless as possible for the end user, all while protecting user data.
Private Cloud Compute is what powers online requests for Apple Intelligence. Private Cloud Compute runs in Apple's own data centers. Private Cloud Compute is powered by a series of nodes. Each of these nodes uses Apple Silicon to process requests. These are not just standard Macs; they have been heavily customized.
Nodes
Each Private Cloud Compute node undergoes significant quality checks in order to maintain integrity. Before the node is sealed and its tamper switch activated, each component undergoes a high-resolution scan to make sure that it has not been modified. After the node has been shipped and arrives at an Apple data center, it undergoes another verification to make sure it still remains untouched. This process is handled by multiple teams and overseen by a third party who is not affiliated with Apple. Once verification has been completed, the node is deployed, and a certificate is issued for the keys embedded in the Secure Enclave. Once the certificate has been created, it can be used.
Request Routing
Protecting the node is just the first step in securing user data. In order to protect user data, Apple uses what is called "target diffusion". This is a process of making sure that a user's request cannot be sent to a specific node based on the user or its content.
Target diffusion begins with the metadata of the request. User-specific data and information about the source device are stripped out of this metadata, which is then used by the load balancers to route the request to the appropriate model. To limit what is called a "replay attack", each request also carries a single-use credential, which is used to authorize requests without tying them to a specific user.
All requests are routed through an Oblivious HTTP, or OHTTP, relay, managed by a third-party provider, which hides the device's source IP address well before it ever reaches the Private Cloud Compute node. This is similar to how Private Relay works, where the actual destination server never knows your true IP address. In order to steer a request based on source IP, both Apple's load balancer and the OHTTP relay would need to be compromised; while possible, that is unlikely.
User Requests
When a user's device makes a request, it is not sent to the entire Private Cloud Compute service as a whole; instead, pieces of the request are routed to different nodes by the load balancer. The response that is sent back to the user's device will specify the individual nodes that should be ready to handle the inference request.
When the load balancer selects which nodes to use, an auditable trail is created. This is to protect against an attack where an attacker compromises a node and manages to obtain complete control of the load balancer.
Transparency
When it comes to privacy, one could say, with confidence, that Apple does what they say they are doing. However, in order to provide some transparency and verification, Apple is allowing security researchers the ability to inspect software images. This is beyond what any other cloud company is doing.
In order to make sure there is transparency, each production build of Apple's Private Cloud Compute software will be appended to an append-only log. This will allow verification that the software being run is exactly what it claims to be. Apple is taking some additional steps as well. From Apple's post on Private Cloud Compute:
Our commitment to verifiable transparency includes:
1. Publishing the measurements of all code running on PCC in an append-only and cryptographically tamper-proof transparency log.
2. Making the log and associated binary software images publicly available for inspection and validation by privacy and security experts.
3. Publishing and maintaining an official set of tools for researchers analyzing PCC node software.
4. Rewarding important research findings through the Apple Security Bounty program.
This means that should a problem be found, Apple will be notified before it can become a real issue, can take action to remedy it, and can release new software, all in an effort to keep user data private.
Privacy
When a request is sent to Apple's Private Cloud Compute, only your device and the server can communicate. Your data is sent to the server, processed, and returned to you. After the request is complete, the memory on the server is wiped so your data cannot be retrieved. This includes wiping the cryptographic keys on the data volume. Upon reboot, these keys are regenerated and never stored. The result of this is that no data can be retrieved because the cryptographic keys are sufficiently random that they could never be regenerated.
Apple has gone to extensive lengths to make sure that nobody's data can be compromised. This includes removing remote access features for administration, high-resolution scanning of the Private Cloud Compute node before it is sealed, and making sure that requests cannot be routed to specific nodes, which may allow someone to compromise data. Beyond this, when a Private Cloud Compute node is rebooted, the cryptographic keys that run the server are completely regenerated, so any previous data is no longer readable.
For even more detail, be sure to check out Apple's blog post called "Private Cloud Compute" available at https://security.apple.com/blog/private-cloud-compute.
General World Knowledge
Apple Intelligence is designed to work on your private data, but there may be times when you need to go beyond your own data and use general world knowledge. This could be something like asking for a recipe for some ingredients you have, or it could be a historical fact, or even to confirm some existing data.
Apple Intelligence is not capable of handling these types of requests. Instead, you will be prompted to send such requests off to third parties, like OpenAI's ChatGPT. When you are prompted to use one of these, you will need to confirm that you want to send the request and that your private information (for that specific request) will be sent to the third party.
At launch, only OpenAI's ChatGPT will be available. However, there will be more third-party options coming in the future. This type of arrangement is a good escape valve should you need to get some information that is not within your own private data. Now that we have covered what Private Cloud Compute is, let us look at what it will take to run Apple Intelligence.
Minimum Requirements
Apple Intelligence does require a minimum set of requirements in order to be used. Apple Intelligence will work on the following devices:
iPhone 16 Pro/Pro Max (A18 Pro)
iPhone 16/16 Plus (A18)
iPhone 15 Pro/Pro Max (A17 Pro)
iPad mini (A17 Pro)
iPad Pro (M1 and later)
iPad Air (M1 and later)
MacBook Air (M1 and later)
MacBook Pro (M1 and later)
Mac mini (M1 and later)
Mac Studio (M1 Max and later)
Mac Pro (M2 Ultra and later)
There are a couple of reasons why these are the devices that can be used. The first is that they require a neural engine. For the Mac, this was not present until 2020 when the first Macs with Apple Silicon were released. For the iPhone, the first Neural Engine appeared with the A11 Bionic chip on the iPhone 8, 8 Plus, and iPhone X. All iPhones since have included a Neural Engine, but that is just one requirement.
The second requirement is the amount of memory. The minimum amount of memory to run the on-device models is 8 gigabytes. The iPhone 15 Pro and iPhone 15 Pro Max are the first iPhones to come with 8GB of memory. All M1 Macs have had at least 8GB of memory.
Now, this is the minimum amount of memory. Not all features will work with only 8GB of memory. One example is a new feature for developers within Apple's Xcode app. With Xcode 16, developers will have the option of using Apple's Predictive Code Completion Model. When you install Xcode 16, there is an option that allows you to download the Predictive Code completion model, but only if your Mac has 16GB of memory or more. To illustrate this, if you have a Mac mini with 8GB of memory, you will get the following installation screen.
Similarly, let us say you have a MacBook Pro with 32GB of unified memory, you will get this installation screen.
As you can see, the Predictive Code Completion checkbox is not even an option on the Mac mini with 8GB of memory. And predictive code completion covers a fairly limited domain: Swift, while a large programming language, is limited in scope, and even that model does not work with 8GB.
It would not be presumptuous to think that this may be the case for various Apple Intelligence models going forward. Now that we have covered the minimum requirements, let us look at how to enable Apple Intelligence.
Enabling Apple Intelligence
As outlined above, Apple Intelligence is available for compatible devices running iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1. However, Apple Intelligence is not automatically enabled; you will need to enable it yourself. Apple Intelligence is activated on a per-Apple Account basis, and this only needs to be done once. Once activated, it will need to be enabled on each device. To activate Apple Intelligence, perform these steps:
Open Settings on iOS, or iPadOS, or System Settings on macOS Sequoia.
Scroll down to "Apple Intelligence".
Tap, or click, on "Apple Intelligence" to bring up the settings.
Tap, or click, on "Join Apple Intelligence Waitlist". A popup will appear.
Tap on the "Join Apple Intelligence Waitlist" button to confirm you want to join the waitlist.
Once you do this, you will join the Apple Intelligence waitlist. It may take some time before you are able to access the features. Once your Apple Account has had Apple Intelligence activated on it, you will then get a notification on your device indicating that Apple Intelligence is ready.
At this point, you can click on the "Turn On Apple Intelligence" button, and a popup will appear that will allow you to enable the features. Once you have enabled Apple Intelligence on your device, you will be able to use the features.
Closing Thoughts on Apple Intelligence
Many artificial intelligence tools require sending your private data to a server in the cloud in order to perform a particular task. Doing this has the potential to not only leak your private data, but also allow your private data to be used to train additional artificial intelligence models. This is antithetical to Apple's core values, so Apple has taken a different approach with their own artificial intelligence, which they are calling Apple Intelligence.
Apple Intelligence is designed to work on your private data and maintain that privacy. The way that this is accomplished is through a service called Private Cloud Compute. Private Cloud Compute is a set of servers in Apple's own datacenter that are built on Apple Silicon, utilizing features like the Secure Enclave to maintain the integrity of the server. Beyond this, each time that a request has been completed, the previous keys are wiped, and the server is completely reset and reinitialized with no data being retained between reboots.
Apple Intelligence is designed to help you accomplish tasks that you need, like summarizing text, generating new emojis, creating images, and more.
Apple Intelligence will be a beta feature starting in late 2024, with some overall features not coming until 2025, and it will be English only at first. Furthermore, these features will not be available in the European Union, at least not at first.
Apple Intelligence will have some pretty stiff requirements, so it will not work on all devices. You will need an Apple Silicon Mac, an iPad with an M1 or newer or an A17 Pro, or, for the iPhone, a device with an A17 Pro, A18, or A18 Pro. That means the iPhone 15 Pro/Pro Max, iPhone 16/16 Plus, or iPhone 16 Pro/Pro Max can take advantage of the Apple Intelligence features.
This is merely an introduction to Apple Intelligence. There will be more articles in this series, so be sure to check those out.
Today's modern internet is a leap forward from the start of the modern smartphone era of 2007 and 2008. Before then, particularly in the 1990s and early 2000s, if you were going somewhere that you did not know very well, you would need to print out a paper map using a site like MapQuest or Google Maps.
When Apple introduced the iPhone, one of the few apps on the phone was a mapping app, specifically Google Maps. If you were following Apple back in 2012, it is possible that you remember Apple's announcement that they would be replacing Google Maps with their own Apple Maps. If you do not remember the announcement, it is quite likely that you remember its launch; it was lackluster, to say the least. Even though Apple Maps did not start off on the best foot, having any map app was better than no map app.
The one thing that Apple Maps has not had is a web version. You had to use your iPhone, iPad, or Mac in order to use Apple Maps. That has now changed: today, Apple announced a beta of the web version of Apple Maps. Apple's announcement states:
Today, Apple Maps on the web is available in public beta, allowing users around the world to access Maps directly from their browser.
Now, users can get driving and walking directions; find great places and useful information including photos, hours, ratings, and reviews; take actions like ordering food directly from the Maps place card; and browse curated Guides to discover places to eat, shop, and explore in cities around the world. Additional features, including Look Around, will be available in the coming months.
All developers, including those using MapKit JS, can also link out to Maps on the web, so their users can get driving directions, see detailed place information, and more.
Maps on the web is currently available in English, and is compatible with Safari and Chrome on Mac and iPad, as well as Chrome and Edge on Windows PCs. Support for additional languages, browsers, and platforms will be expanded over time.
It is not clear why it took Apple 12 years to provide a web-based version of their Maps. Not having it for a few years makes complete sense, but then again it has taken 14 years to get an Apple Calculator app on the iPad, so I guess this is two years ahead of schedule.
Today Apple held its World Wide Developer Conference, or WWDC, keynote. The WWDC keynote is a way for Apple to highlight the features that will be coming to its platforms over the next year. This year's keynote is a big one, with features highlighted for all of Apple's platforms. I will highlight what I think are the biggest announcements.
Vision Pro
The Apple Vision Pro is Apple's latest platform, and while it has only been around for four months, there are some good updates coming. First, for those who use a Mac with their Apple Vision Pro, you will be able to use a virtual display that is like having two 4K monitors side by side. This is great for getting even more screen real estate.
For Photos, you will be able to use machine learning to make any photo a Spatial Photo, so you can view it with Apple Vision Pro.
The Apple Vision Pro is also coming to new regions both later this month and also in July.
Home Screen
Our devices are super personalized, and we often use our home screens to reflect that. Now, you can do even more customization. You can place icons wherever you would like on your Home Screen, which works well for wallpaper images that would normally be covered by icons. Just re-arrange the icons and put them wherever you want.
Locking Apps
Some apps can contain sensitive data, like photos or a journal, and there may be times when you want to protect that data. You can now lock apps behind Face ID, meaning you can require Face ID to open an app.
Hiding Apps
There may be apps that you do not want others to see when you hand them your phone. You can now hide apps and these will be put into a "Hidden Apps" folder in the App Library, and they will be locked behind Face ID, just like other apps.
Icon Tint
There are those of us who like complete color coordination between our Home Screen wallpaper and our icons, but icons come in a variety of colors. You could work around this by creating a shortcut that opens the app, but that is tedious. There is now a new option for customizing the tint color of icons. The tint applies to all icons, and you can select any color you want.
Control Center
The Control Center now allows you to organize, resize, and rearrange controls as you see fit. You can even have groups of controls that make sense for you. Developers will be able to add their own Control Center items as well.
Contacts
A couple of years ago Apple added a new way to limit which photos an app could see. Now, you can do the same with Contacts. Instead of allowing an app full access, you can choose which contacts an app will have access to. This is a great addition for privacy.
Passwords
There is now a new Passwords app that works across Mac, iPhone, iPad, Apple Vision Pro, and even Windows. It will let you store your passwords, one-time codes, passkeys, Wi-Fi passwords, shared passwords, and Sign in with Apple logins.
Messages
Messages gets some new features like custom Tapbacks, so you can now add emoji inline or as a tapback. Beyond this, you can use the new Genmoji feature to generate your own custom emoji-looking items to get the right images for the situation.
Sometimes all you need to do is add some emphasis to text. This can be done with the new "text effects", which allow you to bold, italicize, underline, or strike through text. Beyond this, you can add effects including:
Big
Small
Shake
Nod
Explode
Ripple
Bloom
Jitter
Here is a photo of some of them. When you select one, you will see a preview of what the effect will look like.
TV
The Apple TV app is getting a new feature called "Insights". This is where you will be able to see what else an actor or actress has been in, as well as identify a song; you can then add the song to an Apple Music playlist. If you use your iPhone as a remote while watching TV with others, you will also be able to get Insights there.
Another feature is Enhance Dialogue for both built-in TV speakers and connected speakers; this can be super helpful so you can hear what is being said. Subtitles will also get some tweaks: they will automatically come up if you mute the audio or jump back in time.
Calculator
There is now a Calculator app on the iPad, and it includes a feature called "Math Notes". Math Notes allows you to write out expressions with the Apple Pencil, and Calculator will compute the answer once you write an equals sign in the equation. If you need to add a column of numbers, this can be done as well by drawing a line under the numbers to be added.
You will also be able to add graphs and update values in real-time with variables. Math Notes are not limited to just the calculator app. You can also use them in the Notes app.
macOS
macOS is the oldest and most mature of Apple's operating systems, but it also sees some new features, like the aforementioned Passwords app. There are two iPhone-related features coming: iPhone Mirroring and iPhone notifications.
iPhone Mirroring
There may be instances when you want to see what is happening on your iPhone while you are using your Mac. This can be particularly true if your iPhone is charging in another room. Now, you will be able to actually use your iPhone while it is elsewhere, interacting with it by swiping and clicking just as if you were holding it.
When you use your iPhone via mirroring, it will remain locked, so nobody else will be able to see what you are doing.
iPhone Notifications
Much like being able to add widgets from your iPhone to your Mac, you will be able to get iPhone notifications right on your Mac. Along with this, you can interact with them, and they should even be able to open the corresponding app via iPhone Mirroring.
Mail
Mail is also getting some updates, including categorizations. This is done on device and emails will be put into one of a few categories:
Primary - Most important
Transactions - Receipts
Updates - Newsletters
Promotions - Marketing/Sales
This is a nice update as well. There is another thing coming to Mail: Writing Tools.
Writing Tools will allow you to spell check, proofread, and rewrite an email. It will not be limited to Mail; it can also be found in Keynote, Pages, Notes, and even third-party apps.
Apple Intelligence
One of the big items highlighted was artificial intelligence. Artificial intelligence will allow you to create images, rework text, and even use Siri to perform actions and find your own data. Apple could have simply integrated existing artificial intelligence tools, but they have decided to go above and beyond with a new feature called Apple Intelligence.
Apple Intelligence is an initiative that takes Artificial intelligence and expands upon it to make sure that your information stays private. This is done through a combination of on-device and cloud infrastructure. A vast majority of the data will be on device, but for tasks that require more resources, there is the cloud portion.
But not just any cloud. Apple has dubbed their solution Private Cloud Compute. Private Cloud Compute is built on Apple Silicon and uses many of the security features built into the platform. One of those features is that there is no data persistence, so your private data is only available to that server for that one request before the data is wiped from the server.
Apple Intelligence provides access to your data so you can perform actions like "Find photos of Suzy in a Pink dress" and it knows enough context to be able to find what you are looking for.
Image Generation
One of the more common uses of current artificial intelligence is to generate images. You can do this on iOS 18, iPadOS 18, and macOS Sequoia as well. You will be able to create images to send to others based upon a template and a few limited styles. Beyond this, you will be able to write out what you are looking for, and it will perform a search.
Being able to use your own data is great, but sometimes you need access to general world knowledge. Apple has a solution with that as well.
ChatGPT
Apple is partnering with OpenAI to use their GPT-4o model to allow you to ask Siri general knowledge questions. The request will be sent to ChatGPT, which will respond. If a request needs to use your personal data, you will need to confirm that you want to send the data to ChatGPT before it is sent. This way, you are always able to decide not to send the data.
Again, these features will be coming later in the year.
Closing Thoughts
All of the features outlined above should be coming over the next year. Some will be released this fall, while others will come later. There are a number of great features, like Home Screen customization and new text effects in Messages. The new Passwords app will make it easier to manage all of your passwords and related information in a single location.
Artificial Intelligence is a big topic, with a slew of features planned, including Writing Tools, Mail organization, and general knowledge via ChatGPT through Siri and throughout Apple's operating systems.
The iPad has had its ups and downs over the last 14 years. In that time there have been 37 Wi-Fi devices and 38 cellular devices across four different iPad families: iPad, iPad mini, iPad Air, and iPad Pro. The iPad has had an ever-changing set of capabilities and features, including Wi-Fi improvements, cellular connectivity enhancements, improved cameras, Touch ID, and even Face ID.
Each iPad has its own place within the lineup. The iPad is the entry-level model, while the iPad Pro is on the opposite side of the spectrum and has the latest features and technologies. The iPad Air is a more affordable model that has some of the technologies that were originally on the iPad Pro and have made their way down. This leaves the iPad mini, which, to be honest, is a conundrum because it is a mix of the iPad and the iPad Air. It is a smaller size, so it is less expensive than the iPad Air, but it has slightly better technology, so it is more expensive than the base iPad.
The current iPad lineup makes a lot more sense than it has previously. There is now a more consistent lineup with the iPad mini at 8.3 inches, the iPad at 10.9 inches, the iPad Air at 11 and 13 inches, and the iPad Pro with 11-inch and 13-inch versions.
As mentioned above, the iPad Pro has the latest and greatest technologies in it. Apple has just released two new iPad Pro models, the 5th generation 11-inch and the 7th generation 13-inch. I have purchased one, and what follows will be a bunch of details about the iPad Pro as well as my own thoughts on the device, and its accessories. But first, let us take a brief look at my personal history with the iPad.
Personal iPad History
Being a person who likes to use the latest tech, it should not come as a shock that I have been using an iPad since it was first available back in April of 2010. Unlike iPhones, I have not purchased each new model of iPad. The iPad is not primarily a productivity item for me; instead, it is used for development, playing some games, and occasionally performing light productivity tasks.
While I have not owned all of the iPad models, I have owned a few including:
Original iPad (2010)
iPad 2 (2011)
iPad 3rd generation (2012)
iPad Air 2 (2014)
12.9-inch iPad Pro 1st Generation (2015)
12.9-inch iPad Pro 2nd Generation (2017)
12.9-inch iPad Pro 3rd Generation (2018)
12.9-inch iPad Pro 5th Generation (2021)
To this list I can now add the 13-inch iPad Pro, or 7th generation iPad Pro.
As you can see, I skipped a fair number of iPads, including the 4th generation iPad, the original iPad Air, and the 4th and 6th generation iPad Pros. The reasons I skipped these varied, but it mostly came down to the update not being compelling enough to warrant an upgrade. In the case of the 4th generation iPad, it was because it was released only 7 months after the 3rd generation, although the 4th generation would have been a better device than the 3rd generation.
It should be noted that I have opted to get the largest screen, not only because I like the idea of having more screen real estate, but also because the highest end devices typically have the best technology in them and I do not mind living on the bleeding edge of technology when it makes sense. Furthermore, when I hand down my iPads to someone else, they can usually appreciate the larger screens.
For the 4th and 6th generation iPad Pros, I opted not to get these devices because they did not offer a compelling enough change to warrant purchasing. The 4th generation iPad Pro only added a LiDAR camera, so it was not enough. Similarly, while the 6th generation iPad Pro offered a bit more, namely the M2 and Apple Pencil Hover, it still was not enough. I will admit, I am glad that I skipped them because it means that I could purchase the 7th generation 13-inch iPad Pro, which has some great additions. Let us start looking at aspects of the 13-inch iPad Pro, starting with the System on a Chip, or SoC.
System on a Chip
Normally when Apple introduces a new device, they offer the next processor in the line. For example, the 1st generation iPad Pro had an A9X and the next model had an A10X Fusion. The 3rd generation iPad Pro skipped to the A12X. The 4th generation had an A12Z, which only had one additional GPU core. The 5th generation introduced the M1 to the iPad Pro line, and the 6th generation had the M2. It would have made sense for the 7th generation 13-inch iPad Pro to have the M3. However, Apple did not do that, and there is a good reason behind it. To explain, we need to take a bit of a deeper dive into manufacturing.
Manufacturing Processes
Apple uses Taiwan Semiconductor Manufacturing Company, or TSMC, to produce their latest and greatest chips. Each of these chips has its own manufacturing process. The reason that Apple, and many other companies, use TSMC is because TSMC has the most advanced manufacturing plants in the world and can produce the best chips.
As technology has progressed, the feature size used to make silicon chips has gotten smaller and smaller. It began with the 20 micron, or 20,000 nanometer, process in 1968. The 10 micron, or 10,000 nanometer, process was used in the Intel 8008 processor. The process size was reduced in 1974 to 6 microns, or 6,000 nanometers, and was used in the famous Intel 8080 processor.
The first sub-micron sizes were developed in 1987 with the 800 nanometer process. This process would not be used widely until 1993, when Intel introduced the P5 Pentium chip running at 60MHz or 66MHz. The 90nm process was developed in 2001 and was eventually used in Sony's PlayStation 2, Microsoft's Xbox 360, and AMD Athlon processors, just to name a few.
One of the turning points for manufacturing, at least in terms of Apple, was the 20nm process. Apple bought P.A. Semi in 2008 and began putting out their own chips, starting with the A4 in 2010. The A4, A5, A6, and A7 were all manufactured by Samsung. In 2014 this changed when Apple began working with TSMC. The first A-series chip that TSMC produced was the A8, and this was on a 20nm process. The A8 came out in 2014 with the introduction of the iPhone 6 and iPhone 6 Plus, and later made its way into other products like the HomePod.
The next process was 16nm (or 14nm), and this included Apple chips like the A9, A9X, and A10 Fusion. This was a notable process node given that the 1st generation iPad Pro was introduced with the A9X. The A9 had 2 billion transistors; the number of transistors on the A9X was not disclosed.
The 10nm process was short-lived and mostly used by Apple for the A10X and A11. The A10X had 3.3 billion transistors, while the A11 had 4.3 billion.
The 7nm process entered the mainstream with the A12 Bionic, which debuted in September of 2018. The 7nm process was used in a number of products, including the iPhone XS and iPhone XS Max. It was also used in the 5th generation iPad mini, iPhone XR, 3rd generation iPad Air, 8th and 9th generation iPad, and the 2nd generation Apple TV 4K. The A13, also on 7nm, was in the iPhone 11 line and the Apple Studio Display. The A12 had 6.9 billion transistors, while the A13 had 8.5 billion.
The first 5nm processor was Apple's A14 Bionic, which has been in the 10th generation iPad, 4th generation iPad Air, and the entire iPhone 12 line. The A15 Bionic also used the 5nm process, and was in the iPhone 13 line, iPhone 14 and 14 Plus, 6th generation iPad mini, 3rd generation Apple TV 4K, and the 3rd generation iPhone SE. The A14 had 11.8 billion transistors.
The 4nm process was used for the A16 Bionic, which has been in the iPhone 14 Pro and iPhone 14 Pro Max, as well as the iPhone 15 and iPhone 15 Plus. The A16 had nearly 16 billion transistors.
One thing that you may notice from all of the items listed above is that as the process size gets smaller, more transistors can be put onto a chip. More transistors on a chip means more capabilities for a device. Typically, the smaller the process size, the more power efficiency also increases. Progress has continued to the current size of 3 nanometers.
3 Nanometer
TSMC has indicated there are a number of 3 nanometer processes, including N3, N3B, N3E, N3S, N3P, and N3X. Each of these has its own benefits. N3 is the base process, which offers 25 to 30% better power efficiency than the N5 process. At the same time, performance increases 10 to 15%.
N3E uses 32% less power and offers 18% better performance. N3P should offer around 33 to 34% less power than N5 and 19 to 20% better performance. N3X will use a bit more power, but still offer a bit more performance than N5.
These comparisons are good to highlight, because the M2 was manufactured using the N5 process.
M3
The first 3nm processors that Apple introduced were the A17 Pro and M3, which used the N3B process. It turns out that the N3B process was not the right approach to 3nm manufacturing. While it would indeed work, it was more expensive, and per an EE Times article from April of 2023:
Taiwan Semiconductor Manufacturing Co. (TSMC) is straining to meet demand from top customer Apple for 3-nm chips. The company's tool and yield struggles have impeded the ramp to volume production with world-leading technology
With costs being higher and yields not being what they needed to be, the N3B process does not work well when Apple needs millions of chips. What this means is that Apple was keen to move on to the next process, which has better yields, meaning more chips can be manufactured. Apple has introduced a chip that matches this: the Apple M4.
M4
With the introduction of the 5th generation 11-inch iPad Pro and the 7th generation 13-inch iPad Pro, Apple did something different and introduced an Apple Silicon chip on the iPad before doing so on a Mac. Apple introduced the M4. According to Apple, this uses the 2nd generation 3nm manufacturing process.
The M4 has a number of improvements, including the aforementioned 25% less power consumption and 18% improved performance. According to Apple, the M4's Neural Engine is "capable of 38 trillion operations per second", which is "60x faster than Apple's first Neural Engine in the A11 Bionic chip".
Part of the chip's ability to perform that many calculations per second comes from the memory bandwidth. Since the M1, the memory bandwidth in iPads has been 100 Gigabytes per second. There is now a 20% increase to 120 Gigabytes per second with the M4.
The M4 actually comes in two variants, a 9-core variant with 3 performance cores, and a 10-core variant with 4 performance cores. For either variant there are 6 efficiency cores. The 256GB and 512GB 11-inch or 13-inch iPad Pro will have the 9-core processor, while the 1TB and 2TB models have the 10-core processor.
This is the first time that Apple has differentiated models of the iPad by having different numbers of cores depending on the model. This is not the first time Apple has done that in general, as it is quite common with Macs, in particular the MacBook Pro.
Media Engine
The M4 also has a couple of other enhancements over the M3, in particular related to the Media Engine. The first of these changes is that it now supports 8K High Efficiency Video Codec, or HEVC. HEVC is the successor to H.264, which has long been the standard compression format for video. HEVC is more commonly known as H.265 and its chief improvement over H.264 is that it can provide better quality video at the same bitrates, meaning that it can look better for the same video size.
The inclusion of 8K HEVC means that you can easily handle processing of 8K video, even though you cannot shoot 8K video on an iPad.
The second change is that there is now hardware accelerated decoding of AOMedia Video 1, or AV1. AV1 is similar to HEVC, except that it is designed for streaming over the internet. This is the first time that AV1 decoding is on an iPad. The benefit to having hardware-acceleration of AV1 decoding is that it will provide more power efficient playback, which means less wear on the battery.
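As a rough illustration, an app could check for hardware AV1 decode before requesting an AV1 stream. This is a minimal sketch assuming VideoToolbox's VTIsHardwareDecodeSupported function and the kCMVideoCodecType_AV1 constant are available on the OS version being targeted.

```swift
import VideoToolbox

// Check for hardware-accelerated AV1 decode before choosing an AV1 stream.
// On the M4 iPad Pro this should return true; older iPads would fall back
// to another codec such as HEVC.
let hardwareAV1 = VTIsHardwareDecodeSupported(kCMVideoCodecType_AV1)
print("Hardware AV1 decode supported: \(hardwareAV1)")
```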
Graphics
Media encoding is definitely an important aspect of the iPad, yet there is another important feature of the M4: the Graphics Processing Unit, or GPU. The GPU on the M4 adds some new enhancements of its own. There is a new 10-core GPU and it includes Dynamic Caching. Dynamic Caching was introduced on the M3, but this is the first time it is on an iPad. Dynamic Caching is a technique that allows the hardware to allocate the proper amount of memory for each task. This can mean that more of the graphics processing unit will be utilized when needed. Furthermore, it also means fixed amounts of memory do not need to be allocated, which would otherwise potentially lock up memory when it is not needed.
Dynamic Caching is not the only change. The M4 also supports ray tracing, specifically hardware-accelerated ray tracing. Ray tracing is a technique where light is rendered more realistically, which can result in more life-like lighting. Here is a great example of how ray tracing can improve gameplay: it is titled How Nvidia and Valve Gave Portal its Ray Tracing Makeover, and while it covers the game Portal, the same information is still applicable to the iPad.
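For developers, a minimal sketch of checking for hardware ray tracing support looks like the following; it uses Metal's standard device query, and the logging is purely illustrative.

```swift
import Metal

// Query the GPU for hardware ray tracing support. On the M4 iPad Pro the
// supportsRaytracing property should report true; on earlier iPads a game
// would fall back to rasterized lighting instead.
if let device = MTLCreateSystemDefaultDevice() {
    print("GPU: \(device.name)")
    print("Hardware ray tracing: \(device.supportsRaytracing)")
}
```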
Setup
The first thing that one must do is set up their iPad. If you have a previous iPad, you can use a direct connection between the devices to perform the setup. Alternatively, you can use an iCloud backup and restore that to your iPad. I opted to do the direct connection. When you do a direct transfer between the devices, neither device will be usable while the transfer is taking place. The estimates for how long it would take started off at 2 hours and eventually crept up to an estimate of 6 hours. Ultimately, it ended up taking approximately 3 hours to finish, which is still longer than I would have liked.
Given how much is on my iPad, it might have made more sense to do an encrypted backup and then restore that to my new iPad, but I did not do that. While writing this section I opted to look at my iPad storage and saw that I had 35 gigabytes of synchronized media. I should have removed this before beginning the transfer; it would have shaved some time off of the total transfer time.
One thing I did not try was to use a Thunderbolt cable to see if I could transfer the data using that method instead. If anybody has done this, I would be interested in knowing how well it worked. If it is not supported, I think that is something that Apple should add as an option. I could understand not allowing standard USB-C cables, as these will only transfer up to 40 gigabits per second, or a theoretical max of 5 gigabytes per second; although even this would be significantly faster than using Wi-Fi.
I have purchased the cellular version of the iPad since the 1st generation iPad Pro in 2015. Normally, I would just physically move the SIM from my old iPad to the new one. However, the cellular 13-inch iPad Pro does not have a physical SIM slot, only an eSIM. Therefore, my existing data plan for my iPad needed to be moved. The setup steps account for this and my data plan moved successfully without any issues.
Once the setup was complete, all of the apps needed to download and then I could use my new iPad Pro.
Dimensions and Weight
The 13-inch iPad is roughly the same size as the 5th generation iPad Pro. In fact, it only has 0.64% more physical area. This is accomplished by it being 1mm taller and 0.6mm wider. This physical size is needed to account for the new display size of 13-inches. One area where the dimensions have changed is in the depth.
One of the highlighted aspects of the 13-inch iPad Pro is how thin it is. According to Apple it is the "thinnest device we've ever made". Apple's "Let Loose" event video mentioned that it is thinner than the iPod nano, which was previously Apple's thinnest device. The iPod nano was 5.3mm thick. The 13-inch iPad Pro is indeed thinner at 5.1mm. This is a significant reduction from the 6.4mm of the 5th generation, which means that the 13-inch iPad Pro is 20.3% thinner, or just over 20%, than the 5th generation.
This is a huge difference. The reduction in physical size does have some implications, most notably in the weight, which has gone from 685 grams to 582 grams, a difference of 103 grams, or 15.04%. This is a noticeable difference. Another quite noticeable weight difference was going from the iPhone 14 Pro Max to the iPhone 15 Pro Max, where the devices went from stainless steel to titanium. The percentage reduction in weight for the iPad is almost twice that of the iPhone Pro change.
Next, let us turn to another visual item, and the one that iPad users see the most, the display.
Display
Almost every single product that Apple has created that has a battery and is portable has needed a display of some sort. One notable exception to this is the iPod shuffle, but that was a unique product. Each and every iPad that Apple has sold has had a display. Apple has tended to obscure the underlying technology used in each display behind marketing names. To date, Apple has employed names including:
Retina
Liquid Retina
Liquid Retina XDR display
There is now a new marketing name. The 13-inch iPad Pro has a display that Apple is calling "Ultra Retina XDR". The technology used for the display has varied over time and has included:
Liquid Crystal Display (LCD)
Light Emitting Diode (LED)
Mini-Light Emitting Diode (Mini-LED)
Different display technologies have their own benefits and approaches.
The 13-inch iPad Pro has a new display. The actual technology used for this display is not brand new, but it is new for the iPad Pro. The Ultra Retina XDR display is powered by Organic Light Emitting Diodes, more commonly known as OLED. Apple has used OLED on devices in the past, most notably on the Apple Watch and the iPhone.
OLED is better in some ways, most notably in that it uses less energy. This is accomplished by only lighting up the pixels that are needed.
In a battery-constrained device, like the Apple Watch, OLED is the only way to go because pixels that are not lit do not consume any energy. Even with the Always On display of the Apple Watch and iPhone, items are only updated infrequently, as little as once a minute, so the devices are able to maximize battery life and minimize energy usage.
When you extrapolate this technology to a larger device, you can have the same benefits. There is a limitation to OLED which does not necessarily appear in other display technologies: the organic nature of the diodes. Because the diodes in an OLED display are organic, they can degrade over time. Unfortunately, there is no way to regenerate the organic materials.
This new display is called Tandem OLED and it is worth diving into a bit.
Tandem OLED
Tandem OLED is a display technology that consists of two OLED panels that are connected with some sort of interconnect. This is similar to the way that an M2 Ultra chip is actually two M2 Max chips joined by their own interconnect. This interconnect allows sub-millisecond control over the color and luminance of each pixel, which means that colors can be controlled more fluidly for even better content viewing, particularly for video content.
The reason that there are two panels instead of a single panel is that a single panel is not capable of producing the brightness levels that Apple wants to achieve, at least not in the sizes needed for the 13-inch iPad Pro. The OLED display in the 13-inch iPad is capable of producing up to 1000-nits of brightness. High Dynamic Range (HDR) content is capable of showing up to 1600 nits of brightness.
It is not that an OLED display cannot display 1600 nits, it can. In fact the iPhone 14 Pro/Max and iPhone 15 Pro/Max can do up to 1000 nits, 1600 nits for HDR content, and up to 2000 nits while outdoors. The Apple Watch Series 9 can also do up to 2000 nits, and the Apple Watch Ultra 2 can do up to 3000 nits.
You might think that with a Tandem OLED, ProMotion might not be supported, but it is still present. It still has the same range of 10Hz to 120Hz, so if you are accustomed to using ProMotion, nothing changes.
When I first used the 13-inch iPad Pro, I did not really notice the difference in the screen. It is not that I did not believe it was an OLED screen; I did. However, the difference became quite apparent once I started working on this review on my 13-inch iPad Pro. The black background of the Notes app was noticeably darker than on my 5th generation iPad Pro.
Now that we have covered the display, let us look at some wireless connectivity.
Wireless Connectivity
All devices these days have a myriad of radios. This can be for Wi-Fi, Bluetooth, and even Ultra Wideband. The 13-inch iPad Pro has the same Wi-Fi that was on the 6th generation 12.9-inch iPad Pro: Wi-Fi 6E, also known as 802.11ax. My 5th generation 12.9-inch iPad Pro only has Wi-Fi 6, so this is a slight improvement. Have I noticed it during normal usage? No, because I do not have a Wi-Fi 6E network, so until I have one I will not see any changes.
As for cellular connectivity, the 13-inch iPad Pro has 5G, as have all iPad Pros introduced since 2021, namely the 5th and 6th generation 12.9-inch iPad Pros, so this has not changed. I would not expect the 5G connectivity to change for a couple more generations, because 5G's successor, 6G, will not likely begin deployment until the 2030s. It could be anywhere from 6 to 15 years away, but likely closer to 10 years before deployment begins.
There has been one change from my 5th generation 12.9-inch iPad Pro: the 13-inch does not support EDGE. This is not really surprising given that EDGE cellular networks were shut down in 2022, so it makes no sense to continue to support it on the iPad Pro. Strangely though, the iPhone 15 line does still support GSM/EDGE.
Ever since the iPad was introduced in 2010, Apple has offered a cellular option. The cellular options, as you might expect, cost more due to the additional hardware needed. Each of the cellular iPad models has had a physical SIM. The iPads that have been introduced since 2018 have all had the option of using either a physical SIM or an electronic SIM, known as an eSIM. This changes with the 13-inch iPad Pro.
Much like when Apple introduced the iPhone 14 line, there is no physical SIM slot on the 13-inch iPad Pro. Instead, the only option is to use an eSIM. Therefore, if you have a physical SIM in your existing iPad, it will need to be converted to an eSIM by transferring your existing account. For many, this will not be a problem, but it is something to be cognizant of when setting up your iPad Pro.
Cameras
The iPad is designed to be a versatile device, and one way that this is accomplished is by providing the iPad with a camera. The camera on the iPad is by no means the best camera Apple makes; the best camera is reserved for the iPhone Pro line. Starting with the 4th generation 12.9-inch iPad Pro, Apple added a second camera to complement the existing one. The Wide and Ultra Wide cameras were present on the 4th, 5th, and 6th generation iPad Pros. However, starting with the 13-inch iPad Pro, there is no longer an Ultra Wide camera. Instead, there is just a single camera.
The removal of the Ultra Wide lens may be disappointing for some, but Apple has indicated that the 12 megapixel camera is improved in a number of ways. This includes capturing photos and video with better color and increased detail in low light. This can be useful in many situations. There is one in particular where the improvements can help, and that is with document scanning.
Document Scanning
A task that many users perform is to scan documents into Notes or another application. Typically, when you take a picture of a form or a receipt, you will get a bunch of shadows around the edges. Now, the iPad Pro camera system will attempt to remove as many of these shadows as possible, so when you do scan, the shadows should be significantly reduced. Along with this, machine learning will be applied to make it even easier to get more consistent color.
Front Facing Cameras
When the first iPhone was released, it only had a single back camera. It was the same for the iPhone 3G and iPhone 3GS. But in 2010, Apple added a second camera, this time on the front. This was great for taking selfies, but it was also very useful for another feature, FaceTime.
FaceTime is Apple's proprietary video call software that works on both iOS and macOS. For most of its existence, the iPad has had the FaceTime camera along the top edge, near the power button. This remained the case for all iPads except the 10th generation iPad released in 2022. Now, the 5th generation 11-inch iPad Pro and 7th generation 13-inch iPad Pro have the FaceTime camera along the landscape edge. This makes a lot more sense because a significant number of users use the iPad Pro while it is in landscape orientation.
The FaceTime camera is not the only camera on the edge of the 13-inch iPad Pro. When Apple introduced the redesigned iPad Pro in 2018 with the 3rd generation 12.9-inch iPad Pro, they added Face ID. The TrueDepth camera module paints invisible dots on your face so that it can algorithmically compare what it finds to what is stored in the Secure Enclave, which stores the Face ID data. The TrueDepth camera is now separated from the FaceTime camera; it is in fact separated by a magnet that is used to charge the Apple Pencil, but more on that in a bit.
The specs of the front-facing camera have remained the same since the 5th generation 12.9-inch iPad Pro. This means that there is a 12 megapixel camera with an ƒ/2.4 aperture and 2x zoom out.
Now that we have covered all of the hardware of the 13-inch iPad Pro, let us look at some accessories, including the Apple Pencil.
Apple Pencil Pro
When the 1st generation 12.9-inch iPad Pro was announced, so was a new accessory: a stylus designed to work with the iPad Pro. The 1st generation Apple Pencil paired and charged via the Lightning port on the original iPad Pro. This remained the same for the 2nd generation iPad Pro.
The 3rd generation iPad Pro was a complete redesign, and that included the Apple Pencil. The 2nd generation Apple Pencil charged and paired strictly by magnets. By simply placing a 2nd generation Apple Pencil on an iPad Pro, it would pair and begin charging.
The 2nd generation Apple Pencil is compatible with the following devices:
3rd to 6th generation 12.9-inch iPad Pro
1st to 4th generation 11-inch iPad Pro
4th and 5th generation iPad Air
6th generation iPad mini
Missing from this list is the 13-inch M4 iPad Pro. That is because there is a whole new Apple Pencil, the Apple Pencil Pro.
Compatibility
Initially, you might think that the 2nd generation Apple Pencil and the Apple Pencil Pro should be interchangeable given that they are both magnetic, but I can attest that the 2nd generation Apple Pencil cannot be used on the new 13-inch iPad Pro. The reason for this is the placement of the magnets.
The magnets within the Apple Pencil Pro have a different placement; specifically, the charging hardware needed to be updated in both the iPad and the Apple Pencil in order to accommodate the landscape camera.
The Apple Pencil Pro works the same as the 2nd generation Apple Pencil. Once you place the Apple Pencil Pro on the 13-inch iPad Pro, it will pair and begin charging. Let us turn to some other features of the Apple Pencil Pro, starting with Apple Pencil Hover.
Apple Pencil Hover
Since my previous iPad was the 5th generation 12.9-inch iPad Pro, I had not yet had a chance to try out Apple Pencil Hover, because that was exclusive to the 6th generation iPad Pro. Apple Pencil Hover is a feature where you can, as the name suggests, hover over an element on the screen and it will highlight the item.
As an example, you can use the Apple Pencil Pro to hover over any standard control and it will be highlighted, similar to how items are highlighted when you use keyboard navigation. In addition to standard controls, you can also hover over app icons. During my testing, it seemed to work for most elements, with one exception: it does not work when hovering over an individual note within the Notes app. It seems like an oversight for it to not work with Notes.
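For third-party developers, hover support generally comes through UIKit. Here is a minimal sketch of a custom view responding to pencil hover using UIHoverGestureRecognizer; the highlight styling is purely illustrative.

```swift
import UIKit

// A minimal sketch of responding to Apple Pencil hover in a custom view,
// using UIHoverGestureRecognizer (the same recognizer used for pointer hover).
class HoverHighlightView: UIView {

    override init(frame: CGRect) {
        super.init(frame: frame)
        let hover = UIHoverGestureRecognizer(target: self, action: #selector(handleHover(_:)))
        addGestureRecognizer(hover)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func handleHover(_ gesture: UIHoverGestureRecognizer) {
        switch gesture.state {
        case .began, .changed:
            // Highlight the view while the pencil (or pointer) hovers over it.
            layer.borderColor = UIColor.systemBlue.cgColor
            layer.borderWidth = 2
        default:
            // Remove the highlight once the hover ends.
            layer.borderWidth = 0
        }
    }
}
```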
Barrel Roll
The Apple Pencil Pro has a gyroscope within it. This means that when you are using an app like Notes, Freeform, or Pixelmator, you will be able to quickly change the angle of the brush that you are using by simply rotating the Apple Pencil Pro.
You might initially think, "What is the big deal? You can just adjust your grip and change the brush that way." Yes, you can; however, when you are drawing, being able to quickly and easily adjust the angle of the brush without needing to lift the tip provides a much more natural mechanism for drawing.
Third-party apps will need to add support for barrel roll for it to work, but it is something that they can add.
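As a rough sketch of what that might look like in a custom drawing view, an app could read the pencil's roll angle from incoming touches. The rollAngle property used here is my understanding of the UIKit addition for the Apple Pencil Pro, so treat the exact name as an assumption.

```swift
import UIKit

// A rough sketch of reading the Apple Pencil Pro's barrel-roll angle in a
// custom drawing view. UITouch.rollAngle is assumed to be the property
// exposed for the Apple Pencil Pro (iPadOS 17.5 and later).
class DrawingView: UIView {

    // The current brush angle, in radians, derived from the pencil's roll.
    private(set) var brushAngle: CGFloat = 0

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, touch.type == .pencil else { return }
        // Rotating the pencil barrel updates the angle of a shaped brush tip,
        // without the user having to lift the tip or change their grip.
        brushAngle = touch.rollAngle
        setNeedsDisplay()
    }
}
```

Barrel roll is not the only new feature; there is another gesture, squeeze.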
Squeeze Gesture
When you are holding a standard pencil you might be tempted to squeeze it. When you do this, nothing much will happen. However, with the Apple Pencil Pro, you will get a popup toolbar. This is the same toolbar that you can activate when you tap on the current item in the toolbar.
Much like the barrel roll, being able to quickly use the toolbar without needing to lose your place and focus is a big step forward particularly for those who like to use the iPad Pro for drawing.
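For developers, here is a similarly rough sketch of handling the squeeze gesture in a custom app. It uses UIPencilInteraction; the didReceiveSqueeze delegate callback shown is my best understanding of the UIKit API added alongside the Apple Pencil Pro, so the exact signature should be treated as an assumption.

```swift
import UIKit

// A rough sketch of responding to the Apple Pencil Pro squeeze gesture in a
// custom canvas view controller (iPadOS 17.5 and later).
@available(iOS 17.5, *)
class CanvasViewController: UIViewController, UIPencilInteractionDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()
        let pencilInteraction = UIPencilInteraction()
        pencilInteraction.delegate = self
        view.addInteraction(pencilInteraction)
    }

    // Called when the user squeezes the Apple Pencil Pro.
    func pencilInteraction(_ interaction: UIPencilInteraction,
                           didReceiveSqueeze squeeze: UIPencilInteraction.Squeeze) {
        // Show a tool palette near the pencil, mirroring the system behavior.
        showToolPalette()
    }

    private func showToolPalette() {
        // App-specific: present the brush/tool picker.
    }
}
```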
Haptic Feedback
Another new feature that is a nice touch is Haptic Feedback. Now, when you squeeze the Apple Pencil Pro you will get a bit of feedback that will confirm the gesture that you performed. This is helpful for when you may not be able to fully see that a gesture was successfully completed.
Find My Support
The last new feature of the Apple Pencil Pro is something that many have wanted for each of the previous Apple Pencils, and that is the ability to use Find My to locate an Apple Pencil. This is now possible with the Apple Pencil Pro.
Find My is NOT automatically enabled when you connect an Apple Pencil Pro. Much like other devices, you need to actually add it. To add an Apple Pencil Pro to Find My, use the following steps:
Open "Find My"
Tap on "Devices"
Tap on the "+" symbol in the upper right corner.
Tap on "Apple Pencil". A popup will appear.
Per the popup, attach your Apple Pencil Pro, if it is not already attached. Another popup will appear.
In the "Add to Find My" popup, tap on "Add Pencil".
A confirmation will appear, and your Apple Pencil Pro will be added to Find My.
Once added, you will be able to use Find My to locate the Apple Pencil. It will show its last location. If it is currently attached to your iPad Pro, it will show that it is attached. If it shows that it is attached to your iPad, then you will likely want to locate the iPad Pro in order to find your Apple Pencil Pro.
You cannot use Precision Finding with the Apple Pencil Pro, because it does not have a U1 chip, which is required for Precision Finding. Maybe that is something that a future Apple Pencil Pro can add. Even though the Apple Pencil Pro is not capable of Precision Finding, having basic Find My support is a significant upgrade and will bring you one step closer to finding a missing Apple Pencil.
Let us look at one other new accessory, the new Magic Keyboard.
Magic Keyboard
As outlined above, 2018 was a big year for the iPad Pro. The 12.9-inch iPad Pro received a big redesign, which included flat sides, the 2nd generation Apple Pencil, USB-C, and an updated Smart Connector. The Smart Connector was relocated, which allowed for a new accessory, the Magic Keyboard.
The Magic Keyboard is a combination keyboard and case that uses the magnets within the iPad and the Magic Keyboard to allow the iPad to be placed properly and to keep the camera aligned. The Magic Keyboard is not only a keyboard; it also has a Trackpad.
Design
The overall design remains the same in terms of having a fabric back to help protect the back of the iPad Pro, with the keyboard at the bottom. The material in the Magic Keyboard is now aluminum. This means that the keyboard is lighter than the previous model. It is not just lighter by a little bit; much like the iPad itself, the weight difference between the previous Magic Keyboard and the new one is quite noticeable.
The two previous Magic Keyboards were effectively the same, and the only change was the size of the hinge to accommodate the slightly larger 12.9-inch iPad Pro. Those previous Magic Keyboards had a piece of material that covered the barrel-shaped hinge.
The new Magic Keyboard for the 13-inch iPad Pro has a tubular hinge without any material around it, which gives it a bit more of an industrial look. The one downside to this change is that you can no longer rely on the hinge to provide a bit of friction against a surface like you could with the previous Magic Keyboard. This is a minor change, but one to be cognizant of. The tubular hinge also means that the orientation of the charging port on the Magic Keyboard has rotated 90 degrees, to be perpendicular to the orientation of the USB-C port on the 13-inch iPad Pro. This results in the USB-C cable being parallel to the keyboard while open. On the topic of the keyboard, let us look at that next.
Keyboard Layout
One of the most requested features for the Magic Keyboard was the addition of a row of function keys. This is now on the Magic Keyboard. Each of the icons in the function row is the same as on the MacBook Pro, with a slight tweak to the F3 key, where the icon is a grid instead of a masonry layout.
The function row even includes a dedicated Escape key. For those who want to use the Magic Keyboard with a terminal emulator, this is huge. If you have ever had to connect to a server and use Vim without an Escape key, you know how big this is. There is an alternate key combination of Command + Period, and this still works even with a dedicated function row, but muscle memory with a physical Escape key is just better.
One thing to note about the function row is that the keys are half-height. I am sure that some would prefer full-height keys, but I will gladly take half-height keys over no keys whatsoever.
Let us now switch to something a bit different with the Magic Keyboard, the possible positions of the iPad.
Positions
The previous Magic Keyboard was somewhat limited in the angles at which you could position the iPad. For instance, if you tried to have the entire back of the 12.9-inch iPad Pro against the back of the keyboard, the iPad would lean forward. Yes, it is possible to use it in this position, but it is a bit awkward.
Now, with the new Magic Keyboard, you are able to have the 13-inch iPad Pro at a 90-degree angle with the entire back against the back of the keyboard case. On the previous Magic Keyboard there was one way to have the iPad at a 90-degree angle: you would need to place the iPad so it was a bit above the top row of the keyboard. Here is what that would look like.
You can accomplish the same thing with the 13-inch iPad Pro, and the iPad would be in the same general position, but now that the keyboard includes a function row, the iPad sits right at the top of the function row. Putting the 13-inch iPad Pro in this position could be useful in situations where there might not be a lot of extra space, like on an airplane.
Being able to position the iPad in a variety of angles is great, particularly if you use the iPad in a variety of situations. There is one last feature of the Magic Keyboard that has seen some changes, and that is the Trackpad.
Trackpad
The Magic Keyboard for the iPad Pro has more than just a keyboard. It also includes a Trackpad. This is important if you want to use the cursor on the iPad. The Trackpad on the new Magic Keyboard is significantly larger than on the previous model.
When I saw the differences in the Trackpad sizes, it reminded me of when I went from my old 2008 black MacBook to my 2015 MacBook Pro. There was a significant change in the size of the Trackpad then, and this change feels very similar, right down to the click of the Trackpad.
On the Magic Keyboard for the 5th generation iPad Pro, if you remove the iPad, you can feel the Trackpad actually click, just like the 2008 MacBook did when powered off. However, if you remove the 13-inch iPad Pro from its Magic Keyboard and try to click the Trackpad, nothing happens. This is exactly the same behavior as on the 2015 MacBook Pro, whose trackpad does not move unless there is power.
I cannot say that I have noticed the size difference between the two Trackpads, at least not in my typical usage. If I played more games or used the Trackpad on a more consistent basis I might actually notice it. For me, the Trackpad is mostly used when I need to move to a position on the screen quickly, like when clicking a button. When I do this, I typically use my forefinger and not my thumb. But, with the new Trackpad size, this behavior may change, only time will tell on that.
Benchmarks
No review would be complete without at least some benchmarks. For this review, I have included every Apple Silicon device that I personally own; there are no Intel Macs on the list. The reason behind this decision is that by not including Intel machines, the comparison will be a bit more consistent and equitable. Plus, I do not have any Intel machines that can run macOS Sonoma.
Geekbench 6
Single Core / Multi-Core / GPU (Metal)
13-inch iPad Pro (M4, 2024): 3712 / 13180 / 53622
MacBook Pro (M2 Max, 2023): 2701 / 14778 / 123331
iPhone 15 Pro Max (A17 Pro, 2023): 2915 / 7019 / 27153
Mac mini (M1, 2020): 2405 / 8790 / 33714
Mac Studio (M1 Max, 2022): 2388 / 12418 / 95601
12.9-inch iPad Pro (M1, 2021): 2305 / 8398 / 33200
iPad mini 6th gen (A15, 2021): 2133 / 5371 / 19918
Given that I am upgrading from the M1 iPad Pro to the M4 iPad Pro, it makes sense to compare the two directly. When you compare my M1 12.9-inch iPad Pro to the new 13-inch iPad Pro, you will see there has been a 61% increase in single-core performance, a 57% increase in multi-core performance, and a 61.5% increase in GPU performance. This is a significant jump. It makes sense given that the process size has gone from 5 nanometers down to 3 nanometers, meaning more transistors in the same space.
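Since percentage jumps like these are easy to misread, here is a small sketch deriving the increases directly from the Geekbench 6 scores in the table above.

```swift
import Foundation

// Derive the percentage increases quoted above from the Geekbench 6 scores.
let m1Scores = (single: 2305.0, multi: 8398.0, gpu: 33200.0)   // 12.9-inch iPad Pro (M1)
let m4Scores = (single: 3712.0, multi: 13180.0, gpu: 53622.0)  // 13-inch iPad Pro (M4)

func percentIncrease(from old: Double, to new: Double) -> Double {
    (new - old) / old * 100
}

print(String(format: "Single-core: +%.1f%%", percentIncrease(from: m1Scores.single, to: m4Scores.single))) // ~61%
print(String(format: "Multi-core:  +%.1f%%", percentIncrease(from: m1Scores.multi, to: m4Scores.multi)))   // ~57%
print(String(format: "GPU (Metal): +%.1f%%", percentIncrease(from: m1Scores.gpu, to: m4Scores.gpu)))       // ~61.5%
```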
For the next test I ran Geekbench ML, which is designed to look at Machine Learning tasks. Apple positioned the M4 as "built for AI", but we will not know how well it delivers until there are features that can really take advantage of the processor. For now, we must rely on benchmarks, and below are the Geekbench ML results for each device, broken out by the processor used for Machine Learning.
Geekbench ML
CPU / GPU / Neural Engine
13-inch iPad Pro (M4, 2024): 4648 / 6773 / 9592
MacBook Pro (M2 Max, 2023): 3507 / 8049 / 9144
Mac Studio (M1 Max, 2022): 3003 / 6206 / 7809
12.9-inch iPad Pro (M1, 2021): 3018 / 3369 / 6907
Mac mini (M1, 2020): 3002 / 3538 / 6839
iPhone 15 Pro Max (A17 Pro, 2023): 4044 / 3678 / 6133
iPad mini 6th gen (A15, 2021): 3135 / 1933 / 4526
The ordering of these all makes sense; the newer devices have better scores than the older devices. The most stark difference is when you compare my 13-inch iPad Pro to my iPhone, where the GPU and Neural Engine scores are nowhere near each other. The GPU in the 13-inch iPad Pro is also nowhere near my M2 Max MacBook Pro, which makes sense, given that the M4 in the 13-inch iPad Pro is the base chip, whereas the MacBook Pro has an M2 Max, which is significantly more capable.
Again comparing the 12.9-inch M1 iPad Pro to the 13-inch M4 iPad Pro, there was a 54% increase in CPU processing, a roughly 101% increase in GPU processing (just over double), and a 39% increase in Neural Engine processing, based on the scores above. The CPU change is in line with the single-core benchmark and close to the multi-core benchmark, so this makes sense. However, the GPU on the M4 iPad Pro is a lot faster. It is even faster than my Mac Studio with the M1 Max, but still slower than the MacBook Pro with the M2 Max.
There is one last topic to cover, iPadOS.
iPadOS
Most modern hardware is not particularly useful without some sort of software, and for the iPad Pro, the software that powers the device is iPadOS. The iPad Pro hardware has always been pretty solid, and in recent years the hardware has consistently outstripped the software.
There are many who have been wanting the iPadOS software to match the capabilities of the device. Some have suggested that the iPad Pro should be able to virtualize macOS, which would provide an escape hatch for those who want to perform tasks that iPadOS is not currently capable of doing. I think for them it could be a good thing. I would definitely try it out, because I do find myself being less productive on an iPad than I am on a Mac.
Even if Apple does not allow virtualization of macOS on an iPad Pro, there are still a number of things that Apple could add that would not degrade the current experience for most iPad users, but would improve things for those who need Pro-level features. One example is the ability to record and stream from an iPad; currently, this is not possible due to limitations of iPadOS.
At one time I thought about trying to use the iPad as a primary device instead of using a MacBook. However, I could never use an iPad as I would a Mac.
Long-time iPad Pro user Federico Viticci has written an article about the shortcomings of iPadOS. This article compiles a long list of items going back almost a decade, and every single one of the items is worth reading.
Personally, there are two things that would make iPadOS even better and provide a bit more in the way of "Pro" features.
The first of these would be the ability to use Xcode directly on the iPad. Yes, Swift Playgrounds is available, but there is definitely something different about having Xcode itself. With Xcode on iPad, it would not need to have any simulators because you could just use the device itself.
The second would be additional background tasks, not just audio recording, but allowing true background tasks that would not be killed by simply switching away from the app. Yes, this might require extensive vetting by Apple and even special entitlements (permissions) for this to happen, but it could be a possibility should Apple opt to make it happen.
It is not that the 13-inch iPad Pro, particularly with 8GB of RAM in the 256GB or 512GB models, is incapable of handling "Pro" features; it absolutely could handle them. This is because the Mac mini, which has many "Pro" features, has the same base specs: 256GB of storage and 8GB of unified memory. Therefore, this definitely seems more like a choice than any technical limitation.
Many people have been saying for a long time that the issue with the iPad is not the hardware, and I completely agree. The hardware has not been a problem on the iPad; instead, the issue is software. We are just a few weeks away from Apple's Worldwide Developer Conference. I hope that we will see a significant update to iPadOS, one that goes beyond just making the iPad seem like a larger iPhone with a few extra bells and whistles. Only time will tell if this will actually be the case.
Even though all of the iPad hardware, including the accessories, is solid, there are some shortcomings with iPadOS. Next year will mark 10 years since the introduction of the iPad Pro, as well as 5 years since iPadOS became its own distinct operating system, separate from iOS. It is my thinking that if we do not see any significant, and I do mean substantially significant, change at WWDC 2025, then it might be time to just write off the iPad Pro as being anything except "more expensive" and a showcase for the latest technologies, because at that point, Apple will have made it abundantly clear that the iPad is not worth their time, and anybody trying to use it for actual productivity is fooling themselves.
Closing Thoughts
The 13-inch iPad Pro is a great upgrade, particularly from the M1 iPad Pro. The new M4 processor provides a great boost, including an even faster CPU, GPU, and Neural Engine. Some of that speed is due to the memory bandwidth being 20% higher, at 120 gigabytes per second. The 13-inch iPad Pro is also much thinner and therefore lighter; according to Apple, it is the thinnest product they have ever released.
Beyond being super thin and lighter, the 13-inch iPad Pro has an OLED display, specifically a Tandem OLED display. The two OLED panels allow for even richer colors and deeper blacks, as well as up to 1000 nits of brightness and up to 1600 nits of peak brightness for HDR content.
The 13-inch iPad Pro has its own set of accessories, including the Magic Keyboard. Much like the 13-inch iPad Pro, the updated Magic Keyboard is lighter and slightly redesigned. The redesign includes a function row, including an Escape key. This is great for those who rely on the Escape key. Even though the function row is half-height, it is still a great addition. The Magic Keyboard also comes with a larger Trackpad, one with haptic feedback, similar to the MacBook Pro, where the Trackpad only simulates a click but does not actually click.
The last accessory is the Apple Pencil Pro. It now includes a gyroscope that is used for the new barrel roll feature. Barrel roll, if implemented by developers, will allow you to change the angle of a brush, just as if it were a physical pencil. In addition to barrel roll, you can now use a squeeze gesture to bring up the toolbar, right where your Apple Pencil Pro is. Once it shows, you can easily switch tools, change colors, access the eraser, or use any other option within the toolbar, all without needing to leave your current location or look away. You also do not need to worry about losing your Apple Pencil Pro, because it can be added to your "Find My" devices. Therefore, if you do manage to misplace it, you will at least know where it was last located.
If you are looking to purchase an iPad Pro, the 13-inch iPad Pro is a solid option. It is worthwhile keeping in mind that it is never a good idea to purchase something with the expectation that it could do something more in the future, because there is no guarantee of what it will be able to do. Instead, buy the iPad Pro for what it is capable of now.
Sources: There are a couple of sources for some of the processor information.
Today Apple has sent out invitations to an event happening on May 7th, 2024, titled "Let Loose". The event will be available on Apple.com and via the Apple TV app at 7 a.m. Pacific Time on Tuesday, May 7th. The graphic for the event depicts a hand holding an Apple Pencil; therefore, it is expected that this will be the anticipated iPad-focused event.
You can watch the event on the Apple Events page or via the Apple TV app.
As with most events, I will post my predictions sometime prior to the event.
Today Apple announced that WWDC 24 will take place from June 10th to June 14th.
The format will be the same as the past few years, in that there will be an in-person experience for a limited group of developers and the conference will be available to stream online for everyone. You will be able to stream the videos online at developer.apple.com or via the Apple Developer App.
Should you wish to attend in person, you have a short amount of time to apply.
As part of their effort to help the next generation of developers, Apple will be announcing the winners of the Swift Student Challenge on Thursday, March 28th, 2024. These winners will be eligible to attend the keynote in-person. Along with them, 50 Distinguished Winners will be invited to the Apple Campus for a three-day experience. You can read more about the criteria for these on the Apple Developer website.