Technology is constantly entertaining new crazes: blockchain, subscription juicers, netbooks, 3D televisions, hyperloop, and "hoverboards", just to name a handful. All of these were going to be "the next big thing", but none have panned out as their inventors intended.
There is a term bandied about that some think may be the end-all for computers: "Artificial Intelligence", or "AI". The term "AI" can mean a variety of different things, depending on whom you ask. However, when most people use the term, what they are expecting is a fully conscious and sentient entity that can think, act, and reason as a human would. That is properly called "Artificial General Intelligence", and today's technology is nowhere close to making it a reality. It is not yet known whether artificial intelligence will ever live up to these ultimate expectations.
Apple is not known for jumping on bandwagons or being the first to create new categories of technology; they typically leave that to others. However, if there is a technology that they can put their own spin on, they might do so. At their Worldwide Developers Conference in 2024, they introduced one of these technologies, called "Apple Intelligence".
Apple Intelligence is not a single item; in fact, it goes against the grain of other AI assistants in that it works on your own data. Apple Intelligence consists of a variety of tools, each designed to help you accomplish a specific task. When it was introduced, Apple indicated that the initial features would be released over the course of the iOS/iPadOS 18 and macOS Sequoia release cycles.
The features that comprise Apple Intelligence include Writing Tools, Image Generation, and Personalized Requests. Initially, Apple wanted the first items available with iOS 18; however, during the beta, Apple realized that the features would not be far enough along for the initial iOS/iPadOS 18.0 and macOS Sequoia 15.0 release, so they were pushed to iOS/iPadOS 18.1 and macOS Sequoia 15.1.
Not every device that can run iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1 is able to support Apple Intelligence. To run Apple Intelligence, you need one of the following devices:
iPhone 16/16 Plus (A18)
iPhone 16 Pro/Pro Max (A18 Pro)
iPhone 15 Pro/Pro Max (A17 Pro)
iPad mini (A17 Pro or later)
iPad Air (M1 or later)
iPad Pro (M1 or later)
Apple Silicon Mac (M1 or later)
The reason these devices are the minimum is a combination of needing 8GB of memory and a Neural Engine.
This article is part of an ongoing series that covers the features of Apple Intelligence as they become available. This article focuses on the Apple Intelligence feature called "Summarization".
Summarization
Communication is an important part of human society, and as humans, we have become quite adept at creating ways to communicate. There are effectively two types of communication: asynchronous and synchronous, or real-time. Asynchronous communications include newspapers, magazines, and, for something more modern, email and even social media. Real-time communications include things like text messages, iMessage, WhatsApp, and Google Chat, just to name a handful.
There are also communications that are more informational and more than likely one-way. The prime example of this is notifications from an app. This can be a notification about an email, a new podcast episode, or even just a new post from one of your friends.
With the amount of text that everyone comes across each day, it can easily become overwhelming. For notifications, you can disable all notifications for an app within the Settings app on iOS and iPadOS, or System Settings on macOS, but depending on your needs, this is not always a viable solution.
There are a number of areas where you can get summaries. This includes notifications and email. Let us start with notifications.
Summarizing Notifications
Sometimes, it would be great to get a brief synopsis of the notifications that you have received. With Apple Intelligence, you can. Below is a summarized Ivory notification of a post from my friend, Barry:
"Sequoia and Time Machine backups issues, one SSD stopped working, the other slow."
Here is the original text:
"Have you had any issues with Sequoia and Time Machine backups? I have two SSD's that used to alternate backups but one has stopped working and the other takes forever to run the "cleaning up" portion of the backup at the end."
This is a pretty good summary of the original text. When I saw this message, I immediately tapped to see the entire message. This is not the only example of summarization. Here is another example from Overcast:
"No episode today; return on Friday, October 10th; Google's Play Store remedies discussed."
The way that this seems to work is by summarizing the titles of the podcast episodes. In most cases, this might be okay, but this summary is missing some key details, in particular, which podcast does not have an episode today. Later in the day, after additional episodes were downloaded, this was the summary:
"Stratechery discusses Google's Play Store remedies; Rebound Prime episode bootleg available"
As you could have surmised, this is a much better summary of the notifications that I received for the various podcasts I subscribe to.
Now, it should be noted that this is with iOS 18.1, which means that developers do not have access to any sort of application programming interface, or API, for suggesting anything to Apple Intelligence, so this is strictly what Apple's own models think is the proper summary.
Another tidbit to note is that each app will be summarized on its own. Therefore, you will get a different summary for your iMessage conversations, Instagram posts, and Overcast podcast notifications. That is not the only summarization that you can get; you can also get summaries of emails.
Email Summaries
Everyone has received a rather long email and may want a short summary of it. Mail on iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1 will handle this for you automatically. When you view your list of emails, you will see a summary directly below the sender and subject line.
While each email is automatically summarized, you can also get a longer summary within the email message. The way that you can do this is by using the following steps:
Open Mail.
Locate the email message that you would like to summarize.
Scroll to the top of the email message.
Click on the "Summarize" button.
Once you click on this, Apple Intelligence will then analyze the email message and then provide a summary directly above the email.
Here are three different summaries of Justin Robert Young's "Free Political Newsletter."
From September 30th, 2024: "The article discusses the possibility of an October Surprise in the upcoming election, categorizing potential surprises into four types: policy surprises, opposition dumps, acts of God, and legal surprises. It also highlights James Carville's opinion that swing states are likely to move as a block, rather than splitting evenly."
From October 4th, 2024: "The article discusses the possibility of an October Surprise in the upcoming election, categorizing potential surprises into four types: policy surprises, opposition dumps, acts of God, and legal surprises. It also highlights James Carville's opinion that swing states are likely to move as a block, rather than splitting evenly."
From October 7th, 2024: "Democratic ads focus on healthcare and portray Kamala Harris as caring, while Republican ads portray her as frivolous and unserious. The GOP Senate map is favorable, but the party may not have the funds to play in all the states they could win."
All of these are decent summaries of the email messages. As you might suspect, you can only summarize a single email message at a time. You cannot summarize multiple emails at once, which makes sense because they could cover a variety of different topics. The items above were decent examples, but not all emails lend themselves to summarization. Here is what each of Audible's Daily Deal emails results in:
"Today's Daily Deal is $2.99 and ends at 11:59 PM PT. Offer is not transferable, cannot be combined with other offers, and sale titles are not eligible for return."
Now, honestly, these summaries are completely useless because the title of the deal is never displayed. The reason is that Audible's emails never include the title in the message itself; that data is not shown until the remote content is downloaded.
To Preview or Not to Preview
Mail provides you with the ability to control whether each message preview is summarized. By default, this feature is enabled, but you can change it if you do not want summarized previews. The method by which you accomplish this depends on the operating system. You can use the steps below to change the setting.
On macOS
Open the Mail app.
Click on the "Mail" menu item.
Click on Settings.
Click on the "Viewing" tab.
Uncheck "Summarize Message Previews".
On iOS/iPadOS
Open Settings.
Scroll down to "Apps".
Tap on Apps to open up the apps list.
Scroll down to, or search for, Mail.
Tap on Mail to open its settings.
Under Message List, tap the toggle for "Summarize Message Previews".
These are pretty straightforward steps to change whether Mail summarizes message previews within the message list. This is not the only Apple Intelligence item related to Mail. Mail has a couple of other features, including smart replies and priority messages. Let us look at both, starting with Smart Replies.
Smart Replies in Mail
When you receive an email, you may want to write a reply, but may not always be able to come up with the right words. It could be helpful to have an appropriate reply generated for you. This is possible with a new feature called "Smart Replies". Smart Replies are designed to create a reply to an email on your behalf. This is done by looking for any questions within the email and then generating a contextual response.
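Apple has not documented how Smart Replies works internally, so purely as a toy illustration of the described approach (find the question, then offer both polarities), here is roughly what that could look like in Swift. The function name and the list of question openers are my own inventions, not anything from Mail:

```swift
import Foundation

// Toy sketch: find a question in an email and, if it reads like a yes/no
// question, offer both polarities, similar in spirit to the "Yes"/"No"
// suggestions described above. Purely illustrative, not Apple's model.
func suggestReplies(for emailBody: String) -> [String] {
    // Break the body into rough sentences and keep the ones ending in "?".
    let sentences = emailBody
        .components(separatedBy: .newlines)
        .flatMap { $0.components(separatedBy: ". ") }
    let questions = sentences
        .map { $0.trimmingCharacters(in: .whitespaces) }
        .filter { $0.hasSuffix("?") }

    guard let question = questions.first else { return [] }

    // Yes/no questions in English usually open with an auxiliary verb.
    let yesNoOpeners = ["is", "are", "can", "could", "will", "would",
                        "do", "does", "did", "should"]
    let firstWord = question.lowercased()
        .components(separatedBy: " ")
        .first ?? ""

    return yesNoOpeners.contains(firstWord) ? ["Yes", "No"] : []
}

// Mirrors the episode-title example below.
print(suggestReplies(for: "Is it too early for a Chicken Big Mac?"))
// ["Yes", "No"]
```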
As an example, I looked at an email that I got from Patreon for an episode of "The Morning Stream" with Scott Johnson and Brian Ibbott. Live listeners generate possible titles during the show, and sometimes topics can also generate titles. Within this particular episode, one of the titles was "Is it too early for a Chicken Big Mac?". The Mail app on iOS provided two possible responses within the QuickType bar, "Yes" and "No". If I tapped on one of these, it would provide an appropriate response.
For " Yes", it was "Yes, it is too early for a Chicken Big Mac. I'll have to wait until later in the day to enjoy one." For "No", it created "No, it's never too early for a Chicken Big Mac." For any TMS listeners, the answer is always "No, it's never too early for a Chicken Big Mac". This is just one example of how it might be used. Here is another example.
Recently, I went to a book signing for John Scalzi's Starter Villain at my local bookstore. I received the confirmation email for the event, and Mail provided two options for replying.
The first option was "I'll be there", and the generated response was "I'll be there tonight. I'm looking forward to meeting John Scalzi and getting my book signed." The second option was "Can't make it", and the generated response for this was "Hi, Unfortunately, I won't be able to make it to the event tonight. Thanks…"
Both of these are appropriate, and for the "I'll be there" option, it absolutely took contextual clues from the email to provide an appropriate response. Obviously, your mileage will vary given that each email is different. I tested a bunch of emails, and some did not provide any smart reply options, so you may not always see suggestions. There is one last feature: Priority emails.
Priority Messages
A lot of people receive a tremendous amount of email in the course of a day. I am not one of those people. The emails that I receive are generally just informational, like those from Patreon, bills, or even newsletters. It is not often that I get a personal email. However, there are those who get a lot of email, and for these individuals, it might be crucial to see the most important emails first. Now, with iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, this is a feature that you can utilize.
Much like Smart Replies and Summarization, Priority Messages is enabled by default, including on the "All Inboxes" mailbox if you have more than one mail account configured. You can turn it off, or back on, for each inbox by performing the following steps:
Open the Mail app.
Click on the inbox you want to configure for Priority.
Click on the "…" icon in the upper right corner.
Uncheck "Show Priority".
If you have Priority Messages enabled, Mail will attempt to bring the most important messages to the top of your inbox. This is useful for making sure that you see the items that you really need to see. Now, it should be noted that this is not Mail Categorization. That is not available in iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1; Mail Categorization will arrive in a future update.
Closing Thoughts on Summarization and Mail
You can easily get a quick summary of notifications. This could be a series of messages from a group chat, notifications about new podcast episodes, or even a notification about a friend's new post. Each summary is grouped by app, and these summaries will be updated as new notifications come in. But these are not the only summaries that you can receive. Mail will automatically provide a summary of each email for you. These summaries are shown below the sender and subject line and are typically only a line long. If you want a slightly longer summary, you can get one by clicking on the "Summarize" button above the email.
Mail will automatically organize your emails to show "Priority Messages". Priority Messages are those messages that Mail thinks are the most important to you. While it is enabled by default, you can configure this behavior on a per-inbox basis.
Be sure to check out all of the other articles in the series.
Today Apple has unveiled a new Mac mini powered by the M4. This is not just a spec bump; it includes a couple of new features, chief amongst them a new form factor.
Form Factor
The Mac mini was introduced in 2005 as a smaller version of the Mac, hence the name. The original Mac mini was 6.5 inches wide, 6.5 inches deep, and 2 inches tall. This remained the form factor until 2011, when a new Unibody version was introduced, one that eliminated the internal disc drive. That Mac mini was physically larger at 7.7 inches wide, 7.7 inches deep, and only 1.4 inches tall. All Mac minis introduced since 2011 have had the exact same physical footprint, including the M1 and M2 Mac minis. This all changes with the M4.
In 2022, Apple introduced a whole new machine, the Mac Studio. It took some of the design elements of the Mac mini but expanded them. The M1 and M2 Mac Studios were 7.7 inches wide and 7.7 inches deep, but significantly taller at 3.7 inches.
The M4 Mac mini takes some design cues from the Apple TV. It is 5 inches wide, 5 inches deep, and only 2 inches tall. This means that it is smaller than the previous Mac mini, but still a bit larger than an Apple TV. Before we dive into the ports, let us look at the processor.
M4 and M4 Pro
The Mac mini has come with a variety of processors. The previous Mac mini was available in M2 and M2 Pro variants, and the same continues for the M4 Mac mini with the M4 and M4 Pro. The M4 consists of a 10-core CPU, with 4 performance cores and 6 efficiency cores, and a 10-core GPU. According to Apple, the M4 Mac mini is significantly faster than the M1 Mac mini. Specifically,
When compared to the Mac mini with M1, Mac mini with M4:
- Performs spreadsheet calculations up to 1.7x faster in Microsoft Excel.
- Transcribes with on-device AI speech-to-text up to 2x faster in MacWhisper.
- Merges panoramic images up to 4.9x faster in Adobe Lightroom Classic.
The M4 Pro has two configurations: a 12-core CPU with 8 performance cores and 4 efficiency cores, paired with a 16-core GPU; and a 14-core CPU with 10 performance cores and 4 efficiency cores, paired with a 20-core GPU. From Apple's press release:
When compared to the Mac mini with M2 Pro, Mac mini with M4 Pro:
- Applies up to 1.8x more audio effect plugins in a Logic Pro project.
- Renders motion graphics to RAM up to 2x faster in Motion.
- Completes 3D renders up to 2.9x faster in Blender.
All M4 and M4 Pro models have a 16-core Neural Engine for machine learning and Apple Intelligence tasks.
Ports
The M4 Mac mini has a total of 7 ports: an Ethernet jack, an HDMI port, and 5 USB-C ports. Two of these are on the front, much like the Mac Studio, and three are on the back. The two on the front are USB-C with USB 3 speeds of up to 10 gigabits per second. The three ports on the back are Thunderbolt/USB 4 ports. On the M4 models, these are Thunderbolt 4 ports, which can deliver data at up to 40 gigabits per second. On the M4 Pro models, they are Thunderbolt 5 ports, which can deliver a whopping 120 gigabits per second; the USB portion can deliver up to 40 gigabits per second.
The difference in Thunderbolt ports does mean that there is a difference in DisplayPort compatibility. The Thunderbolt 4 ports support DisplayPort 1.4 while the Thunderbolt 5 ports support DisplayPort 2.1. The HDMI port on either model can support one display with 8K resolution at 60Hz, or 4K resolution at 240Hz.
By default, the Ethernet port is a gigabit port, but you can opt for a 10-gigabit port for $100 more. The Mac mini has long had a headphone jack, and it is still present on all models of the M4 Mac mini.
Pricing and Availability
The M4 Mac mini starts at $599 for 16GB of unified memory and 256GB of storage. You can configure the M4 models with 24GB or 32GB of memory, and up to 2TB of storage.
The M4 Pro Mac mini starts at $1399 for a 12-core CPU and 16-core GPU, 24GB of unified memory, and 512GB of storage. You can configure the M4 Pro Mac mini with 48GB or 64GB of unified memory, and 1TB, 2TB, 4TB, or 8TB of storage.
The M4 Mac mini is available for pre-order today and will be available for delivery and in stores on Friday, November 8th.
Closing Thoughts
While other devices have received redesigns specifically for the lower power usage of Apple Silicon, the Mac mini was not one of them, until now. The Mac mini has finally received its redesign, and the smaller form factor takes cues from both the Mac Studio and the Apple TV. The M4 and M4 Pro should be great upgrades for anyone who has an Intel Mac, and even if you are upgrading from an M1, it will still be a solid update.
This article is part of an ongoing series that covers the features of Apple Intelligence as they become available. This article focuses on the Apple Intelligence feature called "Typing with Siri".
Siri
Siri is Apple's personal assistant. Back in 2010, Apple acquired a voice assistant called Siri, and in 2011, with the release of iOS 5 and the iPhone 4S, Siri became integrated into the operating system. Once integrated, Siri could perform a few more actions, and over time you have been able to do even more with Siri, like getting information about the weather, asking who was in a particular movie, or even getting the latest sports scores.
Siri has expanded to more than just the iPhone and the Mac. You can use Siri on your Apple Watch and Apple TV, as well as on the HomePod. In order to use Siri on these devices, you can either hold down a particular button or use the phrase "Hey Siri" to activate it. Last year, in 2023, with the release of iOS 17, iPadOS 17, and macOS Sonoma, Apple provided the ability to use just the word "Siri" instead of "Hey Siri". This was a boon, but voice is not the only way you might want to interact with Siri.
Type to Siri
One of the limitations of Siri is that you need to use your voice to use Siri. This may work in a variety of situations, like while at home, while driving, or even in any area where you are alone. However, you may not want to use voice interactions but still may want to use Siri. There is a new way of using Siri, by typing to it.
The way that you use "Type to Siri" differs depending on the operating system. On iPhone and iPad, you simply double-tap the home indicator to bring it up; if you have a keyboard connected to your iPad, you can also use the keyboard combination Globe + S.
It is different on macOS, where the default keyboard shortcut is to press either of the "Command" keys twice, and the feature is not enabled by default. Before you can type to Siri, you will need to enable it. On macOS, this can be done by using the following steps:
Open System Settings.
Click on "Apple Intelligence & Siri" to bring up the Apple Intelligence & Siri settings.
Enable the "Siri" toggle.
Once enabled, you can press either of the Command keys twice in a row. However, you may want to have the same key combination as on iOS and iPadOS. This can be done by selecting the appropriate "Keyboard Shortcut" option within the Apple Intelligence & Siri settings. The system options are:
Globe + S
Press Left Command Key Twice
Press Right Command Key Twice
Press Either Command Key Twice
Custom
If you select "Custom", you will need to enter in the keyboard combination that you want to use. It is best to avoid any existing system key combinations, otherwise you might become confused. Now, let us look at actually using Type to Siri.
Using Type to Siri
Once you bring up Type to Siri, you will have a text box where you can enter your request. After tapping the "send" button or hitting the Enter key, your request will be sent to Siri. Instead of the result being spoken out loud, it will be shown on the screen. As you type, Siri will provide suggestions for items that you may want to do.
Suggested Actions
As an example, if you start typing "Create", you may get something like "Create a new note". If you type "Play", you may get suggestions for playing certain music playlists. For me, it was "Play New Music - 2024/09", "Play Heavy Rotation playlist", and "Play Guilty as Sin? by Taylor Swift". Each of these is a playlist, or song, that I have been playing a lot lately.
The suggestions I got are from my iPhone. When I tried the same thing on my MacBook Pro I got "Open Playgrounds", "Play the news", and "Play some music". Similarly, on my iPad Pro I got "Play my voicemail", "Play my Audiobook", and "Open Playgrounds".
The different responses make complete sense because the processing happens locally and is contextual to what you do on that device. Because I do not play music on my iPad Pro, Siri did not suggest that as an option. To be honest, I am a bit confused as to why it would suggest "Play my voicemail" when there is no Phone app on the iPad.
Results
Just as when you use your voice with Siri, you can perform more than just the suggested actions. You can type the same requests that you would normally say. My go-to example is asking the tongue twister "How much wood would a woodchuck chuck, if a woodchuck could chuck wood?". Siri naturally responded with:
About as much ground as a groundhog could hog if a groundhog could hog ground.
How about another tongue twister?
These are just a couple of examples of what you can do when you type to Siri. This may not seem like a big deal, but being able to use your keyboard with Siri is a huge shift in how and when you might use Siri. You are no longer required to use your voice, which means that this can be used in almost ANY situation, which is something that many have wanted since Siri was introduced.
Closing Thoughts on Siri
Now, you do not need to be self-conscious about using Siri in public, because you do not need to say anything; you can simply type your request and have Siri show you the results. When you make a request, suggestions will be shown as you type, and once you send it, Siri will provide the answer.
Siri will be getting even more features later, but this is the current new feature for Siri, at least as of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1.
This post is just one in a series about Apple Intelligence. There will be more articles in this series, so be sure to check out those articles.
Today Apple unveiled a new iMac powered by the M4. While it might seem like a small update from the M3, there are a number of improvements, including the processor, ports, and colors, just to name a few.
M4
The 24-inch iMac is powered by the M4 chip. It comes in two processor configurations: an 8-core CPU with 8-core GPU, and a 10-core CPU with 10-core GPU. According to Apple, the M4 iMac is up to 1.7x faster for daily productivity and up to 2.1x faster for graphics editing and gaming, at least when compared to the M1 iMac.
Display
The size of the iMac has not changed, but there is a new option: a nano-texture display. This is similar to the display option on the iPads and the Apple Studio Display. It costs $200 more, and it is only available on the 10-core CPU models.
Beyond this, there is a new 12-megapixel Center Stage camera. This should provide even better quality, and the camera is capable of providing Desk View, the ability to show your desk while in a video call, which the previous iMac could not do.
Colors
The 24-inch iMac has always come in a variety of colors, and the available colors have been updated. There are seven options:
Silver
Blue
Purple
Pink
Orange
Yellow
Green
Unlike the previous model, all of the colors are available with any processor choice. There is a difference depending on the model, though, and that is the ports. To go with the new colors are new color-matched accessories, including the Magic Keyboard with Touch ID, Magic Trackpad, and Magic Mouse. These all now have USB-C, instead of the previous Lightning. Beyond the port change, the design and port locations have not changed at all.
Ports and Connectivity
Depending on the processor, you will get either two or four ports. The 8-core CPU model has two Thunderbolt/USB 4 ports, while the 10-core CPU models have four Thunderbolt 4 ports. All of the iMacs have Wi-Fi 6E and Bluetooth 5.3. The four Thunderbolt 4 ports mean that you can have up to two 6K external displays, an improvement over the M3 model, which only supported one external 6K monitor.
Pricing
There are four different starting configurations available:
8-core CPU with 8-core GPU, 16GB of unified memory, and 256GB of storage - $1299
10-core CPU with 10-core GPU, 16GB of unified memory, and 256GB of storage - $1499
10-core CPU with 10-core GPU, 16GB of unified memory, and 512GB of storage - $1699
10-core CPU with 10-core GPU, 24GB of unified memory, and 512GB of storage - $1899
You can configure the 10-core models with up to 32GB of unified memory and up to 2TB of storage. The 10-core models also come with Ethernet, whereas the 8-core model is Wi-Fi only, but you can add Ethernet to that model for $30.
Closing Thoughts
You can pre-order the new iMac today, and it will be available starting on Friday, November 8th. If you are looking for a new iMac, now is the time to upgrade, particularly if you have an Intel machine or want to upgrade from an M1 iMac.
The term "Artificial Intelligence" can garner a number of thoughts, and depending on who you ask, these can range from intrigue, worry, elation, or even skepticism. Humans have long wanted to create a machine that can think like a human, and this has been depicted in media for a long time. Frankenstein is an example where a machine is made into a human and then is able to come to life . Another great example is Rosie from the 1960s cartoon The Jetsons. In case you are not aware, The Jetsons is a fictional animated tv show that depicts the far future where there are flying cars, and one of the characters, Rosie, is an robot that can perform many household tasks, like cleaning and cooking.
We, as a society, have come a long way toward creating modern "artificial intelligence", but we are still nowhere close to creating a robot that approaches being human. Today's artificial intelligence falls into a number of categories in terms of its capabilities, but it is still a long way off from the idealistic depiction that many expect.
Artificial intelligence comes in a variety of forms, including automated cleaning robots, automated driving, text generation, image generation, and even code completion. Many companies are attempting to create mainstream artificial intelligence, but none has definitively succeeded.
Apple is one of those companies, but they are taking a different approach with Apple Intelligence, their take on artificial intelligence. Apple Intelligence differs in a number of ways from standard "artificial intelligence", including its use of on-device models, Private Cloud Compute, and personal context. Before we delve into each of those, let us look at artificial intelligence, including a bit of history.
Artificial Intelligence
Artificial intelligence is not a new concept. You may think of it as a modern thing, but it harkens back to World War II and Alan Turing. Turing is known for creating a machine that could crack the German Enigma codes. In 1950, Turing published a paper that became the basis of what is known as the "Turing Test", a test of whether a machine can exhibit intelligent behavior indistinguishable from that of a human.
There have been a number of enhancements to artificial intelligence in recent years, and many of the concepts that have been used for a while have come into more common usage. Before we dive into some aspects of artificial intelligence, let us look at how humans learn.
How Human Brains Operate
In order to attempt to recreate the human brain in a robot, we first need to understand how a human brain works. While we have progressed significantly, we are still extremely far from fully understanding how a human brain functions, let alone being able to recreate one.
Even though we do not know everything about the brain, there is quite a bit of information that we do know. Human brains are great at spotting patterns, and the way that this is done is by taking in large amounts of data, parsing that data, and then identifying a pattern. A great example of this is when people look at clouds. Clouds come in a variety of shapes and sizes, and many people attempt to find recognizable objects within the clouds. Someone is able to accomplish this by taking their existing knowledge, looking at the cloud, determining if there is a pattern, and if there is one, identifying the object.
When a human brain is attempting to identify an object, what it is doing is going through all of the objects (animals, plants, people, shapes, etc.) that they are aware of, quickly filtering them, and seeing if there is a match.
The human brain is a giant set of chemical and electrical synapses that connect to produce consciousness. The brain is commonly called a neural network due to its web of neural pathways. According to researchers, when humans update their knowledge, what is technically happening is that the weights of the synaptic connections that form this neural network are updated. As we go through life, our previous experiences shape our approach to things, and they can also affect how we feel about things in a given moment, again based upon those previous experiences.
This approach is similar to how artificial intelligence operates. Let us look at that next.
How Artificial Intelligence Works
The current way that artificial intelligence works is by allowing you to specify an input, or prompt, and having the model create an output. The output can be text, images, speech, or even just a decision. All artificial intelligence is based on what is called a Neural Network.
A Neural Network is a machine learning algorithm that is designed to make a decision. The manner in which this is done is by processing data through various nodes. Nodes generally belong to a single layer, and for each neural network, there are at least two layers: an input layer and an output layer.
Each node within a neural network is composed of three things: weights, thresholds (also called a bias), and an output. Data goes into the node, the weights and threshold are applied, and an output is produced. A node's ability to actually come to a determination is based on training, or what a human might call knowledge.
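To make that concrete, here is a minimal sketch of a single node in Swift: weighted inputs, a bias, and an activation that squashes the sum into an output. The numbers are arbitrary; a real network chains thousands or millions of these together.

```swift
import Foundation

// One node: apply weights and a bias to the inputs, then an activation.
func nodeOutput(inputs: [Double], weights: [Double], bias: Double) -> Double {
    precondition(inputs.count == weights.count, "one weight per input")
    // Weighted sum of the inputs, shifted by the bias (threshold).
    let sum = zip(inputs, weights).map(*).reduce(0, +) + bias
    // Sigmoid activation squashes the sum into the range 0...1.
    return 1.0 / (1.0 + exp(-sum))
}

// Two inputs flowing into a single node.
print(nodeOutput(inputs: [0.5, 0.8], weights: [0.9, -0.4], bias: 0.1))
// roughly 0.557
```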
Training
Humans have a variety of ways of learning, including from family, friends, media, books, TV shows, audio, and just exploring. Neural networks cannot be trained this way. Instead, neural networks need to be given a ton of data in order to be able to learn.
Each node within a neural network produces an output and sends it to another node, which produces its own output, and the process continues until a result is reached. Each time a result is reached, it is marked as a positive or negative correlation. Much like with a human, the more positive connections that are made, the better; eventually, the positive correlations between an answer and the result will push out the negative connections. Once it has made enough positive correlations (gotten the right answer enough times), it will eventually be trained.
There are actually two types of training: Supervised Learning and Reinforcement Learning.
Supervised learning is the idea of feeding data to a model so that it can learn the rules and provide the proper output. Typically, this is done using one of two methods: classification or regression. Classification is pretty simple to understand. Let us say that you have 1000 pictures: 500 dogs and 500 cats. You provide the training model with each photo individually, and you tell it the type of pet in each image.
Reinforcement learning is similar, but different. In this scenario, let us say you have the same 1000 pictures, again 500 dogs and 500 cats. Instead of telling the model what is what, you let it determine the similarities between the items and give it feedback on its guesses; as it continues to get them right, that feedback reinforces what it already knows.
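Here is a toy supervised-learning sketch in Swift that shows the mechanics described above: a single node is shown the label for each example, and each pass nudges the weights so that correct answers are reinforced. The feature values are made up purely for illustration.

```swift
import Foundation

struct Example {
    let features: [Double]
    let label: Double  // 0 = cat, 1 = dog (labels provided by the trainer)
}

// A tiny stand-in for the "1000 pictures" of labeled cats and dogs.
let trainingSet = [
    Example(features: [0.2, 0.9], label: 0),
    Example(features: [0.3, 0.8], label: 0),
    Example(features: [0.9, 0.2], label: 1),
    Example(features: [0.8, 0.1], label: 1),
]

var weights = [0.0, 0.0]
var bias = 0.0
let learningRate = 0.5

for _ in 0..<1000 {
    for example in trainingSet {
        // Forward pass through one node.
        let sum = zip(example.features, weights).map(*).reduce(0, +) + bias
        let prediction = 1.0 / (1.0 + exp(-sum))
        // The error nudges the weights toward the provided label,
        // strengthening the positive correlations over many passes.
        let error = example.label - prediction
        for i in weights.indices {
            weights[i] += learningRate * error * example.features[i]
        }
        bias += learningRate * error
    }
}

// A new, dog-like example should now score close to 1.
let score = zip([0.85, 0.15], weights).map(*).reduce(0, +) + bias
print(1.0 / (1.0 + exp(-score)))
```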
Inference
Inference, in reference to artificial intelligence, is the process of applying a trained model to a set of data. The best way to test a model is to provide it with brand-new data and see whether it can infer the correct result.
Inference works by taking the new data as input and applying the weights, also known as parameters, that are stored in the model to that data.
Inference is not free; it has a cost, particularly when it comes to energy usage. This is where optimizations can be useful. As an example, Apple utilizes the Neural Engine as much as possible for its on-device inference, because the Neural Engine is optimized to perform inference tasks while minimizing the amount of energy needed.
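On Apple platforms, the usual route for this kind of on-device inference is Core ML, which dispatches work to the Neural Engine where it can. Here is a minimal sketch; the model file and feature names are hypothetical placeholders, and this is not the (non-public) machinery behind Apple Intelligence itself:

```swift
import CoreML
import Foundation

// Hypothetical compiled model; its learned weights live inside the file.
let modelURL = URL(fileURLWithPath: "PetClassifier.mlmodelc")

do {
    let model = try MLModel(contentsOf: modelURL)

    // Package brand-new data as the features the model expects.
    let input = try MLDictionaryFeatureProvider(
        dictionary: ["weightKg": 4.2, "earLengthCm": 6.5]
    )

    // Inference: apply the stored parameters to the new data.
    let result = try model.prediction(from: input)
    print(result.featureValue(for: "label") ?? "no result")
} catch {
    print("Inference failed: \(error)")
}
```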
Artificial Intelligence Use Cases
No tool is inherently good or bad; the tool is just a tool. How it is used determines whether a use is positive or negative, and artificial intelligence is no different. Artificial intelligence has a wide range of possible use cases. Current artificial intelligence is capable of helping detect cancer, synthesize new drugs, detect brain signals in amputees, and much more. These are all health-related, which is where many artificial intelligence models are thriving at the moment, but that is not all that is possible.
Not all artificial intelligence usage is positive. There are many who want to make what are called "deep fakes". A deep fake is a way of taking someone and either placing them in a situation where they never were, or even making them say something that they never said. This is not new, not by a long shot; since the inception of photography, there have been manipulations designed to influence people into thinking a particular way. As you might guess, this can have detrimental effects because it distorts reality. While there are those who use these techniques for nefarious purposes, there can be positive use cases for this type of technology.
Back in 2013, country music artist Randy Travis suffered a stroke and, as a result, now suffers from aphasia, which, according to the Mayo Clinic, is "a disorder that affects how you communicate." This effectively left him unable to perform. However, in May of 2024, a brand-new Randy Travis song was released using artificial intelligence that used two proprietary AI models to help create the song. This was done with full permission from Randy Travis himself, so there is no issue there.
Let us look at a couple of different approaches used, including Large Language Models and Image Generators.
Large Language Models
Large language models, or LLMs, are those that are able to generate language that a human would understand. To quote IBM:
"In a nutshell, LLMs are designed to understand and generate text like a human, in addition to other forms of content, based on the vast amount of data used to train them. They have the ability to infer from context, generate coherent and contextually relevant responses, translate to languages other than English, summarize text, answer questions (general conversation and FAQs), and even assist in creative writing or code generation tasks." - Source: IBM.
LLMs can be used for generating, rewriting, or even changing the tone of text. This is possible because most languages have fairly rigid rules, and it is not a complex task to calculate the probability of the next word in a sentence.
The way that an LLM is trained is by consuming vast amounts of text. It recognizes patterns from this data and can then generate text based upon what it has learned.
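To make the next-word idea concrete, here is a drastically simplified sketch in Swift: it counts which word follows which in a scrap of training text, then predicts the most probable next word. A real LLM does this with billions of learned parameters rather than a bigram table, but the underlying idea is the same.

```swift
import Foundation

let trainingText = "the cat sat on the mat the cat ate the food"
let words = trainingText.components(separatedBy: " ")

// Count how often each word follows each other word.
var nextWordCounts: [String: [String: Int]] = [:]
for i in 0..<(words.count - 1) {
    nextWordCounts[words[i], default: [:]][words[i + 1], default: 0] += 1
}

// Predict the most probable next word for a given word.
func predictNext(after word: String) -> String? {
    nextWordCounts[word]?.max { $0.value < $1.value }?.key
}

print(predictNext(after: "the") ?? "?")
// "cat", because it followed "the" most often in the training text
```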
Image Generation
One of the uses of modern artificial intelligence is the ability to create images. Similar to LLMs, there are image generation models that have been trained on a massive number of images. This data has been used to train the models which are used for the actual image generation. Depending on the model, you may be able to generate various types of images, ranging from cartoons to completely realistic ones.
Image generation models often use a technique called Generative Adversarial Networks, or GANs. A GAN works using two different algorithms, the generator and the discriminator, that work in tandem. The generator outputs a set of pixels as a candidate image and sends it over to the discriminator. The discriminator, which has knowledge of millions of pictures of what you are trying to generate, provides a result, which is basically a "yes" or "no". If it is a "no", the generator tries again and again.
This back and forth is what is called an "adversarial loop" and this loop continues until the generator is able to generate something that the discriminator will say matches the intended type of image.
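The shape of that loop can be sketched with a toy example, where the "image" is just four numbers and the discriminator is a stand-in rule rather than a trained network. Real GANs train both sides with gradients; this only shows the generate, judge, retry cycle:

```swift
import Foundation

// Stand-in generator: produce a random candidate "image" of four values.
func generate() -> [Double] {
    (0..<4).map { _ in Double.random(in: 0...1) }
}

// Stand-in discriminator: pretend "real" images average out near 0.8.
func discriminatorAccepts(_ candidate: [Double]) -> Bool {
    let average = candidate.reduce(0, +) / Double(candidate.count)
    return abs(average - 0.8) < 0.05
}

// The adversarial loop: generate, get a yes/no, try again on a no.
var attempts = 1
var image = generate()
while !discriminatorAccepts(image) {
    image = generate()
    attempts += 1
}
print("accepted after \(attempts) attempts: \(image)")
```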
The training of modern image generators is quite interesting. One approach starts with an image and purposely introduces noise into it, again and again, over a large number of iterations; the model then learns to reverse that noise. Strictly speaking, this iterative noising is the basis of diffusion models rather than GANs, but in both cases, vast libraries of images become the basis for the generator.
All of this is a good base for looking at what Apple has in store for its own artificial intelligence technologies, so let us look at that now.
Apple and Artificial Intelligence
You might think that Apple is late to the artificial intelligence realm, but in fact, Apple has been working with artificial intelligence for many years; it has just been called something else. Some of the areas where Apple has been using artificial intelligence have been with Photos, Siri, Messages, and even auto-correct.
Apple Intelligence
As mentioned above, Apple Intelligence is Apple's take on artificial intelligence. It differs from standard artificial intelligence in that Apple Intelligence is designed to work on YOUR information, not on general knowledge. The primary benefit of working on your data is that your data can remain private. This is done using on-device models.
On-Device Requests
A vast majority of Apple Intelligence requests will be performed on your device. There are a number of examples of this, including things like:
"Find me pictures of [someone] while in London."
"When is Mom's flight landing?"
Apple has been doing a lot of research into machine learning models that can run on-device. The models have needed to keep the same quality while being usable on devices with limited amounts of memory. Limited, of course, is relative; we are not talking 1GB of RAM, but more like 8GB.
The reason that Apple wants to do much of the processing on your device is twofold. The first is response time. By having the device handle requests, responses can be almost instantaneous, and requests work even when you do not have connectivity. Sending all of your requests to the cloud would introduce some delay, even with an incredibly fast connection.
The second reason is privacy. Privacy is a big part of Apple's core beliefs. When using your own device and processing the request on the device, that means that nobody else will get access to your data, not even Apple. Instead, only you will have access to your data, which is great for your own peace of mind.
Even though as much as possible will be done on your own devices, there may be instances when your device is not able to handle your request locally. Instead, it may need to be sent to the cloud. This can be needed for larger models that require additional memory or processing to be done. If this is needed, it is handled automatically by sending it to Apple's Private Cloud Compute platform. Let us look at that next.
Private Cloud Compute
Nobody wants their data to get out of their control, yet it does happen from time to time. Apple takes data privacy seriously and has done a lot to keep people's data private. This is in contrast to other artificial intelligence companies, which have no compunction about taking user data and using it to train their machine learning models.
Apple has been working on reducing the size and memory requirements of many machine learning models. They have accomplished quite a bit, but right now some machine learning models require more parameters, and therefore more memory, than devices are capable of having. In these instances, it may be necessary to use the cloud to handle requests.
Apple has 1.2 billion users, and while not all of the users will utilize Apple Intelligence immediately, Apple still needs to scale up Apple Intelligence to support all users who will be using it. In order to make this happen, Apple could just order as many servers as they want, plug them in, and make it all work. However, that has its own set of tradeoffs. Instead, Apple has opted to utilize their own hardware, create their own servers, and make things as seamless as possible for the end user, all while protecting user data.
Private Cloud Compute is what powers online requests for Apple Intelligence. Private Cloud Compute runs in Apple's own data centers. Private Cloud Compute is powered by a series of nodes. Each of these nodes uses Apple Silicon to process requests. These are not just standard Macs; they have been heavily customized.
Nodes
Each Private Cloud Compute node undergoes significant quality checks in order to maintain integrity. Before the node is sealed and its tamper switch activated, each component undergoes a high-resolution scan to make sure that it has not been modified. After the node has been shipped and arrives at an Apple data center, it undergoes another verification to make sure it still remains untouched. This process is handled by multiple teams and overseen by a third party who is not affiliated with Apple. Once verification has been completed, the node is deployed, and a certificate is issued for the keys embedded in the Secure Enclave. Once the certificate has been created, it can be used.
Request Routing
Protecting the node is just the first step in securing user data. In order to protect user data, Apple uses what is called "target diffusion". This is a process of making sure that a user's request cannot be sent to a specific node based on the user or its content.
Target diffusion begins with the metadata of the request. This metadata strips out user-specific data as well as the identity of the source device, and it is used by the load balancers to route the request to the appropriate model. To limit what is called a "replay attack", each request includes a single-use credential, which is used to authorize requests without tying them to a specific user.
All requests are routed through an Oblivious HTTP, or OHTTP, relay, managed by a third-party provider, which hides the device's source IP address well before it ever reaches the Private Cloud Compute node. This is similar to how Private Relay works, where the actual destination server never knows your true IP address. In order to steer a request based on source IP, both Apple's Load Balancer as well as the HTTP relay would need to be compromised; while possible, it is unlikely.
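The single-use credential idea can be illustrated with a toy sketch. This assumes nothing about Apple's actual implementation, which relies on cryptography rather than an in-memory set; it only shows why a one-time, user-free token defeats replay:

```swift
import Foundation

// A credential that is random, carries no user identity, and is valid once.
struct OneTimeCredential {
    let token = UUID().uuidString
}

final class CredentialValidator {
    private var usedTokens = Set<String>()

    // Authorize a request the first time its credential is presented;
    // a replayed credential is rejected.
    func authorize(_ credential: OneTimeCredential) -> Bool {
        guard !usedTokens.contains(credential.token) else { return false }
        usedTokens.insert(credential.token)
        return true
    }
}

let validator = CredentialValidator()
let credential = OneTimeCredential()
print(validator.authorize(credential))  // true: first use
print(validator.authorize(credential))  // false: replay rejected
```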
User Requests
When a user's device makes a request, it is not sent to the entire Private Cloud Compute service as a whole; instead, pieces of the request are routed to different nodes by the load balancer. The response that is sent back to the user's device will specify the individual nodes that should be ready to handle the inference request.
When the load balancer selects which nodes to use, an auditable trail is created. This is to protect against an attack where an attacker compromises a node and manages to obtain complete control of the load balancer.
Transparency
When it comes to privacy, one could say, with confidence, that Apple does what they say they are doing. However, in order to provide some transparency and verification, Apple is allowing security researchers the ability to inspect software images. This is beyond what any other cloud company is doing.
To ensure transparency, each production build of Apple's Private Cloud Compute software will be appended to an append-only transparency log. This allows verification that the software being run is exactly what it claims to be. Apple is taking some additional steps. From Apple's post on Private Cloud Compute:
Our commitment to verifiable transparency includes:
1. Publishing the measurements of all code running on PCC in an append-only and cryptographically tamper-proof transparency log.
2. Making the log and associated binary software images publicly available for inspection and validation by privacy and security experts.
3. Publishing and maintaining an official set of tools for researchers analyzing PCC node software.
4. Rewarding important research findings through the Apple Security Bounty program.
This means that should an issue be found, Apple will be notified before it can become a widespread problem, can take action to remedy it, and can release new software, all in an effort to keep user data private.
Privacy
When a request is sent to Apple's Private Cloud Compute, only your device and the server can communicate. Your data is sent to the server, processed, and returned to you. After the request is complete, the memory on the server is wiped so your data cannot be retrieved. This includes wiping the cryptographic keys on the data volume. Upon reboot, these keys are regenerated and never stored. The result of this is that no data can be retrieved because the cryptographic keys are sufficiently random that they could never be regenerated.
Apple has gone to extensive lengths to make sure that nobody's data can be compromised. This includes removing remote access features for administration, high-resolution scanning of the Private Cloud Compute node before it is sealed, and making sure that requests cannot be routed to specific nodes, which may allow someone to compromise data. Beyond this, when a Private Cloud Compute node is rebooted, the cryptographic keys that run the server are completely regenerated, so any previous data is no longer readable.
For even more detail, be sure to check out Apple's blog post called "Private Cloud Compute" available at https://security.apple.com/blog/private-cloud-compute.
General World Knowledge
Apple Intelligence is designed to work on your private data, but there may be times when you need to go beyond your own data and use general world knowledge. This could be something like asking for a recipe for some ingredients you have, or it could be a historical fact, or even to confirm some existing data.
Apple Intelligence is not capable of handling these types of requests. Instead, you will be prompted to send these types of requests off to third parties, like OpenAI's ChatGPT. When you are prompted to use one of these, you will need to confirm that you want to send your request and that your private information (for that specific request) will be sent to the third party.
At launch, only OpenAI's ChatGPT will be available. However, there will be more third-party options coming in the future. This type of arrangement is a good escape valve should you need to get some information that is not within your own private data. Now that we have covered what Private Cloud Compute is, let us look at what it will take to run Apple Intelligence.
Minimum Requirements
Apple Intelligence does require a minimum set of requirements in order to be used. Apple Intelligence will work on the following devices:
iPhone 16 Pro/Pro Max (A18 Pro)
iPhone 16/16 Plus (A18)
iPhone 15 Pro/Pro Max (A17 Pro)
iPad mini (A17 Pro)
iPad Pro (M1 and later)
iPad Air (M1 and later)
MacBook Air (M1 and later)
MacBook Pro (M1 and later)
Mac mini (M1 and later)
Mac Studio (M1 Max and later)
Mac Pro (M2 Ultra and later)
There are a couple of reasons why these are the devices that can be used. The first is that they require a neural engine. For the Mac, this was not present until 2020 when the first Macs with Apple Silicon were released. For the iPhone, the first Neural Engine appeared with the A11 Bionic chip on the iPhone 8, 8 Plus, and iPhone X. All iPhones since have included a Neural Engine, but that is just one requirement.
The second requirement is the amount of memory. The minimum amount of memory to run the on-device models is 8 gigabytes. The iPhone 15 Pro and iPhone 15 Pro Max are the first iPhones to come with 8GB of memory. All M1 Macs have had at least 8GB of memory.
Now, this is the minimum amount of memory. Not all features will work with only 8GB of memory. One example is a new feature for developers within Apple's Xcode app. With Xcode 16, developers will have the option of using Apple's Predictive Code Completion Model. When you install Xcode 16, there is an option that allows you to download the Predictive Code completion model, but only if your Mac has 16GB of memory or more. To illustrate this, if you have a Mac mini with 8GB of memory, you will get the following installation screen.
Similarly, if you have a MacBook Pro with 32GB of unified memory, you will get this installation screen.
As you can see, the Predictive Code Completion checkbox is not even an option on the Mac mini with 8GB of memory. And Predictive Code Completion covers a fairly limited domain; Swift, while a large programming language, is limited in scope, and even that model does not work on 8GB.
It would not be presumptuous to think that this may be the case for various Apple Intelligence models going forward. Now that we have covered the minimum requirements, let us look at enabling Apple Intelligence.
Enabling Apple Intelligence
As outlined above, Apple Intelligence is available for compatible devices running iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1. However, Apple Intelligence is not automatically enabled; you will need to enable it. Apple Intelligence is activated on a per-Apple Account basis, and this only needs to be done once. Once activated, it will need to be enabled on each device. To activate Apple Intelligence, perform these steps:
Open Settings on iOS or iPadOS, or System Settings on macOS Sequoia.
Scroll down to "Apple Intelligence".
Tap, or click, on "Apple Intelligence" to bring up the settings.
Tap, or click, on "Join Apple Intelligence Waitlist". A popup will appear
Tap on the "Join Apple Intelligence Waitlist" button to confirm you want to join the waitlist.
Once you do this, you will join the Apple Intelligence waitlist. It may take some time before you are able to access the features. Once your Apple Account has had Apple Intelligence activated on it, you will then get a notification on your device indicating that Apple Intelligence is ready.
At this point, you can click on the "Turn On Apple Intelligence" button, and a popup will appear that will allow you to enable the features. Once you have enabled Apple Intelligence on your device, you will be able to use the features.
Closing Thoughts on Apple Intelligence
Many Artificial Intelligence tools require sending your private data to a server in the cloud to be able to perform a particular task. Doing this has the potential to not only leak your private data, but also allow it to be used to train additional artificial intelligence models. This is antithetical to Apple's core values, so Apple has taken a different approach with its own artificial intelligence, which it calls Apple Intelligence.
Apple Intelligence is designed to work on your private data and maintain that privacy. The way that this is accomplished is through a service called Private Cloud Compute. Private Cloud Compute is a set of servers in Apple's own datacenter that are built on Apple Silicon, utilizing features like the Secure Enclave to maintain the integrity of the server. Beyond this, each time that a request has been completed, the previous keys are wiped, and the server is completely reset and reinitialized with no data being retained between reboots.
Apple Intelligence is designed to help you accomplish tasks that you need, like summarizing text, generating new emojis, creating images, and more.
Apple Intelligence will be a beta feature starting in late 2024, with some overall features not coming until 2025, and it will be English only at first. Furthermore, these features will not be available in the European Union, at least not at first.
Apple Intelligence will have some pretty stiff requirements, so it will not work on all devices. You will need an Apple Silicon Mac with an M1 or newer; an iPad with an M1 or newer, or with an A17 Pro; or an iPhone with an A17 Pro, A18, or A18 Pro, meaning the iPhone 15 Pro/Pro Max, iPhone 16/16 Plus, or iPhone 16 Pro/Pro Max.
This is merely an introduction to Apple Intelligence. There will be more articles in this series, so be sure to check those out.
Today, Apple announced the 7th generation iPad mini. This is the first update of the iPad mini since 2021, when the 6th generation iPad mini was released. The 7th generation iPad mini is a modest update, but it includes a number of improvements. The 7th generation iPad mini comes in four colors: Space Gray, Starlight, Purple, and Blue. The Blue replaces the previous Pink option, but the other three are the same.
The first change is the processor, which is an A17 Pro. This is the same chip as the iPhone 15 Pro and Pro Max, which means that it can run Apple Intelligence.
The second big change is the Apple Pencil support. The 6th generation supported the 2nd generation Apple Pencil, but the 7th generation iPad mini supports the Apple Pencil Pro. This means that it also supports Apple Pencil Hover, much like the iPad Pro and iPad Air. The iPad mini also supports the Apple Pencil with USB-C.
The 6th generation iPad mini included USB-C, as does the 7th generation, but the maximum speed has changed. Now, the 7th generation iPad mini is capable of transferring up to 10 Gigabits per second, which is double that of the 6th generation iPad mini; this is a great improvement.
The next big change is the storage sizes available. The 6th generation iPad mini was available in 64GB and 256GB models. The 7th generation iPad mini now starts at 128GB, but also comes in 256GB and 512GB models. The Wi-Fi version of these costs $499, $599, and $799 respectively. The Wi-Fi + Cellular models are $150 more, so $649 for the 128GB, $749 for the 256GB, and $949 for the 512GB.
The final set of changes includes Wi-Fi 6E and Bluetooth 5.3, up from Wi-Fi 6 and Bluetooth 5.0 on the previous iPad mini. Additionally, the cellular models no longer have a physical SIM slot; they only support eSIM.
You can pre-order the 7th generation iPad mini today and it will start arriving on October 23rd.
There is a quote that is often attributed to Microsoft co-founder Bill Gates that goes, "Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years." That is definitely true within the technology world. You might not think about it often, but the World Wide Web is now 30 years old, and in those 30 years a tremendous change has occurred. We went from occasionally using a telephone line to connect to the internet to having mostly ubiquitous internet. Also during that time, speeds have increased by orders of magnitude.
If you were not around during that time, it might be hard to fathom how long it would take to download a 1-megabyte file. If you were on a 14.4 kbps modem, it would take 9 minutes and 56 seconds to download a 1-megabyte file, and that was with ideal speeds. Eventually, this would come down to 2 minutes and 21 seconds. Now, we have operating system updates that can easily be gigabytes in size, and these take mere minutes to download. At the fastest dialup speed of 56.6 kbps, it would take 39 hours and 15 minutes to download a single gigabyte, and given how often dial-up connections dropped, you would likely never finish the download.
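For those who want to check the arithmetic, here is a quick sketch of the math. These are best-case figures that ignore the start/stop-bit and protocol overhead of real modem connections, which is why the in-text times run a bit longer; the gigabyte figure here is a decimal gigabyte.

```swift
import Foundation

// Best-case download time: bytes * 8 bits, divided by the line rate.
func downloadSeconds(bytes: Double, bitsPerSecond: Double) -> Double {
    bytes * 8 / bitsPerSecond
}

let oneMegabyte = 1_048_576.0
let oneGigabyte = 1_000_000_000.0

print(downloadSeconds(bytes: oneMegabyte, bitsPerSecond: 14_400))        // ~583 seconds
print(downloadSeconds(bytes: oneGigabyte, bitsPerSecond: 56_600) / 3600) // ~39.3 hours
```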
One thing that is an absolute certainty about technology is that significant shifts do not happen all that often; instead, they tend to happen over time with incremental changes. Yet, when you look back, you can see that all of those little changes did end up becoming a larger and more significant change. One of those significant shifts happened in 2007 with the release of the original iPhone. The iPhone was not the first "smartphone", but it was definitely a game-changer. Some of the earliest smartphones, as we would classify them today, were the Ericsson R380, Kyocera 6035, Nokia 9210 Communicator, and the Handspring Treo 180. It is possible that you might not have heard of these devices, except for maybe the Treo, but you would know it under the Palm brand.
There are many different technologies that ultimately came together to make the iPhone possible: reliable cellular connections, fast enough cellular data, and touch screen technology. Yes, this is an oversimplification, but the iPhone would not have been the iPhone if these did not exist.
I am fully aware that I am not like most people because I do upgrade my iPhone every year. If you compare the same model lines on a year-over-year basis, you might think something along the lines of "this is just an incremental update", and in some cases, this is true. However, the average length of time that people are keeping their cell phones is around 3 years, and a lot can change in just three years. Therefore, when most people upgrade, they are getting significant upgrades. As an example, let us say you are going from an iPhone 12 Pro Max to an iPhone 15 Pro Max, here is what you would get:
Titanium, meaning lighter phone
Dynamic Island
Action Button
ProMotion display
Always On display
Record Spatial Video
Photographic Styles
48MP main camera, compared to 12MP Main camera
5x optical zoom
Emergency SOS via Satellite
Crash Detection
Messages via Satellite
USB-C with USB 3 speeds
Higher peak brightness
3 generation newer processor
Apple Intelligence
That is a tremendous list of new features and improvements that occurred in just three years. Of course, the latest Pro Max, the iPhone 16 Pro Max, has its own set of enhancements, which we will cover presently. But before we do that, let us look at some of the features that have come with the previous iPhones.
Previous iPhones
When the original iPhone was released in 2007, it had 2G internet, 4GB or 8GB of storage, and only had a 3.5-inch screen. There was a 2-Megapixel camera, without a flash; it even used a 30-pin Apple-proprietary connector called the Dock connector. When introduced, it was only available on a single carrier, Cingular/AT&T, and was only available in the United States. Things have definitely changed since then. Now, iPhones have 5G internet, up to 1TB of storage, from 4.7-inch screens up to 6.9-inch screens, 48-Megapixel cameras, with multiple flashes (rear flash and the true-tone flash), and use Lightning or USB-C; they come on almost any wireless carrier, and are available across the world.
Almost every release of the iPhone has brought something new to the iPhone. The big changes for each year are:
2014: iPhone 6 / 6 Plus - Two screen sizes, landscape icons on 6 Plus
2015: iPhone 6s / 6s Plus - 3D Touch, improved camera
2016: iPhone 7 / 7 Plus - Water/dust resistance, 2x camera on 7 Plus
2017: iPhone 8 / 8 Plus / X - iPhone X, Face ID, Qi charging
2018: iPhone Xs / Xs Max - Added Max size, dual SIM support
2019: iPhone 11 / Pro / Pro Max - Triple camera on Pro / Pro Max
2020: iPhone 12 - Two model lines, each in two sizes, 5G, MagSafe
2021: iPhone 13 / Pro - ProMotion on the Pro models
2022: iPhone 14 - Satellite/Crash Detection; Pro / Pro Max - Dynamic Island
2023: iPhone 15 - USB-C; Titanium on the Pro line
Those are just some of the big features for each year. Before we dive into the latest iPhone, let us take a quick look at my own purchase history of iPhones.
My Cell Phone History
I had a realization not that long ago, and it really hit me: it is getting close to a quarter of a century since I got my first cell phone. Back then, I was not nearly as prolific in upgrading as I am now. Prior to purchasing the original iPhone, I only owned three other cell phones: a Nokia 3310, a Samsung T637, and a Motorola Razr V3. If you were around in the 2000s, you would easily recognize the first and last of these. The Nokia 3310 is a classic, so much so that a modernized version of the 3310 was released in 2017. It was a candy bar-style phone, replete with T9 dialing (kids, ask your parents, or maybe explore the changes in iOS 18, of which T9 dialing is one). The Samsung T637 was also a candy bar-style phone, but it had a color screen. The Razr V3 was a clamshell phone. I used the Nokia for about four years, the Samsung for approximately two years, and I bought the Razr V3 in 2006. Needless to say, the Razr V3 was the last non-Apple phone I ended up purchasing.
When the iPhone was released on June 29th, 2007, I did not buy one the first day, but I did buy one the next day, on June 30th, 2007. I actually went back and forth and could not decide if I should get one. I can still recall that Saturday morning when I drove to my local Apple Store, waited for about an hour, and ended up being the 4th person in line to buy an iPhone that day. Since then, I have upgraded to a new phone each year. The first few years, it was almost a no-brainer, for two reasons. The first is that it was only a couple hundred dollars. Yes, that is a lot of money, but much cheaper than buying a phone today. The second reason was that the improvements year over year were significant improvements. In 2011, AT&T decided to no longer provide full discounts for upgrades, but they did offer partial discounts. This meant that the price of the iPhone was more expensive. It was still several hundred dollars for each device, but I still upgraded because there was enough reason to do so.
Starting in 2014, I decided that if I was going to buy a new phone each year, I would pay full price. The reason I opted for this is that I did not want to be tied to a particular carrier if I did not want to be. Furthermore, I did not want to be tied to a contract, so buying a phone outright was the only solution. It has now been a decade of buying a new phone outright, and once again I have purchased a new iPhone. This time it is the 512GB Black Titanium iPhone 16 Pro Max. Now, let us dive into my pre-order experience.
Pre-Order
Pre-ordering any iPhone for release day delivery is a mad rush, but Apple has made improvements over the years. One of the biggest is that you can pre-select which model you want to purchase and save this for pre-order time. This year I opted to get the 512GB Black Titanium iPhone 16 Pro Max. I chose this model because I do not like the gold/bronze of the Desert Titanium. I am also not a fan of the white titanium. The natural titanium would have been okay as well, but ultimately I chose the Black Titanium.
For the last three years I have gotten a 512GB model of the iPhone. This was needed because my photo library is rather large and I like to keep the originals on my iPhone. This is not strictly necessary, because I do keep the originals on my Mac Studio, as well as on iCloud Photo Library. My MacBook Pro and iPad do not have the originals.
For the past few years, Apple has had pre-orders open at 5 a.m. Pacific Time, which is a more agreeable time for me. The days of having to wake up at 12 a.m. Pacific Time were not the most fun, but I did it to make sure I could get a phone on release day. The big unknown is when the Apple Store will actually open up. As with previous years, I had four devices up and running: my iPhone, my iPad, my MacBook Pro, and my Mac Studio. On the iPhone and iPad I used the Apple Store app, while I had the Apple Store website open on my Macs.
It usually takes a couple of minutes after the actual pre-order time before the store opens, but by 5:02 a.m. it had still not shown up on any of my devices. I switched my iPhone to cellular, and lo and behold, the store popped right up. I had my phone pre-ordered, and the confirmation email in my inbox, by 5:03. I decided to track when the other devices did show up. On my iPad, the Apple Store app came up around 5:10 a.m., and my Macs did not have the store open until about 5:20 a.m.
When I pre-order, I tend to pick the phone up at my local Apple Store so I can get an early time slot. The reason I opt for this is that it is unknown when a delivered iPhone would actually arrive; sometimes it is early in the afternoon, other times it may not be until 7:00 p.m. That unpredictability means it is easier for me to go and pick my iPhone up.
I did have one hiccup while ordering. The time slot that I chose was filled between the time I selected the time and the time that I checked out, so I had to choose a slightly later time slot. Even with this, the time slot was still early enough in the day that it was not a big deal. A friend of mine experienced the same thing when they ordered their phone for pickup at the same store I go to. I do not think there were that many people ordering phones for pickup at my local Apple Store that all slots would be filled, but you never know.
Overall, the pre-order process was not too bad. It surely beats waiting in line for more than 12 hours to buy a couple of iPhones. I still distinctly remember that day. It was a long one. Part of the issue was that Apple's point of sale (POS) system was under a lot of strain, and transactions were taking a long time, hence the delays. But that all seems to be in the past now. After the pre-order and picking up the device, comes the actual setup. So let us get to that next.
Setup
Having gone through the process of upgrading my phone every year, I am well-versed in how to move from one phone to another. For many years, I did an encrypted backup to my Mac and restored the new phone from there. This process had a couple of advantages. The first was that I could still use my old iPhone while the new iPhone was being restored. The second is that it was often faster than doing a direct device-to-device transfer.
This year, I opted for a different approach: the new direct USB-C to USB-C connection. I used the 1-meter Thunderbolt 4 cable that I normally use to perform wired backups of my iPhone to my Mac. I opted for Thunderbolt because this cable can do 40 gigabits per second, well above the 10 gigabits per second that the iPhone 15 Pro Max and iPhone 16 Pro Max are capable of. At least, that was the intention with using the cable.
When I unboxed the iPhone 16 Pro Max, it did the initial synchronization dance where you need to take your old phone and scan the glowing image that appears on your new phone. This is used to pair the two devices together. I performed this step, and then it went through the Appearance and Face ID options. Next came the eSIM transfer. For some reason, it could not activate the eSIM. I tried to initiate the transfer again, but it never showed the eSIM transfer screen. I did not want to erase the new phone and go through the entire setup again, so I just continued the setup.
The next step would have been to start the actual transfer, but another step was needed first. I am a developer, and I usually run iOS betas on my iPhone. Normally, when the release candidate of the operating system is released, I install it and then disable any beta updates. Most years this is not a problem, but this year Apple did something different: Apple released the first beta of iOS 18.1 in August. This included Apple Intelligence, so naturally, I installed the betas. Since I had the latest iOS 18.1 beta on my iPhone 15 Pro Max, the iPhone 16 Pro Max needed to be updated to the same release. The iOS update took about 15 minutes from start to finish.
After the iPhone 16 Pro Max rebooted, it reconnected to the transfer session. This step took a surprisingly long time, about 10 minutes. Once the transfer started, the estimation screen indicated that it would take 2 hours to complete. At first, I thought that the cable was not being used, but it must have been, because the transfer went quite quickly; in fact, it took just about 70 minutes to complete.
Even though it took about an hour and 10 minutes, there were instances where it just sat at a certain amount of "remaining time" for quite a while. The transfer sat at "18 minutes" remaining for about 10 minutes. It then proceeded to go down to 5 minutes, stopped at "2 minutes" remaining for another 15 minutes, and then crept up to 3 minutes for another few minutes before it finally finished. Suffice it to say, progress bars and estimated times are hard to get 100% accurate. Ultimately, though, it must have used the Thunderbolt cable, because there is no way it would have transferred more than 200GB in an hour otherwise. This was much faster than the last couple of years; in both 2022 and 2023, the transfer took about two and a half hours.
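As a rough sanity check on the cable question, the sustained rate works out as follows. The 200GB figure is my approximation of the amount of data moved.

```swift
import Foundation

// Sustained transfer rate: ~200GB in about 70 minutes.
let gigabytes = 200.0
let seconds = 70.0 * 60.0

let sustainedGbps = gigabytes * 8 / seconds   // ~0.38 Gbps
print(String(format: "%.2f Gbps sustained", sustainedGbps))
```

That is well under the cable's 10 Gbps ceiling, but it is a steady rate that a device-to-device wireless transfer would be unlikely to sustain for a full hour.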
Of course, after the data transfer is done, that is not the final step, far from it. After the transfer, the iPhone reboots and then you have to perform the following:
Log in to your Apple Account
Set up Apple Pay (can be done later)
Wait for apps to download
Wait for any songs to be re-downloaded
There were two things that I noticed during the transfer. The first is that the iPhone 16 Pro Max was being powered by the iPhone 15 Pro Max via the Thunderbolt cable I was using, which is expected; however, it was not being powered before I updated to the iOS 18.1 beta. The second thing I noticed is that both devices got extremely warm. Sadly, it is not uncommon for my iPhone 15 Pro Max to get a bit warm, but the iPhone 16 Pro Max also got warm, so I ended up putting it on a little fan I have to help keep it cool.
After everything was finished, I did notice that there were a number of things that did not transfer properly. These included the aforementioned eSIM transfer, iMessage, and the Apple Watch.
eSIM Activation
As I mentioned, for some reason, the eSIM in my iPhone 16 Pro Max could not be transferred. I did not anticipate this being much of an issue because you can always use Settings to perform the transfer. This is what I ended up doing. This entire process of going through Settings to transfer the eSIM from my iPhone 15 Pro Max to my iPhone 16 Pro Max took about 5 minutes, but this seemed to have other ramifications.
iMessage
After the eSIM was transferred, I wanted to make sure that I could send messages. However, nothing was showing, so I went to Settings to verify that I was signed into iMessage, and it indicated that I was not. I attempted to sign in, but I could not; it just kept spinning and spinning. I decided to leave this until after all of my apps and songs finished re-downloading.
Home app
Another item that did not have anything in it was the Home app. Nothing was showing, and when I say nothing, I mean absolutely nothing: none of my accessories, none of the controls, nothing. Like iMessage, I left this until later.
Apple Watch
One thing that did not show at all during the transfer was my Apple Watch. Typically, this does transfer without any issue. Much like the eSIM transfer, I needed to complete this step manually. When I opened up the Apple Watch app, I saw this screen:
I attempted to finish the pairing. The first time it just sat there and did nothing. I then forced the Apple Watch app to quit and attempted to finish the pairing again. This time it was just spinning and spinning, much like when I tried to sign into iMessage. I am not sure what it was doing. I initially thought it might have been performing a backup before pairing the watch, but it just sat there, much longer than it should have taken.
Ultimately, I could have skipped this entirely because I also purchased a new Apple Watch Series 10, and it probably would have been more prudent to just set that up instead. But I preferred to have everything else working before setting up the new Apple Watch, just in case there was an issue and I needed to erase and set up the new iPhone again.
The Fix
Knowing the first rule of any troubleshooting, I decided to reboot my iPhone 16 Pro Max. After it finished rebooting, guess what? iMessage showed all of my addresses, my Home accessories were showing again, and I was able to finish pairing my Series 9 Apple Watch. Thinking about this now, I am guessing there was an issue with my Apple Account not fully signing in. It signed in enough to redownload my apps and all of my songs, but obviously not enough to synchronize the remaining iCloud items. Regardless, I finished setting everything up. Now that we have covered the setup, let us look at the actual device, starting with the color.
Color
Apple's "Pro" and "Pro Max" line of phones generally only have a limited number of colors. In fact, this is only the second year for titanium; Apple has had three standard titanium colors: natural, white, and black. There is a fourth option: last year it was Blue Titanium, and this year it is Desert Titanium.
While I was waiting in line to pick up my iPhone, I had a conversation with the person behind me, who was also buying a new iPhone. I mentioned that the Desert was a bit too "gold" for me, and he agreed. He opted for the Natural because it was better than the White Titanium, and he wanted something different than the Black. I am not a fan of anything gold; I cannot articulate the exact reason, but it is just not for me. I thought about getting the Natural Titanium, but ultimately went with the Black Titanium.
Apple's "Space Gray", "Black," and "Jet Black" are all various shades of gray, with some obviously being darker than others. The Black Titanium iPhone 16 Pro Max is a dark color, much darker than the Blue Titanium on my iPhone 15 Pro Max. It is, of course, not the darkest iPhone color that Apple has had, not by a long shot. The phone that has the honor of being the darkest iPhone still belongs to the Jet Black iPhone 7. Which was a glossy black.
Next, let us look at two other changes, the Screen Size and Dimensions.
Screen Size and Dimensions
Since the introduction of the original iPhone, the size of the screen has steadily gotten larger. The first increase was in 2012, five years after the initial introduction, with the iPhone 5: the screen went from 3.5 inches to 4 inches. The next change was just two years later, in 2014, when two new models were released, the iPhone 6 and iPhone 6 Plus, with screen sizes of 4.7 and 5.5 inches, respectively. In 2017, when the iPhone X was released, it had a 5.85-inch screen. 2018 brought three new models: the iPhone XS with a 5.85-inch screen, the XS Max with a 6.46-inch screen, and the iPhone XR with a 6.1-inch screen.
In 2020, the iPhone 12 line introduced 4 models. The 12 mini had a 5.4-inch screen, the iPhone 12 and 12 Pro both had a 6.1-inch screen, and the iPhone 12 Pro Max had a 6.7-inch screen. The iPhone 14 line in 2022 replaced the 5.4-inch mini with the 6.7-inch iPhone 14 Plus. So, from 2007 to 2023, there were a total of 9 different screen sizes:
3.5-inch
4-inch
4.7-inch
5.4-inch
5.5-inch
5.85-inch
6.1-inch
6.46-inch
6.7-inch
Now, with the iPhone 16 Pro and 16 Pro Max, we can add two more sizes, the 6.3-inch iPhone 16 Pro and the 6.9-inch iPhone 16 Pro Max. This means that the screen size has increased by 3.3% and 3% respectively. At the same time that the screen has increased, the dimensions of the iPhone 16 Pro Max have increased, but ever so slightly.
The iPhone 15 Pro Max was 159.9mm tall, 76.7mm wide, and 8.25mm deep. Comparatively, the iPhone 16 Pro Max is 163.0mm tall, 77.6mm wide, and 8.25mm deep. That makes it 1.9% taller and 1.2% wider. These changes are minuscule, so how does Apple get a 3% bigger screen with such a small change in physical size? Simple: the bezels around the screen are even smaller than on the iPhone 15 Pro Max. Smaller bezels mean you see more screen without necessarily increasing the physical size.
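The percentages above come straight from the published dimensions; here is a quick sketch of the math:

```swift
// Percentage change helper, applied to the published specs.
func percentIncrease(from old: Double, to new: Double) -> Double {
    (new - old) / old * 100
}

print(percentIncrease(from: 6.7, to: 6.9))     // ~3.0% larger screen (diagonal)
print(percentIncrease(from: 159.9, to: 163.0)) // ~1.9% taller
print(percentIncrease(from: 76.7, to: 77.6))   // ~1.2% wider
```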
Even though the screen size has changed, the technology powering it is the same. The 16 Pro Max has a Super Retina XDR display with 120Hz ProMotion and an Always-On display, the same display technology found in the iPhone 14 Pro Max and iPhone 15 Pro Max. In other words, the screen itself is not much different, at least not at first glance, and many users might not even notice the difference between a 60Hz and a 120Hz display.
When you first look at the iPhone 16 Pro Max, the 6.9-inch screen does not seem significantly different from the 6.7-inch screen on the iPhone 15 Pro Max. However, after I had gotten accustomed to the iPhone 16 Pro Max, I ended up using the iPhone 15 Pro Max again for a bit, and the physical size difference was noticeable: the iPhone 15 Pro Max seemed a bit smaller in my hands. It was nothing like going from the 4-inch iPhone 5s to the iPhone 6 Plus, but it was still noticeable.
There is one significant change that might be quite useful in low-light situations: the minimum brightness of the screen has gone down to 1 nit. This means that less light comes out of your phone's screen, which can not only save battery power but also make it less likely that you will hurt your eyes if you wake the screen in a dark room. Now that we have seen the changes with the screen, let us move on to another slight difference: the battery and weight.
Battery and Weight
The fact that the size of the device increased, albeit marginally, means that there is more room for a battery. In fact, according to Apple, the battery life of the 16 Pro Max has increased over the 15 Pro Max. The iPhone 15 Pro Max had 29 hours of video playback, 25 hours of streamed video playback, and 95 hours of audio playback. The iPhone 16 Pro Max increases those numbers by roughly 13%, 16%, and 10% respectively, resulting in 33 hours of video playback, 29 hours of streamed video playback, and 105 hours of audio playback.
The iPhone 15 Pro Max had a battery capacity of 4,422 milliamp-hours, or mAh. The extra space within the iPhone 16 Pro Max is partially filled with a larger battery, which helps lead to longer battery life. According to Tom's Guide, the iPhone 16 Pro Max has a 4,685 mAh battery. This is a modest increase of 5.95%, but any improvement in the battery is a welcome change.
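The same percentage math, applied to Apple's playback numbers and the Tom's Guide capacity figure:

```swift
// Capacity and playback gains, iPhone 15 Pro Max -> iPhone 16 Pro Max.
let capacityGain = (4_685.0 - 4_422.0) / 4_422.0 * 100   // ~5.9% more mAh
let videoGain = (33.0 - 29.0) / 29.0 * 100               // ~13.8% more video playback
print(capacityGain, videoGain)
```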
The weight of the iPhone 16 Pro Max has actually increased a bit. Last year with the iPhone 15 Pro Max, the weight went down due to the titanium frame. But due to the larger screen and increased dimensions and battery, the overall weight has increased.
11 Pro Max: 7.97 ounces (226 grams) - Stainless Steel
12 Pro Max: 8.03 ounces (228 grams) - Stainless Steel
13 Pro Max: 8.46 ounces (240 grams) - Stainless Steel
14 Pro Max: 8.46 ounces (240 grams) - Stainless Steel
15 Pro Max: 7.81 ounces (221 grams) - Titanium
16 Pro Max: 7.99 ounces (227 grams) - Titanium
This is a good representation of how the different metals make a true difference. Even with an increase of 0.2 inches in the screen, the overall weight is only up 6 grams, or 2.7%. The difference is not really noticeable in day-to-day usage, but if you are explicitly trying to notice the difference, you might be able to.
It is unlikely that you will ever actually experience the optimal battery numbers provided above, but they are still a useful comparison, and you can expect improved battery life. Some of this battery life is due to the actual battery, but some of it must be attributed to the processor. Let us cover that next.
A18 Pro
Once there were at least two sizes of iPhones available for purchase, one might have suspected some bifurcation between the models in terms of what is actually powering the device. However, this did not happen until 2022. When the iPhone 14 and 14 Pro lines were released, only the iPhone 14 Pro/Pro Max got a new processor, the A16 Bionic; the regular iPhone 14 and iPhone 14 Plus received the A15 Bionic, the same chip as the year before. With the iPhone 15 and 15 Pro, this happened again. The iPhone 15 and 15 Plus received the A16 Bionic that was previously in the 14 Pro/Pro Max, while the 15 Pro and Pro Max received an entirely new chip, the A17 Pro. It should not be a surprise that Apple did not put the A17 Pro in both the iPhone 15 and iPhone 15 Pro lines: Apple sells nearly 100 million of the latest iPhone models every year, and manufacturing that many A17 Pros was simply not possible.
The A17 Pro was a one-off System on a Chip, or SoC. It was the first 3-nanometer chip in an iPhone; however, it used a process that TSMC calls "N3". This process proved to be more expensive and had lower yields, meaning fewer viable chips per wafer. Apple did not abandon 3-nanometer chips; instead, the A18 Pro uses the second-generation 3nm process, called N3E.
The N3E process has better yields and, from a technical perspective, is a better process. According to Apple, the Neural Engine of the A18 Pro is capable of 35 trillion operations per second (TOPS). This is the same as the A17 Pro, but the A18 Pro is faster due to increased memory bandwidth. Specifically, there is 17% more memory bandwidth, bringing the iPhone 16 Pro Max to around 479.2 gigabits per second, or 59.9 gigabytes per second. Memory bandwidth is important because the processors within the A18 Pro (the CPU, GPU, and Neural Engine) all have direct access to memory, and one processor can only begin its work once the previous one has finished with the data. The more memory bandwidth, the faster the whole pipeline can complete.
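To put numbers on that: Apple quotes the 17% figure, while the 51.2 GB/s base figure for the A17 Pro is my assumption based on published LPDDR5 specs; it lines up with the 479.2 Gbps figure above.

```swift
// Memory bandwidth arithmetic (the A17 Pro base figure is an assumption).
let a17ProGBps = 51.2
let a18ProGBps = a17ProGBps * 1.17   // ~59.9 GB/s
let a18ProGbps = a18ProGBps * 8      // ~479.2 Gbps
print(a18ProGBps, a18ProGbps)
```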
Just like the A17 Pro compared to the A16 Bionic, the A18 Pro has some features that are not present in the A18. Most notably, the A18 Pro continues to support USB 3 speeds of up to 10 gigabits per second, which was really useful when transferring data from my iPhone 15 Pro Max to my iPhone 16 Pro Max.
One of the areas where the improved memory bandwidth might be useful is when you are using the Camera, so let us look at that now.
Camera
I will be the first to admit that I am not a professional photographer, but I still enjoy taking photos. If anybody were to inquire as to why I buy a new iPhone every year, I would say, "I am not buying a new phone, I am buying a new camera that just happens to have a phone." On the one hand, I am being a bit facetious, because I do use the iPhone for the apps more than the camera. At the same time, though, it is true: I do prefer to have the best camera I can get. It is Apple's intent to have the Pro and Pro Max be the same except for the screen size, and whenever this is technologically feasible, they do this. But when it is not possible, the better camera features end up in the highest-end phone.
In 2014, with the iPhone 6, the best camera possible was on the iPhone 6 Plus. The iPhone 6 Plus had optical image stabilization within the camera. Similarly, in 2016 on the iPhone 7 Plus, you got a second camera, as compared to the single camera on the iPhone 7. With the iPhone 12 Pro Max, you got 2.5x optical zoom, as opposed to 2x on the 12 Pro. Lastly, the 15 Pro Max has a 5x optical zoom, while the 15 Pro only has a 3x optical zoom. This year, the iPhone 16 Pro and iPhone 16 Pro Max are back to parity in terms of the camera.
One of the new features, which is present on all of the iPhone 16 devices, is the Camera Control button.
Camera Control
It is not often that Apple adds physical buttons to the iPhone. Last year, with the iPhone 15 Pro and Pro Max, Apple replaced the ringer/silent switch with the Action Button. The Action Button is a customizable button that allows you to assign a variety of actions to perform when the button is pressed. These actions include toggling ringer/silent, turning on the flashlight, opening the camera, toggling an accessibility option, or even running a shortcut, amongst other options.
The Camera Control is the first all-new control on the iPhone that did not replace an existing one. Looking back at the original iPhone, all of the same buttons were present: an on/off button, volume up, volume down, and the ringer/silent switch. As mentioned, the ringer/silent switch was replaced by the Action Button. The Camera Control is a dedicated button and, as the name suggests, is specifically designed for the camera.
The Camera Control is a dual-action button. It is both a capacitive button as well as a pressure-sensitive one. The pressure sensitivity means that you can press lightly or firmly and have different actions occur. The capacitive portion means that you can swipe to move between controls.
Selecting a Camera app
Apple's own Camera app is quite capable and is the most used camera app on the iPhone. However, there are a variety of other camera applications available on iOS. The Camera Control allows you to select which camera app opens when you use the button. This can be done by going to Settings -> Camera -> Camera Control.
Once you open the settings page, you can choose which camera app you want to use from the provided list, which is populated from the apps you already have installed. Some examples might include Camera, Halide, Instagram, Magnifier, and Code Scanner, just to name a few. You can also choose whether to open the camera app using a single click (the default) or a double-click.
Next, let us cover how to use Camera Control.
Using Camera Control
Camera Control provides you with an easy way to open your selected camera app and then control various aspects of the camera's functions. As mentioned above, Camera Control is a dual-featured button: the pressure-sensitive portion allows you to click, while the capacitive portion allows you to drag your finger along the button to adjust the current value. The default app is Apple's Camera app.
To open the Camera app, simply press the Camera Control button once, or double-click if you modified the setting. The default control to adjust is the zoom. If you swipe your finger across the Camera Control, the current value will be changed. If you swipe left, it will zoom out, and swiping right will zoom in. This is very much like zooming in on the Camera app itself.
You can lightly double-click on the Camera Control button, and you will be able to select the feature to switch to. For Apple's Camera app, you can choose from the following:
Exposure
Depth
Zoom
Cameras
Photographic Styles
Tone
To select an item, swipe to your preferred item, then lightly click once and your selection will be confirmed. You can then use the camera. There is one thing to note: if you press firmly on the Camera Control button, it will act as a shutter and take a photo with your given parameters. If you continue to hold, a video capture session will start.
As I was using the Camera Control button, I was attempting to figure out why Apple would include a dedicated button specifically for the Camera. I think there are a couple of reasons for this decision. The first is that the Camera is one of the most used features of the iPhone, so having a dedicated button makes sense. If Apple only wanted to provide quick access to open the camera app, this is already covered by the Lock Screen control as well as the Action Button, so I think there is more.
The second reason, and the more likely reason, is that the Camera app can be a bit complicated for some users. All of the features of the Camera app are hidden behind buttons, and having all of the standard options available from a single location may allow users to take even better photos with all of the tools at their fingertips.
The last reason, as far as I can surmise, is that the Camera Control button makes the iPhone 16 Pro Max feel a bit more like a standard digital camera. This is enhanced by the fact that the button is placed approximately where someone would place their index finger when using a digital camera. Furthermore, having all of the controls at your fingertips makes it easier to adjust settings when needed, without needing to look at the full screen.
Being able to change camera settings without moving your finger can be quite useful: when you are framing a photo, having to reach across the screen to change something could be distracting. Now, you can change settings without needing to move your eyes too far.
It should be noted that not every feature is available within the Camera Control button. For instance, you cannot change the current capture mode: if you want to switch from a photo to a video, or even a panoramic photo, you will still need to swipe along the screen as you did previously.
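For developers, Apple announced AVFoundation additions at WWDC24 that let third-party camera apps surface their own controls on the Camera Control. The sketch below shows the general shape of that adoption; treat the exact names and signatures as assumptions to verify against Apple's documentation.

```swift
import AVFoundation

// A minimal sketch: attaching a system zoom slider to a capture session so
// the Camera Control's capacitive swipe drives zoom in a third-party app.
func adoptCameraControl(session: AVCaptureSession, device: AVCaptureDevice) {
    guard session.supportsControls else { return }  // no Camera Control on this device

    let zoomSlider = AVCaptureSystemZoomSlider(device: device)
    if session.canAddControl(zoomSlider) {
        session.addControl(zoomSlider)
    }
}
```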
Apple does have a support guide that will show you how to Use the Camera Control on all of the iPhone 16 models. Let us now look at another improvement, which is camera-related, and that is the Ultra Wide Camera.
Ultra Wide Camera
Going all the way back to the original iPhone, each iPhone has had at least one camera. Particularly in the early days, the camera was not of the best quality, but it was there. In fact, from 2007 to 2015, there was only a single rear camera available on all iPhones. The iPhone 7 Plus, introduced in 2016, changed this. The iPhone 7 Plus was the first device to have a dual camera system, while the iPhone 7 kept the single "Main" camera.
At the time, Apple identified the two cameras as the "Main" camera and the "Telephoto" camera. The dual camera system on the iPhone 7 Plus introduced a feature common today: an easy way to switch between multiple lenses just by tapping on a button.
The next big change was in 2019, with the iPhone 11 Pro and Pro Max. These two devices were the first to introduce the Triple Camera system that has since become common on the Pro and Pro Max phones. The three rear cameras were all 12-megapixel lenses, with various aperture sizes. Apple labeled these as the "Main", "Telephoto", and "Ultra Wide" camera.
The 11, 12, and 13 Pro and Pro Max devices all had the triple camera system, and each of the cameras was 12 megapixels. Over these three generations, the ultra-wide aperture went from ƒ/2.4 on the iPhone 11 Pro Max and 12 Pro Max to ƒ/1.8 on the 13 Pro Max. As a note, the larger the aperture (the lower the ƒ-number), the more light can be let in.
Apple's camera system is a combination of both hardware and software. The triple camera system on the Pro and Pro Max remained unchanged from the iPhone 14 Pro Max to the iPhone 15 Pro Max. The iPhone 16 Pro Max largely follows this, except for one aspect, but more on that presently. The "Main" camera is a 48-megapixel ƒ/1.78 lens, the Telephoto is a 12-megapixel ƒ/2.8 lens, and the Ultra Wide has an aperture of ƒ/2.2.
Starting with the iPhone 14 Pro Max, the main camera became a 48-megapixel camera. This is 4x the number of pixels previously. The 15 Pro Max kept the same configuration as the 14 Pro Max. Comparatively, the ultra-wide cameras have all been 12 megapixels. The iPhone 16 Pro Max has an improved ultra-wide camera; it is now a 48-megapixel sensor.
The 48-megapixel sensor is an improvement in a couple of different ways. The first is that the images are larger; specifically, they have a resolution of 8064 x 6048 pixels. More pixels means more information, which means better quality. Therefore, should you compress an image, like creating a JPEG, more information will be retained, which should produce an overall better image.
The larger ultra-wide sensor brings some additional improvements, specifically with macro photography. The ultra-wide lens is what is used when taking macro photos, so macro photos can now be up to 48 megapixels, which improves their overall quality.
Dolby Vision
There is always something that separates the "Pro" line of phones from the standard line of devices. Some of these include a 120Hz ProMotion display, the Always-On display, Night Mode Portraits, the 5x optical camera lens, LiDAR scanner, and this year, Dolby Vision 4K video at 120 frames per second.
For over a decade, video recording has been a primary feature of the iPhone. In 2015, with the release of the iPhone 6s/6s Plus, Apple added 4K video recording at 25 fps or 30 fps. The iPhone 7 and 7 Plus added two more options, 24 fps and 60 fps.
You might initially think, "it can't be that much better", but actually, it is. Similar to the ultra-wide camera improvements, recording video at 120 frames per second is a significant improvement: doubling the number of frames results in smoother video, as you can see from the two items below:
You are able to record at 120 frames per second in a standard video, but you can also do the same in a slow-motion video, which is a nice improvement in quality for those types of videos as well.
As you might suspect, taking video at twice the number of frames per second increases the size of the video. As an example, the original sizes of the windmill videos above are substantially different: the iPhone 15 Pro Max 60 fps video is 204.7 MB, while the iPhone 16 Pro Max 120 fps video is 383.2 MB. Below are two more videos, this time of a passing train; the file size comparison is 209.9 MB as compared to 391.3 MB. So, the higher frame rate uses quite a bit more space.
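Interestingly, doubling the frame rate did not quite double the file sizes in either sample, which is typical of how video codecs exploit similarity between frames:

```swift
// Size ratios for the two 60 fps vs 120 fps sample pairs above.
print(383.2 / 204.7)   // windmill: ~1.87x
print(391.3 / 209.9)   // train: ~1.86x
```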
There is one last camera-related item to cover, and that is Photographic Styles.
Photographic Styles
Photos can be an intensely personal thing. What makes a photo personal could be its subject, the location, or even just the feeling it evokes. Some photographers prefer full-color photos, while others prefer black and white, and some prefer sepia-toned photos. Some may prefer a light bokeh effect, while others prefer a stronger one. Gather all of someone's preferences together, and you have their own personal style.
Alongside the introduction of the iPhone 13 line, a new feature was introduced called Photographic Styles. Photographic Styles are a way to automatically apply a consistent look, one that you like, to all of your photos.
With the iPhone 15 Pro Max, you had five different styles: Standard, Rich Contrast, Vibrant, Warm, and Cool. Each of these had its own setting for Tone and Warmth. You could adjust the value if you so chose, between -100 and 100 for either tone or warmth.
Once you selected a Photographic Style, it would be applied automatically to all of your photos; however, you could choose a different style on a photo-by-photo basis. Yet one thing was absolutely true: once you had taken a photo with a Photographic Style applied, you could not change it. If you wanted a different style, you would need to retake the photo.
The iPhone 16 line of phones takes Photographic Styles to a whole new level. There are a number of enhancements, the first being a whole new range of styles. The new list of Photographic Styles includes:
Standard
Cool Rose
Neutral
Rose Gold
Gold
Amber
Vibrant
Natural
Luminous
Dramatic
Quiet
Cozy
Ethereal
Muted B&W
Stark B&W
Each of these individual Photographic Styles has its own values for Tone and Color, again ranging from -100 to 100 for each.
The second enhancement is that under iOS 18, with the iPhone 16 line of phones, Photographic Styles are now modifiable after you take a photo. This means that you do not need to worry if a photo does not look the way you wanted; you can adjust it after the fact. It should be noted that only photos taken with the iPhone 16 line of phones can be adjusted; photos not taken on an iPhone 16 device will use the older Photographic Styles.
What this new approach means is that if you took a photo with a Cool Rose style, but now you want to make it a Muted B&W style, you can do so. You can change the Photographic Style by using the following steps:
Open Photos.
Locate the photo which you want to adjust the Photographic Style on.
Open the photo.
Click, or tap, on the Edit button.
Click, or tap, the "Styles" button.
Select the new style that you wish to apply.
There are a couple of things to keep in mind. The first is that these newer Photographic Styles are only available on iPhone 16 devices; any other device will use the older Photographic Styles, which cannot be adjusted after the fact. The second is that this is completely non-destructive, meaning you can easily change your mind at any point and adjust the photo as necessary.
These new Photographic Styles are a great improvement over the previous ones. The expanded selection means you can more easily find the right style for any photo taken with an iPhone 16 Pro Max. Now, there is one last item to discuss: power, specifically MagSafe charging.
MagSafe Charging
As has been the case since the iPhone 12, all new iPhones support MagSafe charging. MagSafe is Apple's take on the Qi charging standard; it expands upon the standard by including magnets to ensure the iPhone is properly aligned. The iPhone 16 Pro Max supports both MagSafe and Qi charging. It also supports faster charging via the updated MagSafe charging puck, provided you have a 30-watt or higher power brick connected. The faster charging allows you to charge up to 50% within 30 minutes, a 33% reduction in charging time over the previous puck, which took 45 minutes to reach 50%.
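A quick note on the charging math: reaching the same 50% in 30 minutes instead of 45 is a third less time, which is where the 33% figure comes from.

```swift
// 50% charge: 45 minutes on the old puck, 30 on the new one.
let timeReduction = (45.0 - 30.0) / 45.0 * 100   // ~33% less time
print(timeReduction)
```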
I bought one of the new MagSafe pucks, and it has a couple of changes. The first is that the cable is braided, which has become the standard for all of Apple's new cables. This does improve the feel should you need to plug or unplug the MagSafe puck. The other change is a very subtle one: the size. The charging puck has gotten smaller, not by a lot, but it is absolutely smaller. The reason I know this is that I have a Twelve South Forte stand that I purchased when Apple announced StandBy mode in June of 2023. The Forte stand holds the MagSafe puck in a specially designed receptacle, and you then attach your iPhone to the puck. The issue comes with the fit: the new MagSafe charging puck more easily falls out of the stand when you attempt to take the iPhone off the puck.
What I have determined is that if you lift the iPhone up and lean it towards the top of the stand, the puck does not fall out. However, if you pull your phone down, it will pull the MagSafe puck out of the receptacle. I hope Twelve South creates an insert that will work for the newer MagSafe puck, and if they do, I will have no problem paying for it. I am not sure whether they will put forth the effort to create one or not.
Ultimately, this is not a problem in most situations because it is not likely that you will be using that specific stand, but it is an issue for me. Therefore, it is something to be cognizant of if you plan on purchasing one of the new MagSafe charging pucks.
Obligatory Benchmarks
I have been doing reviews on my website for quite a while now, and when it comes to devices like the iPhone, iPad, and Mac, I always end up including some benchmarks. These are designed to provide a comparison to comparable devices, and the devices I compare are the ones I have on hand at the time of the review. This review is no exception. These benchmarks were done using Geekbench 6 on each of the devices below.
Device                           Chip         CPU Single-Core   CPU Multi-Core   GPU (Metal)
iPhone 16 Pro Max (2024)         A18 Pro      3497              8581             32822
13-inch iPad Pro (2024)          M4           3585              12603            55769
iPhone 15 Pro Max (2023)         A17 Pro      2749              6713             27661
14-inch MacBook Pro (2023)       M2 Max       2707              15148            127761
Mac Studio (2022)                M1 Max       2439              12825            103224
6th generation iPad mini (2021)  A15 Bionic   2157              5285             20183
Mac mini (2020)                  M1           2394              8810             34575
When looking at these benchmarks, they generally make sense. The M4 iPad Pro has the highest single-core result, followed by the iPhone 16 Pro Max, while the M2 Max and M1 Max Macs, with their extra CPU cores and much larger GPUs, lead the multi-core and GPU results. Beyond that, the devices fall roughly in line with when they were released.
One of the latest benchmarking utilities is a tool called Geekbench AI, formerly Geekbench ML. According to Geekbench AI developer Primate Labs, Inc.:
Geekbench AI is a cross-platform AI benchmark that uses real-world machine learning tasks to evaluate AI workload performance. Geekbench AI measures your CPU, GPU, and NPU to determine whether your device is ready for today's and tomorrow's cutting-edge machine learning applications.
So, here are those benchmarks. I did not run the CPU tests on these devices because, while AI can run on the CPU, it is most often handled by the GPU or Neural Engine.
Device                           Chip         Neural Engine (Single / Half / Quantized)   GPU (Single / Half / Quantized)
iPhone 16 Pro Max (2024)         A18 Pro      4528 / 32797 / 45495                        5785 / 6721 / 6128
13-inch iPad Pro (2024)          M4           4700 / 37154 / 53228                        9077 / 10961 / 9681
iPhone 15 Pro Max (2023)         A17 Pro      3845 / 23866 / 33304                        4762 / 5663 / 5127
14-inch MacBook Pro (2023)       M2 Max       4178 / 23241 / 25530                        12645 / 13840 / 12635
Mac Studio (2022)                M1 Max       3811 / 14996 / 14799                        10060 / 11174 / 9500
6th generation iPad mini (2021)  A15 Bionic   3154 / 16471 / 17791                        2717 / 2925 / 2902
Mac mini (2020)                  M1           3228 / 13885 / 14120                        4105 / 5728 / 4340
As is the case with the CPU and GPU benchmarks, the Geekbench AI results follow a similar pattern: the M4 iPad Pro posts the highest Neural Engine scores, while the M2 Max and M1 Max Macs lead on the GPU. When you compare the iPhone 16 Pro Max to the iPhone 15 Pro Max, there is a substantial improvement. The Neural Engine's single-precision result is nearly 18% better, while the half-precision and quantized results are around 37% better each. Similarly, the GPU results are up 21%, 19%, and 20% respectively. This is a decent year-over-year improvement.
A good comparison is the original M1 against the A18 Pro. The improvements made in just four years are not too bad either. For the GPU, single precision is up 40.9%, half precision 17.3%, and quantized 41.2%. The biggest gains, though, are with the Neural Engine. The Neural Engine's single-precision result has improved 40.3% over the M1, which is a good change, but the half-precision and quantized results are roughly 2.4x and 3.2x the M1's scores. Those are incredible improvements in just four years.
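The percentages in the two paragraphs above are straightforward to reproduce from the tables:

```swift
// Percentage improvement between two Geekbench AI scores.
func improvement(_ old: Double, _ new: Double) -> Double {
    (new - old) / old * 100
}

print(improvement(3845, 4528))    // A17 Pro -> A18 Pro, NE single: ~17.8%
print(improvement(23866, 32797))  // A17 Pro -> A18 Pro, NE half: ~37.4%
print(improvement(13885, 32797))  // M1 -> A18 Pro, NE half: ~136% (about 2.4x)
print(improvement(14120, 45495))  // M1 -> A18 Pro, NE quantized: ~222% (about 3.2x)
```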
Closing Thoughts
The iPhone 16 Pro Max may initially seem like an incremental update over the iPhone 15 Pro Max, and in some aspects, it is. However, there are a number of improvements. Some of these are hardware, and some are software. In terms of hardware, the iPhone 16 Pro Max now sports a 6.9-inch screen. This is a 0.2-inch increase, but the physical size of the device has also had a modest increase to accommodate the new screen size. The device could have been bigger, but Apple reduced the bezels even further to help minimize the size change. The battery life of the iPhone 16 Pro Max has increased to 33 hours for video, 29 hours for streamed video, and up to 105 hours for audio playback. Part of the battery life increase is due to the nearly 6% larger battery, but that is not the only reason.
The iPhone 16 Pro Max is powered by the A18 Pro, which brings the iPhone 16 line back to a single generation of processors. The A18 Pro has 17% more memory bandwidth than the A17 Pro; while that extra bandwidth may draw incrementally more power, it means tasks complete faster, which ultimately reduces battery usage. Beyond this, the A18 Pro uses the new "N3E" manufacturing process, TSMC's latest 3-nanometer process, which results in 32% less power for the same speeds.
The A18 Pro has enabled a whole new camera system, the Fusion Camera system, which allows you to take a variety of photos, with the best camera being used for each. There are a bunch of camera-related changes with the iPhone 16 Pro Max. The first is the new Camera Control. This new button is more than just a shutter button, although it can be used as such; the Camera Control can select various features of the camera, including the specific camera to use, zoom, exposure, depth, Photographic Style, and tone. The Camera Control is not just a standard button; it is also capacitive, meaning you can slide your finger to adjust values. And Camera Control is not limited to Apple's Camera app; you can use any supported third-party app as your default.
When you do use the camera on the iPhone 16 Pro Max, you will likely see some significant enhancements, particularly with the Ultra Wide camera. The iPhone 16 Pro Max has a 48-megapixel Ultra Wide sensor, 4x the pixels of the previous 12-megapixel camera on the iPhone 15 Pro Max. This will improve not only your RAW photos but all photos that you take with the Ultra Wide camera. Macro photos in particular are improved: taking a macro photo uses the Ultra Wide camera, so those photos can now be 48 megapixels.
The iPhone 16 Pro Max is also capable of using the new Photographic Styles. You can take a photo with a Photographic Style already applied, or you can apply one, or even choose a different one, after the fact. This can be done on iOS 18 and iPadOS 18, as well as on macOS Sequoia. Only photos taken with an iPhone 16 device can use these new Photographic Styles, but being able to change them after the fact is a great upgrade.
Beyond the new Ultra-Wide camera, the iPhone 16 Pro Max is capable of recording in 4K Dolby Vision at 120 frames per second. This is double the previous frame rate of 60 frames per second. Additionally, slow-motion videos can also be recorded at 120 frames per second.
There are a few smaller changes to the iPhone 16 Pro Max, including Wi-Fi 7, making it the first Apple device to support the latest Wi-Fi standard. You can also record Spatial Audio, thanks to the new microphone array with wind noise reduction, and you can take Spatial Photos.
If you have an iPhone 13 or older, you cannot go wrong with any of the iPhone 16 line of devices. The iPhone 16 Pro Max continues to have the largest screen and supports the best camera system that Apple has offered to date. It will also support Apple Intelligence, which has not yet been released but will roll out over the course of the year. While the iPhone 16 Pro Max may be a less substantial update compared to the iPhone 15 Pro Max, it is a fantastic update for anybody using an older iPhone.
It might be hard to believe, but it has been just over a decade since the Apple Watch was announced. I have owned an Apple Watch since May 1st of 2015. In almost 9.5 years of owning an Apple Watch, I have seen it go from being an "iPhone replacement" to primarily being a health and fitness device. It can be argued that the original Apple Watch was released a bit too early: it was a bit slow and relied too heavily on the iPhone. Even today, the Apple Watch still requires an iPhone to begin pairing, but once paired, the Apple Watch is able to do a lot more on its own.
I am not a typical user when it comes to some of my devices; I upgrade my iPhone and Apple Watch each year. Given that I have owned an Apple Watch since not long after it was released, that means I have owned 9 different Apple Watches to date. I have purchased the aluminum version of each Apple Watch, in the largest screen size available.
In 2022, when the Apple Watch Ultra was announced, I thought about getting that version, but decided to stick with the aluminum model. Last year, when Apple announced the Apple Watch Ultra 2, I again thought about getting it, but there was no dark version. This year, Apple did finally announce a black Apple Watch Ultra 2, but no Ultra 3. Besides the new bands and the new color, the Apple Watch Ultra 2 still has the same S9 System in Package (SiP), so I opted not to buy it. Therefore, once again, I have gotten the latest aluminum Apple Watch, this time the Apple Watch Series 10.
You might think that it is somewhat straightforward to compare a device to its immediate predecessor, and sometimes it is, and sometimes it is not. It depends on what new features there are on the Apple Watch. This year I will be reviewing the Apple Watch Series 10 and comparing it to the Series 9, when appropriate. With that, let us get started on the review of the Series 10, starting with the color.
Color
The Apple Watch has come in a wide variety of colors over the years, including Space Black, Silver, Rose Gold, and PRODUCT(RED), just to name a few examples. Much like the "Pro" and "Pro Max" iPhones, the Apple Watch has come in a unique color each year since 2020, with the release of the Series 6. For the Series 6, it was Gold; Series 7 was Green; Series 8 was PRODUCT(RED); Series 9 was Pink; and now the Series 10 has its own color set. The available colors are Silver, Gold, and Jet Black.
I am not one to have a bright watch; I prefer more subtle colors. For all Apple Watches from the original through the Series 6, I opted for Space Gray; for the Series 7, 8, and 9, I chose Midnight. For the Series 10, I have opted for Jet Black. As in past years, I went with the darker color. I do not like Gold, so that one was out. Silver is an entirely neutral option, but it is a bit too bright for my personal liking, so that meant Jet Black.
If you remember the iPhone 7, you may remember that there was also a Jet Black option for that phone, which was a darker color than the standard Black that was offered. The Jet Black Apple Watch Series 10 takes some of its cues from the Jet Black iPhone. Of course, the first is the name and the fact that both are a black color. The second aspect is the glossy nature of the Apple Watch Series 10.
All Apple Watches will reflect some light, but the previous aluminum Apple Watches have all had more of a matte finish. Not the Jet Black Apple Watch Series 10, nope; it is very glossy, and reflections are much more noticeable.
Overall, having a glossy black Apple Watch has not been a problem. I have not really noticed the glossy aspects, unless I happen to be really looking for a reflection. Having the darker color means that it might go with more clothing than previous models.
Physical Characteristics
The Apple Watch is primarily a screen, a battery, and a system-on-a-chip. Yes, there are health sensors and buttons, but the primary interaction method is the screen, the digital crown, and the side button. Each version of the Apple Watch does make its own enhancements and tradeoffs in order to provide the best experience for users. The Apple Watch Series 10 has its own set of physical changes. Let us look at each, starting with the size.
Physical Size
The original 2015 Apple Watch came in two sizes, 38mm and 42mm; there was no functional difference between the two models, just the size. Three years later, in 2018, the Apple Watch Series 4 increased the sizes to 40mm and 44mm, about a 5% increase. In 2021, the Apple Watch Series 7 increased the sizes again, to 41mm and 45mm, approximately a 2.5% increase. This made the Series 7 approximately 7.8% larger than the original Apple Watch.
Again, three years after the last size change, there is another one. The Apple Watch Series 10 has increased to 42mm and 46mm.
Each screen size increase has been noticeable, but none of them have been jarring. The same can be said going from 45mm to 46mm, a mere 2.22% increase, similar in scale to the step from 44mm to 45mm. Even that modest increase allowed the Series 7 to offer a new option, a full keyboard. The increase to 46mm likewise allows for some enhancements, including one additional line of text in apps like Mail and Messages, which may not seem like a lot but provides usefully more screen space.
While each step has been minor, the cumulative change is more substantial. Comparing the 46mm to the original 42mm, the difference is approximately a 9.52% increase in size. The smaller Apple Watch model has seen a similar increase over the last ten years, specifically 10.5%.
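If you want to double-check those percentages, the arithmetic is straightforward. Here is a small Python sketch that reproduces the figures above from the case sizes alone:

```python
# Verifying the screen-size percentages from the case sizes (in mm).
large = [("Original", 42), ("Series 4", 44), ("Series 7", 45), ("Series 10", 46)]

for (prev_name, prev), (next_name, nxt) in zip(large, large[1:]):
    print(f"{prev_name} -> {next_name}: {(nxt - prev) / prev:.2%}")
# Original -> Series 4: 4.76%
# Series 4 -> Series 7: 2.27%
# Series 7 -> Series 10: 2.22%

print(f"Large model overall: {(46 - 42) / 42:.2%}")  # 9.52%
print(f"Small model overall: {(42 - 38) / 38:.2%}")  # 10.53%
```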
One of the items that is directly related to the size is the weight of the Apple Watch. So let us look at this next.
Weight
Each Apple Watch is an opportunity for Apple to adjust the internals. One of the benefits of increasing the physical size of the screen is that it provides a bit more area for housing components of the Apple Watch. If you have more area, you can possibly spread out components.
Even with the screen size increasing over the years, the depth of the Apple Watch has remained largely the same. The original Apple Watch was 10.5mm deep, the Series 2 and 3 increased to 11.4mm, and the Series 4 through Series 9 all went back down to 10.7mm. The Apple Watch Series 10 changes this dramatically: at 9.7mm deep, it is the thinnest Apple Watch that Apple has created, roughly 9.4% thinner than previous models, and the difference is absolutely noticeable.
You might think the difference is an optical illusion. To show that it is indeed real, here is the same photo with a level indicator:
With the decrease in depth, the weight of the Series 10 has also decreased. The Series 10 has gone down to 36.4 grams, which is down from 38.7 grams on the Series 9. This is a modest savings in terms of weight, and you may not notice the difference on a day-to-day basis, but any reduction in weight is a welcome change nonetheless. Let us now look at the screen on the Apple Watch Series 10.
Screen
The Apple Watch has maintained the same pixel density, 326 pixels per inch, since the introduction of the Apple Watch Series 4 in 2018, and that remains true for the Series 10. One of the bigger changes is the technology behind the screen. The Apple Watch Series 4 introduced a low-temperature polycrystalline oxide, or LTPO, screen, though the Series 4 did not utilize all of the features of this technology.
The benefit of an LTPO screen is that it can reduce the amount of power used by adjusting the refresh rate; this is what makes the Always-On feature possible, specifically with refresh rates that can drop to 10Hz. The Series 10 includes an LTPO3 display, the third generation of the technology, which can go all the way down to 1Hz. This reduces battery consumption even further, which is particularly useful for a watch with an Always-On display. The screen technology is not the only change.
If you are wearing your Apple Watch, you might not consciously think about it, but there are many times throughout the day when you actually end up looking at your watch, at some sort of angle. This could be when you are typing and you want to catch the current time, or it could be glancing at a notification that just came in. It could also be while you are exercising and want to see the current heart rate or split time. The Apple Watch Series 10 has been designed to allow you to see the screen while viewing it at an angle.
The Apple Watch Series 10 has what Apple is calling a "Wide‑angle OLED" display. According to Apple, the screen is up to 40% brighter when viewed at an angle, which means it should be easier to see when you are not looking directly at it.
When I first started using the Apple Watch Series 10, I did not think that there was a big difference when viewing the Apple Watch at an angle. However, while I was working on this review, I did end up noticing that the screen is indeed brighter when viewing the Apple Watch at an off angle. It might not seem like it is all that useful, but I can see how it might be helpful for those instances when you really do need it.
On the topic of features that you might not use that often, there is another one, the speaker.
Speaker
The Apple Watch has always had a speaker; however, you have only been capable of using it in limited situations. The speaker can be used for notification sounds, making noise while ejecting water through the speakers, and you can even use it for phone calls. With the Series 10, there is another usage: listening to music.
Typically, you would not want to listen to sustained audio on a speaker unless that is the intended use case of the device, like the Beats Pill. The reason to avoid doing so is that playing audio through the speaker on the Apple Watch, or the iPhone for that matter, requires a lot more energy; physically moving the speaker to produce the sound takes a lot of power.
I went looking for the actual speaker on the Series 10 and went to compare it to the Series 9, and I was a bit surprised by what I saw. On the Series 10, the speakers look just like a standard set of speakers, being a series of holes that are placed to produce the best sound. What was more interesting is the Series 9. Instead of having speaker holes, there is a slot.
If you shine a light on the slot, you will see there is a grate behind it, but I find it intriguing that it is a slot at all. It is not that I did not know this, but I do not typically pay much attention to the speaker because I rarely use it; I always have my Apple Watch set to silent.
Playing music from the Apple Watch
The manner in which you actually play music may not be the most intuitive. In order to play a song on the Series 10, use the following steps:
Open the Music app on your Apple Watch.
Tap on the "…" icon in the upper right corner.
Tap on the "AirPlay" button.
Scroll down to the bottom of the list.
Tap on "Control Other Speakers & TV".
Tap on "Apple Watch".
Tap on the "Play" button to begin playing music.
As mentioned, the steps for being able to play music through the speaker can be a bit obscure, but it does indeed work. Now, let us talk about the actual experience.
Speaker Experience
I tested the speaker by playing a song that I synchronized to my watch: "Good Luck, Babe!" by Chappell Roan. As you might have guessed, the result was not the most bass-heavy. You cannot expect a lot of bass out of such a small speaker, but even without it, any music that you do play is actually listenable.
I have been trying to come up with a plausible use case for this, and it finally occurred to me that this could be useful if you would like to listen to music while going to sleep but you do not want to use your phone.
It is not likely that many will use the speaker for playing music, but when you need it, it can be a useful thing to have. Do not forget to be considerate of others while in public; not everybody may want to listen to the same music. As mentioned above, it takes a lot of power to play music through the speaker. If you do happen to use the music playing option often, you may find yourself needing to charge your Apple Watch more. On the topic of the battery, let us cover that next.
Battery
Given the physical size and the various features of the Apple Watch, you should not expect multi-day battery life. Most people end up charging every day, and some users will partially charge their Apple Watch multiple times per day. Even with all that the Apple Watch does, Apple states that it gets "All-day battery", which they define as 18 hours of battery life. In fact, every single Apple Watch has carried the same 18-hour claim.
Given that I am close to an Apple Watch charger most of the time, I typically charge my watch a couple of times a day. Normally, for my Apple Watch reviews, I do not change my charging habits. However, I decided to test Apple's claims of battery life. Since I use the Sleep Tracking features overnight, I end up charging just before going to bed.
I charged my watch and I removed it from the charger at 7:57 p.m. At that point, it was at 96%. I went through my day as normal, exercised once, and about 23 hours later, the battery level was still at 22%. This is well beyond the 18-hour battery claims, which is definitely a good thing.
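Some quick back-of-the-envelope math shows just how far beyond the 18-hour claim that result is. This assumes a steady drain rate, which real usage certainly is not, but it gives a sense of scale:

```python
# Off the charger at 96%, down to 22% about 23 hours later.
start, end, hours = 96, 22, 23

rate = (start - end) / hours      # percentage points used per hour
print(f"{rate:.1f}% per hour")    # ~3.2% per hour

# At that pace, a charge from 100% would last roughly:
print(f"{100 / rate:.0f} hours")  # ~31 hours
```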
There are ways of extending your battery life, like turning off the Always-On display and putting your Apple Watch into Theater Mode, just to name a couple. However, the options are limited, and they will not stop the Apple Watch from performing actions in the background.
Last year, with the introduction of the Series 9, Apple introduced a whole new feature, Low Power Mode. When you put your Apple Watch into Low Power Mode, background tasks will be reduced so that your battery will last even longer. This can be particularly useful if you are not able to charge right away but want to still use your Apple Watch. I did not test Low Power mode, so I do not know if Apple's claims of 36 hours of battery life are accurate, but even if it is not 36 hours, it is likely close to it.
I was thinking about the fact that I still had 22% battery power remaining after 23 hours of usage, but I wonder if my battery level would have been lower if one of the apps was still available. More on that a bit later. Let us look at another battery-related item: charging.
Charging
When the Apple Watch was introduced, Apple indicated that it had magnetic charging. The physical size of the Apple Watch means that standard chargers, like a Qi charging puck, are not always practical. This is particularly true depending on the type of band attached; with a Solo Loop, for example, it is not possible to fit the Apple Watch around a standard Qi charging puck.
For these situations, you need a way to charge while keeping the band connected. Therefore, Apple opted to create a custom charging puck, one that fits perfectly on the bottom of the Apple Watch. The Apple Watch charger originally had a USB-A connection, but it has since been converted to USB-C.
The charging puck had one significant benefit: it is magnetic. In fact, Apple's MagSafe charging for the iPhone was likely influenced by the Apple Watch magnet, as well as MagSafe introduced on the MacBooks back in 2006. Having a magnetic charger makes it much easier to be assured that your Apple Watch will be charged when you need it. However, if your Apple Watch battery is low, it may take some time to get a sufficient charge. Starting with the Apple Watch Series 7 in 2021, Apple offered an additional option: fast charging.
According to Apple, fast charging on the Series 7, 8, and 9 can take the watch from 0% to 80% in 45 minutes. That is not a bad amount of time at all, and it is probably sufficient for most people. Yet, there is room for improvement.
The Apple Watch Series 10 improves on this by fast charging from 0% to 80% in 30 minutes, a 33% reduction in charge time. There is a caveat: you will need a 20-watt power adapter, or higher, to get the maximum fast charging speed with the Series 10. While it may not always be needed, fast charging to 80% within 30 minutes can be useful when you really do need to charge your Apple Watch quickly.
One of the reasons that you might need to make sure that you have sufficient charge on your Apple Watch is so you can do some sleep tracking, and there is a new feature to look at: Sleep Apnea.
Sleep Apnea
Even though the Apple Watch was originally intended to be an "iPhone replacement", complete with all sorts of apps, it has not turned out that way; and that is a good thing. Instead, the Apple Watch is best used for notifications, health, and fitness tracking. Over the years, new health features have been added including the Blood Oxygen sensor, ECG app, and Sleep Tracking, just to name a few. Well, another item can be added to that list, Sleep Apnea.
If you have an Apple Watch Series 9, Apple Watch Ultra 2, or an Apple Watch Series 10, you can get some information regarding whether or not you may experience Sleep Apnea. According to the Mayo Clinic, "Sleep apnea is a potentially serious sleep disorder in which breathing repeatedly stops and starts." Apple has some information about Sleep Apnea as well. According to Apple:
Sleep apnea is a common and treatable disorder that negatively impacts people's health and quality of life. Despite advances in public awareness about the importance of sleep, most cases of sleep apnea remain undiagnosed. Apple Watch can track Breathing Disturbances during sleep and provide notifications of possible sleep apnea if Breathing Disturbances values reach a level associated with moderate to severe sleep apnea.
There are actually two components to the Sleep Apnea feature. The first is a measurement of "Breathing Disturbances", which is the core of the feature. The second is the actual notification: with iOS 18, you may get a notification indicating that you might be experiencing Sleep Apnea, but only if the number of disturbances is classified as "Elevated" over the course of 30 days. As with most of the Apple Watch health features, the data appears in the Health app on your iPhone.
You might suspect that the Sleep Apnea feature is powered by the Blood Oxygen sensor, but it is not. Instead, Apple has developed an accelerometer-based algorithm that takes into account movement along all three axes of the accelerometer to determine how many breathing disturbances you have experienced over the course of the night.
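Apple has not published the specifics of the algorithm, so purely as an illustration, here is a sketch of what counting disturbances from three-axis motion data might look like. Every threshold, window, and value here is invented:

```python
import math

def count_disturbances(samples, threshold=0.05, refractory=50):
    """Count moments where wrist motion departs from rest.

    samples: list of (x, y, z) accelerometer readings, in g's.
    A refractory window keeps one movement from being counted
    repeatedly. All values here are invented for illustration;
    Apple's actual model is unpublished.
    """
    count, cooldown = 0, 0
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if cooldown > 0:
            cooldown -= 1
        elif abs(magnitude - 1.0) > threshold:  # departure from ~1g rest
            count += 1
            cooldown = refractory
    return count

# A mostly-still night with one brief stir of movement.
still = [(0.0, 0.0, 1.0)] * 100
stir = [(0.1, 0.05, 1.1)] * 5
print(count_disturbances(still + stir + still))  # -> 1
```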
It will take some time for the data to be collected. It should also be mentioned that "…neither component is intended or cleared for use by people who are already diagnosed with sleep apnea." Therefore, if you already have Sleep Apnea, this feature may not be worth enabling. If you do want to set it up, you can use the following steps:
Open the Health app on your iPhone.
Tap on your Avatar in the upper corner.
Locate "Sleep Apnea Notifications".
Tap on "Set Up".
Here you will be able to follow the steps to enable Sleep Apnea notifications. You can read a lot more about the methodologies and research in the Sleep Apnea Notifications on Apple Watch PDF on the Apple Health website. While on the topic of health, let us talk about a missing feature, the Blood Oxygen Sensor.
Blood Oxygen Sensor
In 2020, medical device company Masimo sued Apple for infringing patents related to the Blood Oxygen sensor in the Apple Watch. The final outcome came in December of 2023, when the International Trade Commission ordered Apple to cease imports of the Apple Watch Series 9 and the Apple Watch Ultra 2. When the order was made, sales of these devices ceased. Apple appealed and was able to begin selling the devices again, but not until United States Customs and Border Protection determined that the changes Apple made were sufficient. Ultimately, they were. When the devices went back on sale, there was a change: the Blood Oxygen app was no longer functional.
It has now been more than 9 months since that change was made, and the feature is still disabled in the United States. What does this mean for the Apple Watch Series 10? It is subject to the same restriction, which means that the Blood Oxygen app is non-functional. As far as anyone can tell, the hardware remains in the device, so if there is a resolution, the app could be re-enabled; but as of this writing, it is still not.
The fact that there has been no resolution in 9 months is absolutely a failure on Apple's part. That is not to imply that Apple is the party at fault in the dispute, but Apple is the one who controls the design and functionality of the Apple Watch. Many may see the Blood Oxygen sensor as unnecessary, but removing a previously functional feature is a downgrade. As Apple's own Senior Vice President of Hardware Technologies, Johny Srouji, stated in an interview with Geekerwan, "Our goal is to build the best product delivering the absolute best user experience".
The interview is about the iPhone 16 and iPhone 16 Pro, but it is not a stretch to say that it applies to the Apple Watch as well. Therefore, not having the Blood Oxygen Sensor, in my opinion, does not make it the best product that Apple could make, at least not in the United States. If you are outside of the United States, you are not affected by this ban. In order for Apple to be able to claim that the Apple Watch Series 10 is the best product, it should include the full capabilities of the Blood Oxygen Sensor.
Closing Thoughts
If you are wearing an Apple Watch Series 7, or older, the Apple Watch Series 10 is a great upgrade. The larger screen size may hardly be noticeable, but any increase that allows for more information to be shown on the screen is a welcome one. When you are using the Apple Watch Series 10, you may notice that you can see better when your Apple Watch is at an angle. This is due to the new Wide-angle OLED display on the Series 10.
If you are like me and prefer a darker Apple Watch, you cannot go wrong with the Jet Black Series 10. It is a darker color than the Series 9 Midnight, and it has a gloss finish. If you look closely at the glossy Jet Black, you may notice that the speaker arrangement is different; this is because the speaker can now be used to listen to music. It may not be useful all of the time, but there may be times when it comes in handy.
Should you find yourself pressed for time with your Apple Watch low on battery, you may be in luck. You can now charge from 0 to 80% in about 30 minutes, an improvement over the 45 minutes it previously took to get the same amount of charge.
If you live in the United States, the decision to upgrade may be a tough one. If you are upgrading from a Series 4 or Series 5, you absolutely should: not only do you get the larger screen, but you will instantly notice the speed increase, and since the Series 4 and 5 did not have a Blood Oxygen sensor, you will not be losing anything. But if you are upgrading from a Series 6 or newer, and the Blood Oxygen sensor is important to you, you will need to decide whether losing it is worth the trade-off.
There are numerous occasions throughout a day when you might want to consume some sort of media. This could be a song, a podcast, audiobook, or even a movie. If you are perusing social media, you may come across a video that was shared by someone you follow. If you are in a place where it would not disturb others, you might be able to listen using the speakers on your device. However, that is not likely the case. Quite often there are others about and unless they are watching the content on your screen at the same time, you may want to use headphones.
There are a variety of headphone types: in-ear, on-ear, and over-ear. For me, the vast majority of the time I use a pair of Beats Studio Pros to listen to most of my audio content, at home and at work. The Beats Studio Pros are over-ear headphones, so they are a bit bulky, which is fine at home and work but does not work while I am out and about. In those instances, like when I am out grocery shopping or going for a walk, I use a pair of AirPods.
It is quite possible that you are aware, but in case you are not, Apple has been in the headphone game for a long time. In fact, they created the "iconic" white headphones that many identify as Apple, back in 2001 when they introduced the original iPod. In the intervening 23 years, they have produced a wide variety of different headphones.
From 2001 to 2014, each new version of the headphones released by Apple, excluding the iPhone Bluetooth headset, connected to devices via a wire. For a majority of the time, Apple utilized a standard 3.5mm, or 1/8-inch, headphone jack. In 2016, Apple removed the headphone jack from the iPhone 7 and iPhone 7 Plus. You could use a pair of Lightning EarPods, which were just Apple's EarPods but with a lightning cable. There was another option, called AirPods.
AirPods History
In 2016, at the same event where Apple introduced the iPhone 7 and iPhone 7 Plus, Apple introduced a new set of headphones, this time without wires, called AirPods. AirPods are Bluetooth headphones that work with any Bluetooth-enabled device, but when paired with an Apple device they get a few extras. The original AirPods took the shape of Apple's EarPods, essentially with the wires cut off.
The AirPods offered an easy way to pair, which was a tremendous improvement. Pairing was as simple as opening the case and hitting the "Connect" button. Seriously, that was all it took. This was a vast improvement over standard Bluetooth pairing, which does not always work that well.
One of the extra features is that your AirPods will appear on ALL of your devices signed in to the same Apple Account, meaning there is no need to pair your AirPods with each device you want to use. Furthermore, you can easily switch between devices by simply opening Control Center and selecting the AirPods.
Apple sold the original AirPods until 2019, when they introduced the 2nd generation AirPods. These improved on the previous model with Siri announcements, additional talk time, and an optional wireless charging case.
Later in 2019, Apple also introduced a new set of AirPods, the AirPods Pro. The AirPods Pro introduced a new shape, one that went in the ear and created a seal. This seal was needed for a new feature, called Active Noise Cancellation.
A mere two years later, in October of 2021, Apple introduced the 3rd generation AirPods. These took inspiration from the AirPods Pro in shape, but did not have active noise cancellation.
In September of 2022, Apple introduced an updated pair of AirPods Pro, the AirPods Pro 2. These had improved battery life and Bluetooth 5.3; a later revision in 2023 added a USB-C charging case.
Now, Apple has released a new set of AirPods, the AirPods 4. I bought a pair of AirPods 4 with Noise Cancellation in hopes of having them fit a bit better than the 3rd generation ones. As a note, I will use AirPods 4 with ANC throughout this post, as a shortened form of the official name, "AirPods 4 with Active Noise Cancellation". Let us now get into the actual device, starting with the pairing experience.
Pairing
As is the case with all of Apple's AirPods, pairing your AirPods to an iPhone, iPad, or Mac is an extremely straightforward process. You simply perform the following steps:
Open the case.
Watch the popup appear on your iPhone/iPad/Mac.
Tap, or Click, on the "Connect" button.
That is it, you are done. Apple has designed all AirPods to make pairing very easy. This has always been a nice touch and is so much easier than traditional Bluetooth headphones. It is good to see that the out-of-box experience for the AirPods has remained the same, because it really is quite elegant. Now, let us move on to the case.
The Case
All AirPods cases have the same general look: a flip top to access the actual AirPods, a charging port at the bottom, a pairing button on the back, and an LED, on the front or inside, to indicate charging status. The AirPods 4 with ANC case has the same, but it is a bit smaller, most noticeably in width. The 3rd generation AirPods case is 2.14 inches (54.40mm) wide, while the AirPods 4 with ANC case is 1.97 inches (50.1mm) wide. Height and depth are very close: the 3rd generation case is 1.83 inches (46.40mm) tall and 0.84 inches (21.38mm) deep, compared to 1.82 inches (46.2mm) tall and 0.83 inches (21.2mm) deep for the AirPods 4 with ANC. Weight is similar as well: the 3rd generation case is 1.34 ounces (37.91 grams), and the AirPods 4 with ANC case is 1.22 ounces (34.7 grams), an 8.5% reduction that should be somewhat noticeable.
One of the big changes made with the 3rd generation AirPods was that there were two case options: one that charged only via a Lightning cable, and another that also offered MagSafe charging in addition to Lightning. The MagSafe case can also charge on any Qi-compatible charger.
The 3rd generation AirPods that I purchased had a Lightning connector, along with the option of charging via MagSafe. However, I almost exclusively used an Apple Lightning Dock for charging, for two reasons. First, I had already purchased the Lightning Dock, and it was just sitting there. Second, I prefer to charge my AirPods overnight, and since I charge my iPhone using a MagSafe adapter, I used the dock for my AirPods.
The AirPods 4 with ANC modify the charging case a bit. The first change is that it charges via a USB-C cable instead of Lightning. The second change is that wireless charging now also works with the Apple Watch charger. Overall, this is a most welcome change, but it does have one particular downside.
The changes needed to allow the Apple Watch charger to work mean that the AirPods 4 with ANC case no longer stays in place on Apple's MagSafe charging puck. Laid flat, the AirPods will indeed charge; however, if the charging puck is mounted vertically, the case does not stay put. This did work with the 3rd generation AirPods.
Since the AirPods 4 with ANC support charging with an Apple Watch charger, that is the method I will likely use, because it is an overall better option for me. I use the Sleep Tracking features on my Apple Watch, so I do not need the Apple Watch charger overnight; therefore, I can use it for the AirPods instead of needing to plug them in.
The last feature of the case is a speaker. This is an absolute necessity if you often misplace your AirPods: when you request that a sound be played from the Find My app, you no longer need to strain to hear it. The sound is quite loud and should make it a lot easier to find a missing pair of AirPods. Let us switch gears and look at the design of the AirPods themselves.
AirPod Design Changes
When you open up the AirPods 4, you will notice that they look very similar to the 3rd generation AirPods, yet they are subtly different. In fact, they are a mix of the AirPods 2nd Generation and the 3rd generation AirPods. The shape is a bit smaller, but it does still resemble the 3rd generation AirPods.
Another change is the sensor used to detect whether the AirPods are in your ears. The 3rd generation AirPods used a skin detector; the AirPods 4 with ANC use an optical detector. I am sure there are technical differences between the two, but for all practical purposes, they function exactly the same, so the technology used does not matter all that much.
There are some other changes, like the vents. On the 3rd generation AirPods, the vents were on the back side of each AirPod; on the AirPods 4 with ANC, they are on the portion of the AirPod that goes in your ear, although they do not actually sit inside your ear canal.
These changes are needed because the entire audio system was reworked. Apple's AirPods site states:
Enjoy every note, beat, and vibe. The entirely new acoustic architecture uses an Apple-designed low-distortion driver powered by a custom high dynamic range amplifier. Put simply, you hear music in exceptional detail, with deeper bass and crystal-clear highs.
I can attest that the bass is definitely improved with the new AirPods 4 with ANC. The way I tested this was by connecting the left AirPod of the AirPods 4 with ANC to my iPhone 16 Pro Max and the right AirPod of my 3rd generation AirPods to my iPhone 15 Pro Max. I then chose three different songs and listened to them all the way through, completely in sync.
In order to make sure everything was the same, I did not use Spatial Audio. I had the audio mode on the AirPods 4 with ANC set to "Off". I could easily hear the new AirPods 4 with ANC more clearly at lower volumes than I could with the 3rd generation AirPods.
Ultimately, the new design really does make an improvement. I also think there was an additional item coming into play with why the AirPods 4 with ANC sounded better, and that is the fit. Let us get a sense of that now.
AirPod Fit
When Apple introduced the 3rd generation AirPods, they mentioned that they took thousands of scans of various ear shapes to be able to create a single shape that fits as many people's ears as possible. All of those scans resulted in the shape seen for the 3rd Generation AirPods.
The 3rd generation AirPods had a different shape, designed to direct sound straight into the ear canal and to fit in more people's ears, but it was not the best fit for everyone. I am a prime example. In my review of the 3rd generation AirPods, I stated:
Even though the 3rd generation AirPods do indeed fit in my ears, they do not sit as comfortably as the 2nd generation. The 2nd generation AirPods seem to just have an overall better fit where they rest easily on my ears. Meanwhile, the 3rd generation AirPods tend to stick out a bit more. The 3rd generation still fit in my ears, just not as well. As a tip, be sure to twist them forward after you have put them in your ears; this should help them fit a bit better.
I have been trying to use the 3rd generation AirPods lately, and I will admit that I kept forgetting the tip in the last line of that quote: twisting them forward after putting them in your ears. Here is a graphic of what I mean:
Now, the AirPods 4 have refined the shape of the AirPods, yet again. If you look at a side-by-side comparison of the 3rd generation AirPods and the AirPods 4 with ANC, you will see that the AirPods 4 have a slightly more narrow shape. Here are the two directly next to each other.
Overall, for me, the AirPods 4 with ANC have a much better fit. If the 3rd generation AirPods did not fit that well for you, it may be worth checking out the AirPods 4 with ANC. Now, let us move to another feature, Siri Announcements.
Siri Announcements
One of the features introduced with the 2nd Generation AirPods was the ability for Siri to announce messages and phone calls. When the feature was introduced, there was no way to indicate what action to take, if any, but now, that changes.
Let us say that you are in a public place, your hands are full, and you receive a phone call. Siri will make an announcement saying "Phone call from Johnny Appleseed. Answer it?". When this happens, you have a couple of options: verbally say "No", or just let it ring through to voicemail, which is not the optimal solution. Now, there is another option.
Back in June at Apple's World Wide Developer Conference 24, they announced a new feature coming to the AirPods Pro 2. That feature is the ability to handle a Siri Announcement without looking at your phone, saying anything, or even tapping on your AirPods. You can now make a selection by simply nodding or shaking your head.
When you nod your head, it answers "Yes"; similarly, when you shake your head, it means "No". This is made possible through the accelerometer within the AirPods, and it requires an H2 chip, which both the AirPods 4 with ANC and the AirPods Pro 2 have.
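Apple does not document how the gesture detection works internally, but conceptually, a nod and a shake rotate the head around different axes. Here is a hypothetical sketch of that idea; the axis framing, the margin, and the function itself are my own invention:

```python
def classify_gesture(pitch_rates, yaw_rates, margin=2.0):
    """Return "yes" for a nod, "no" for a shake, None if ambiguous.

    pitch_rates / yaw_rates: rotation-rate samples around the
    ear-to-ear and vertical axes. These values and the margin are
    invented; Apple has not published the real detector.
    """
    pitch_energy = sum(p * p for p in pitch_rates)
    yaw_energy = sum(y * y for y in yaw_rates)
    if pitch_energy > margin * yaw_energy:
        return "yes"  # head moved mostly up and down: a nod
    if yaw_energy > margin * pitch_energy:
        return "no"   # head moved mostly side to side: a shake
    return None       # unclear movement: take no action

# A strong up-and-down motion with almost no side-to-side motion.
print(classify_gesture([0.8, -0.7, 0.9], [0.05, -0.02, 0.04]))  # yes
```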
I do not get that many phone calls, so it is unlikely that I will use this feature often, but for those that do receive a lot of phone calls, this could become an invaluable feature. Let us now hear a bit about the available Audio Modes on the AirPods 4 with ANC.
Audio Modes
Many of Apple's headphones have various audio modes that you can use, depending on the audio being played. You can use Noise Cancellation, Transparency Mode, or turn the audio modes off entirely. There are use cases for each, so let us hear about each in turn, starting with Active Noise Cancellation.
Active Noise Cancellation
There are many situations where you may want to reduce background noise in order to hear your audio better. This could be because you are in an office with others, on an airplane wanting to mask the engine noise, or simply out in public where there is a lot of ambient noise. Whatever the reason, many headphones can block out this noise using a feature called Active Noise Cancellation.
Typically, in order to get Active Noise Cancellation, you need a seal with your ear, or at least a seal of some sort. With the on-ear Beats Solo Pros and the over-ear Beats Studio Pros, a seal is easily created because they press against, or fully cover, the ear. The same applies to the AirPods Pro: because they are in-ear headphones, they are designed to provide a good seal, aided by the Ear Tip Fit Test, which helps you select the proper tips for your ears so you get the best seal possible.
Something you do not generally see is Active Noise Cancellation in an "open-ear" headphone, which is what the AirPods are. However, Apple has made this a reality. Active Noise Cancellation works by listening to the noise outside of your headphones, generating an inverted, anti-phase copy of that sound, and blending it with the audio you are listening to. The inverted wave cancels the incoming noise, so a significant amount of background sound is removed.
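To make the anti-phase idea concrete, here is a toy example. Real ANC systems run adaptive filters against live microphone input; this only demonstrates the underlying principle that a wave plus its inverted copy sums to silence:

```python
import numpy as np

# A steady 120 Hz hum, standing in for ambient noise.
t = np.linspace(0, 1, 1000)
noise = 0.5 * np.sin(2 * np.pi * 120 * t)

# The "anti-noise": the same wave, phase-inverted.
anti_noise = -noise

# Played together, they cancel; the residual is what you would hear.
residual = noise + anti_noise
print(np.max(np.abs(residual)))  # 0.0 -- complete cancellation
```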
The AirPods 4 with ANC are not my first headphones with Noise Cancellation; in fact, they are the third set I have owned, after the on-ear Beats Solo Pros and the over-ear Beats Studio Pros. The difference is that the AirPods 4 with ANC are the first in-ear headphones that I have owned with Noise Cancellation.
One of the issues that I have had with ANC is that when it is enabled, it feels like my head is pressurized. This is quite an uneasy feeling for me, and I cannot handle the sensation for too long. Another issue that I have experienced is that there is a significant hiss when attempting to use ANC. I have experienced the hiss on both of my Beats headphones that have this feature. This may just be an issue with the Beats, because I do not think this is normal behavior. There is one particular time that I actually do enable Noise Cancellation, and that is when I am vacuuming.
When I am vacuuming, I am almost always listening to a podcast or audiobook, but also sometimes music, and the noise from the vacuum can be quite loud. So in order to protect my hearing, I enable noise cancellation. I deal with the pressure issues and the background hiss because it is only for a short time, and I would rather deal with that and protect my hearing than not do so.
I tested the Noise Cancellation of the AirPods 4 with ANC by vacuuming, and I was pleasantly surprised by the results. It blocked out the vast majority of the noise from the vacuum, very much like when I use my Beats Studio Pros.
Noise Cancellation mode is not the only mode; there is another mode to cover, Transparency Mode.
Transparency Mode
There are instances when you really do need to hear everything in your surroundings; you may even want the sound amplified. This is where "Transparency Mode" can be useful. With Transparency Mode, the sound of your surroundings is passed through, and even amplified, so you can hear more of what is around you.
Transparency Mode was not available on my 3rd generation AirPods, though it is available on the Beats headphones I have owned, so it is not a new feature in general, just new to this tier of AirPods. I rarely use Transparency Mode, because there are not many instances when I need to hear even more around me. Should there ever be one, it will be good to have the ability.
While Transparency Mode is not something I will use that often, I can see it being useful for those that do. But, there is another feature that I can say is quite useful, that feature is called Adaptive Audio.
Adaptive Audio
It is quite likely that you have had to adjust the volume of the audio multiple times in a short amount of time. The reason that you may need to adjust the volume is likely due to the ambient noise that is going on around you. What would be great is if your headphones could automatically adjust the audio for you. Guess what, the AirPods 4 with ANC can absolutely do this with a feature called Adaptive Audio.
Adaptive Audio is a feature that you can selectively enable, and the way you enable it depends on the device. Let us look at the steps, starting with the iPhone or iPad:
iPhone / iPad
Connect your AirPods 4 with ANC to your iPhone or iPad.
Open Settings.
Directly beneath your Apple Account, tap on your AirPods 4 with ANC.
Tap on the "Adaptive" option to enable Adaptive Audio.
macOS
Connect your AirPods 4 with ANC to your Mac.
Open System Settings.
Scroll down to the bottom and select your AirPods 4 with ANC.
Click on the "Noise Control" dropdown.
Select the "Adaptive" option to enable Adaptive Audio.
It is entirely understandable that you might be skeptical about just how well Adaptive Audio works, but from my experience, it works extremely well. As an example, I went out for a walk with my AirPods 4 with ANC, and a train went by without my being immediately aware of it, because Adaptive Audio removed most of the noise from the train. It did not remove all of the noise; if I listened very closely, I could still hear it. To verify what was happening, I slightly pulled out one of my AirPods, not removing it entirely, and could immediately hear the train clearly. Admittedly, Active Noise Cancellation could accomplish the same thing.
Later during my walk, I experienced exactly how well Adaptive Audio was working. I live in a suburban area, so there are constantly cars driving by. When cars passed, the volume of my audio would adjust ever so slightly, and the noise cancellation would kick in to block out most of the sound from the cars. Adaptive Audio does not only function outside your home; if one of your appliances is loud, say the dishwasher or washing machine, it can filter out that noise while you still hear your audio.
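As an illustration of what an adaptive mode has to do, here is a hedged sketch that blends between transparency and cancellation based on the measured ambient level. The decibel breakpoints and the linear blend are assumptions of mine, not Apple's actual tuning:

```python
def noise_control_strength(ambient_db, quiet_db=45.0, loud_db=80.0):
    """Return a blend from 0.0 (fully transparent) to 1.0 (full
    cancellation) based on ambient loudness in dB."""
    if ambient_db <= quiet_db:
        return 0.0
    if ambient_db >= loud_db:
        return 1.0
    # Linearly ramp up cancellation between the two breakpoints.
    return (ambient_db - quiet_db) / (loud_db - quiet_db)

# A quiet room, a passing car, and a passing train.
for db in (40, 60, 85):
    print(f"{db} dB -> strength {noise_control_strength(db):.2f}")
```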
I am not sure if I will end up using Adaptive Audio when I am at home because, as previously mentioned, I typically use my Beats Studio Pros, but I will definitely be using Adaptive Audio when I am out and about. There is another feature to discuss. This one is called Conversation Awareness.
Conversation Awareness
There may be situations where you want to interact with others without needing to manually adjust the volume on your AirPods. With Conversation Awareness, this is done automatically on your behalf. It works by recognizing when you are talking to someone; the audio you are listening to automatically ducks, or is reduced, so you can hold the conversation, and then returns to its previous volume.
I do not have many opportunities to test this feature, given that I usually remove my AirPods when I talk to someone, but you can test it by simply talking out loud. The feature functions as expected. One thing to note is that it takes approximately 5 seconds for the audio to return to its previous volume. It is a good feature, and should I ever be in a situation where it is needed, it will be useful to have.
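The observed behavior suggests a simple rule: duck while speech is detected, then hold the duck for about five seconds of silence before restoring. Here is a hypothetical sketch of that logic; the function, duck factor, and delay handling are assumptions based only on the behavior described above:

```python
import time

RESTORE_DELAY = 5.0  # seconds of silence before volume returns

def ducked_volume(normal, is_speaking, last_speech_time, duck_factor=0.25):
    """Return the playback volume given the current speech state."""
    if is_speaking:
        return normal * duck_factor
    if time.monotonic() - last_speech_time < RESTORE_DELAY:
        # Recently stopped talking: hold the duck a little longer.
        return normal * duck_factor
    return normal

# Example: volume stays ducked right after a conversation ends.
last = time.monotonic()
print(ducked_volume(1.0, True, last))   # 0.25 while speaking
print(ducked_volume(1.0, False, last))  # still 0.25 just after
```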
Closing Thoughts
The AirPods 4 with ANC blend features of the 3rd generation AirPods with features of the AirPods Pro. They include noise cancellation, the first time that feature has been available in an open-ear headphone from Apple. The Active Noise Cancellation will not be nearly as good as on the AirPods Pro, which is to be expected: truly excellent Active Noise Cancellation requires a good seal with your ear, and you cannot get that with the AirPods 4 with ANC. Even without a solid seal, the Active Noise Cancellation is still really good and will be sufficient for most people in many situations.
Beyond the Active Noise Cancellation, the new Adaptive Audio feature works tremendously well. Adaptive Audio will automatically adjust the level of noise cancellation depending on the external noise for where you are. This is done automatically for you, and you do not even need to adjust the volume of the audio, because that will be handled as well, if it is needed.
For many, the 2nd generation AirPods were a fantastic fit, while the 3rd generation AirPods did not fit well at all. The AirPods 4 with ANC might fit your ears a bit better. For me, they are not as good as the 2nd generation AirPods, but they are definitely an improvement on the 3rd generation. This is because the AirPods 4 with ANC have a new shape, one that blends the 2nd and 3rd generation designs into a more svelte form; the portion that sits in the ear is a bit smaller and should fit even more people's ears than the 3rd generation did. If you fall into this group, definitely give the AirPods 4 with ANC a look, because they could be what you are looking for.
The AirPods themselves are not the only change; the case has received a solid update as well. One of the big changes is the addition of a speaker to the case, so you can more easily find your AirPods 4 with ANC; the sound it plays is loud, clear, and easy to hear. The other changes include USB-C instead of Lightning and, if you prefer wireless charging, support for both MagSafe and the Apple Watch charger, which can make things more convenient.
Overall, the AirPods 4 with ANC are just about the perfect mix of the standard AirPods and the AirPods Pro. You get Noise Cancellation in an open-ear design. The new shape improves the bass and overall sound of the AirPods 4. They are a solid choice if you want to upgrade your AirPods, or if you want to purchase AirPods for the first time.
Each year Apple releases new versions of its operating systems. This year’s releases of macOS Sequoia, iOS 18, iPadOS 18, and watchOS 11 are full of features including customization options, a new way to manage passwords, and some changes for messages.
On iOS 18 and iPadOS 18 there are new customization options for your Home Screens, including placing apps wherever you would like, a new dark mode for icons, and you even have the ability to tint all of your icons. Within Messages you can add some effects to your text, including adding bold, italics, underline, or strikethrough. But that is not all. You can also add motion effects that will give even more animation to your text. For those that communicate with those on another platform, RCS is now supported as well.
watchOS 11 adds some new features, like time- and location-based widgets appearing in the Smart Stack. Along with this, there is now a Vitals app that you can use to get a quick glance at overnight metrics like heart rate, blood oxygen, and sleep duration. Fitness is an important aspect of the Apple Watch, and you can now pause your rings and maintain your streaks. This is a good addition for those times when life gets in the way and you want to keep your streaks going.
Apple has introduced their take on artificial intelligence and this can help you create your own emoji, rewrite text, and even generate images. For anything that needs to be sent to the cloud, it will go to Apple’s Private Cloud Compute platform.
These are just some of the topics covered in macOS Sequoia, iOS 18, iPadOS 18, and watchOS 11 for Users. There is bound to be something for everyone, no matter your level of expertise.