Today, Pixelmator has announced that it has agreed to be acquired by Apple. From the brief posting:
Today we have some important news to share: the Pixelmator Team plans to join Apple.
We’ve been inspired by Apple since day one, crafting our products with the same razor-sharp focus on design, ease of use, and performance. And looking back, it’s crazy what a small group of dedicated people have been able to achieve over the years from all the way in Vilnius, Lithuania. Now, we’ll have the ability to reach an even wider audience and make an even bigger impact on the lives of creative people around the world.
Regarding any immediate changes, the post states:
Pixelmator has signed an agreement to be acquired by Apple, subject to regulatory approval. There will be no material changes to the Pixelmator Pro, Pixelmator for iOS, and Photomator apps at this time. Stay tuned for exciting updates to come.
My Thoughts
This could be huge in many respects. I suspect there are two possible outcomes. The first is that once the deal closes, many of Pixelmator’s features could be incorporated into Apple’s own Photos app. Furthermore, I could see Apple utilizing Pixelmator as a means of testing early Apple Intelligence features, particularly within the Photomator app, given that the purpose of that app is to allow you to edit your photos in a non-destructive manner. With this approach, they could test new AI features faster before incorporating them into the main Photos app.
The second outcome is a bit different. There are other companies, particularly Adobe, which have artificial intelligence photo enhancement tools already incorporated into their products. Apple likely needs something that can compete. While Apple could absolutely build something, it would take some time. It would be faster to acquire an existing product, and Pixelmator is likely that product.
I can honestly see Pixelmator and Photomator quickly becoming the new "Image Playground" apps. It would undoubtedly be an undertaking to incorporate Apple’s image generation tools into Pixelmator and/or Photomator, but that would likely be far less of an expense than building out their own app entirely. I could then easily see Apple providing these two apps for free with basic features, while keeping the subscriptions for Pixelmator and/or Photomator as the basis for more advanced photo features powered by Apple Intelligence.
Undoubtedly, it will be interesting to see how Apple incorporates the apps into their own product suite, or what they end up doing with Pixelmator in the long run.
Technology is consistently entertaining new crazes. Some examples include blockchain, subscription juicers, netbooks, 3D televisions, hyperloop, and "hoverboards", just to name a handful. All of these were going to be "the next big thing", but none of them have panned out as the inventors intended.
There has been a term bandied about that people think may be the end-all for computers. That term is "Artificial Intelligence", or "AI". The term "AI" can mean a variety of different things, depending on whom you ask. However, when most use the term AI, what they are expecting is a fully conscious and sentient entity that can think, act, and rationalize as a human would. This is called "Artificial General Intelligence". Today's technology is nowhere close to making this a reality. It is not yet known whether or not Artificial Intelligence will actually live up to its ultimate expectations.
Apple is not known for jumping on bandwagons or being the first to create new categories of technology; they typically leave that to others. However, if there is a technology that they can put their own spin on, they might do so. At their Worldwide Developers Conference in 2024, they introduced one of these technologies, called "Apple Intelligence".
Apple Intelligence is not a single item; in fact, it goes against the grain of other AI assistants by working only on your data. Apple Intelligence consists of a variety of tools to help you accomplish specific tasks. When introduced, Apple indicated that the initial features of Apple Intelligence would be released over the course of the iOS/iPadOS 18 and macOS Sequoia releases.
The items that comprise Apple Intelligence include: Writing Tools, Image Generation, and Personalized Requests. Initially, Apple wanted to have the first items available with iOS 18; however, during the beta, Apple realized that the features would not be far enough along for an initial iOS/iPadOS 18.0 and macOS Sequoia (15.0) release, so they were pushed to iOS/iPadOS 18.1 and macOS Sequoia 15.1.
Not every device that can run iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1 is able to support Apple Intelligence. To be able to run Apple Intelligence you need to have one of the following devices:
iPhone 16/Plus (A18)
iPhone 16 Pro/Pro Max (A18 Pro)
iPhone 15 Pro/Pro Max (A17 Pro)
iPad mini (A17 Pro or later)
iPad Air (M1 or later)
iPad Pro (M1 or later)
Apple Silicon Mac (M1 or later)
The reason these devices are the minimum is the combination of needing at least 8GB of memory as well as a Neural Engine.
This article is part of an ongoing series that covers the features of Apple Intelligence as they become available. This article focuses on the Apple Intelligence feature called "Hide Distracting Items" within Safari.
The Modern Web
It is hard to imagine today's modern world without the internet. It is entirely plausible that modern society would look incredibly different without it. When the internet began, it was used merely as a means of sharing information, mostly by the U.S. government and universities. Of course, this would not last, and not long after the internet was created, regular users began joining it.
When non-academics and non-government people joined the internet, they began communicating over bulletin-board systems and creating their own webpages and sites. If you were online in the 1990s, it was a common refrain to hear "do not put your credit card into a site on the internet". Today, though, it is commonplace to do just that.
Running a website is not free; it must be paid for in some manner. There are a variety of ways of supporting a website. Sometimes it is with a direct payment, and other times sites are supported with donations. However, the most common method of generating revenue for a website is through ads. But ads are not the only items you will encounter while on the web.
Distracting Items
There are those sites that care for their visitors and actually attempt to minimize the distractions that their visitors encounter. But there are an increasing number of sites that will absolutely bombard you with a variety of items. These can include:
Ads
Autoplay videos
Newsletter sign-up prompts
Cookie popups
Third-party sign-in prompts
"Use the app" banners
And these are just a few. It is quite possible that you might encounter one, or all, of these on a site, and they can be quite distracting. With iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, there is a new feature that can help, at least within Safari.
Hiding Distracting Items
With Safari in iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, you can now use a feature called "Hide Distracting Items". The "Hide Distracting Items" feature is designed to, as the name indicates, hide distracting items on various webpages. This is not the same as a Content Blocker, but it can work in a similar manner.
Hide Distracting Items requires that you indicate which items are distracting, but this is a pretty straightforward process. To enable Hide Distracting Items, perform the following steps:
Open Safari.
Navigate to the website where you want to hide items.
Tap, or click, on the square-and-three-lines icon in the URL bar. This should bring up a menu.
Tap, or click, on "Hide Distracting Items".
When you tap on Hide Distracting Items, an overlay will be shown. This overlay will highlight various elements on the page. You can click on the "Hide" button to confirm that you want to hide an element. You can select any number of items that you would like to hide, and they should be hidden.
Once you have completed selecting the items that you want to hide, be sure to click, or tap, on the "Done" button to save your changes. You can also tap, or click, on "Cancel" to not save your changes.
Showing All Hidden Items
In the event that you accidentally end up hiding too many items and you have saved the changes, you can show all of the previously hidden items by using the following steps:
Open Safari.
Navigate to the page you want to show the hidden items on.
Tap, or click, on the square-and-three-lines icon in the URL bar. This should bring up a menu.
Tap, or click, on the "Show Hidden Items" button.
Once you click on "Show Hidden Items", all previously hidden items will be shown. It should be noted that this will show ALL previously hidden items, not just from the latest session, but any element you hid. It is not an ideal situation to have to show all hidden items, but it is quite useful should you accidentally hide too many items.
Caveats
The Hide Distracting Items feature is pretty simple to use, but it is not always 100% correct. As an example, you could be attempting to hide a rather egregious ad on a webpage, only to have another ad appear in its place. This happens because of the nature of the Hide Distracting Items feature. It will do its best to consistently hide the items, but if the element id changes between page loads, it might not always hide the element.
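For comparison, here is a minimal sketch, in Swift, of how a traditional WebKit content blocker hides an element: a fixed rule with a CSS selector is compiled into a rule list and attached to a web view. The selector "#sidebar-ad" and the identifier are hypothetical and purely illustrative; Apple has not said that Hide Distracting Items works this way, and its interactive, per-element approach is exactly why a changed element id can defeat it.

    import WebKit

    // Hypothetical content blocker rule: hide an element by a fixed CSS selector.
    let rules = """
    [
      {
        "trigger": { "url-filter": ".*" },
        "action": { "type": "css-display-none", "selector": "#sidebar-ad" }
      }
    ]
    """

    WKContentRuleListStore.default().compileContentRuleList(
        forIdentifier: "HideSidebarAd",
        encodedContentRuleList: rules
    ) { ruleList, error in
        guard let ruleList else {
            print("Failed to compile rules: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        // Any WKWebView created with this configuration hides the matching element,
        // but only for as long as the selector keeps matching between page loads.
        let configuration = WKWebViewConfiguration()
        configuration.userContentController.add(ruleList)
    }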
Closing Thoughts on Hide Distracting Items
The modern web is chock-full of ads, popups, and general distractions. It has not always been this way, but many are reluctant to pay for content, and instead of paying with money, you pay with attention and data. Apple has added a new feature to help cut down on those distractions; it is called Hide Distracting Items.
With "Hide Distracting Items" you can hide any element on a website. This could be an ad, a popup, or any other distracting item. This works in most situations, but it is foolproof and sometimes items that you have hidden will appear again. If you do manage to accidentally hide some elements on a webpage, you can undo all of them in one fell swoop.
Even though the feature does not work 100% of the time, it does work a majority of the time, so it may be worth exploring for those sites that are egregious with their ads and popups.
Be sure to check out all of the other articles in the series:
Technology is consistently entertaining new crazes. Some examples include blockchain, subscription juicers, netbooks, 3D televisions, hyperloop, and "hoverboards", just to name a handful. All of these were going to be "the next big thing", but none of them have panned out as the inventors intended.
There has been a term bandied about that people think may be the end-all for computers. That term is "Artificial Intelligence", or "AI". The term "AI" can mean a variety of different things, depending on whom you ask. However, when most use the term AI, what they are expecting is a fully conscious and sentient entity that can think, act, and rationalize as a human would. This is called "Artificial General Intelligence". Today's technology is nowhere close to making this a reality. It is not yet known whether or not Artificial Intelligence will actually live up to its ultimate expectations.
Apple is not known for jumping on bandwagons or being the first to create new categories of technology; they typically leave that to others. However, if there is a technology that they can put their own spin on, they might do so. At their Worldwide Developers Conference in 2024, they introduced one of these technologies, called "Apple Intelligence".
Apple Intelligence is not a single item; in fact, it goes against the grain of other AI assistants by working only on your data. Apple Intelligence consists of a variety of tools to help you accomplish specific tasks. When introduced, Apple indicated that the initial features of Apple Intelligence would be released over the course of the iOS/iPadOS 18 and macOS Sequoia releases.
The items that comprise Apple Intelligence include: Writing Tools, Image Generation, and Personalized Requests. Initially, Apple wanted to have the first items available with iOS 18; however, during the beta, Apple realized that the features would not be far enough along for an initial iOS/iPadOS 18.0 and macOS Sequoia (15.0) release, so they were pushed to iOS/iPadOS 18.1 and macOS Sequoia 15.1.
Not every device that can run iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1 is able to support Apple Intelligence. To be able to run Apple Intelligence you need to have one of the following devices:
iPhone 16/Plus (A18)
iPhone 16 Pro/Pro Max (A18 Pro)
iPhone 15 Pro/Pro Max (A17 Pro)
iPad mini (A17 Pro or later)
iPad Air (M1 or later)
iPad Pro (M1 or later)
Apple Silicon Mac (M1 or later)
The reason these devices are the minimum is the combination of needing at least 8GB of memory as well as a Neural Engine.
This article is part of an ongoing series that covers the features of Apple Intelligence as they become available. This article focuses on the Apple Intelligence feature called "Clean Up".
Photo Editing History
There is an old adage that goes "a picture is worth 1000 words"; however, the original quote is by newspaper editor Arthur Brisbane, who said "Use a picture. It's worth a thousand words". The quote is from 1911, back when newspapers were the prime method of obtaining news, and with the limited amount of space, a picture could easily take the place of 1000 words. The sentiment of either quote is that it would take about a thousand words to adequately describe a scene, when a single photo is able to convey the same thing.
Fast forward to today, and everything has changed. Written text is still important, but it has been supplanted not only by photos, but also by video. We are nearly two centuries from when the first photograph, "View from the Window at Le Gras", was taken. We have come a long, long way since then. Today's technology can easily take multiple pictures per second when you are using burst mode on a camera.
When film cameras became popular, you would take a photo in the hopes that you would get a usable photo. You would not know right away, because you would need to send your film off to be developed and processed. Once the film was processed, there was typically not a lot that you could do with the photo. That is not to say that some people did not manipulate photos, because of course they did, but it was a skill and not something easily accomplished.
Nearly 35 years ago, a new piece of software was released. That software is called Photoshop. It is quite likely that you have heard of Photoshop, but in case you have not, Photoshop is software created by Adobe that allows you not only to create images, but also to edit photos. It is this latter functionality that many use the software for. Photoshop is not an easy piece of software to use, at least not for the average user. There are millions who are quite proficient with the software (the author of this post is absolutely not one of them).
While it is no longer necessary to hope that you got a good photo, there may still be instances when you want to make some modifications to a photo but do not have the skills to use an app like Photoshop. For these situations, you can use a feature within Photos called "Clean Up".
Clean Up
Clean Up is a new tool that can be used to remove various items from a photo. The Clean Up tool can be found within the editing functions of the Photos app. To access the Clean Up tools, perform the following steps:
Open the Photos app.
Locate the photo that you want to use Clean Up on.
Click on the "Edit" button.
Click on the "Clean Up" button to bring up the Clean Up tools.
Once you bring up a photo, you will have a sidebar that says "Clean Up". Here you will have a single option: the size of the brush. You can adjust the size of the brush by clicking and dragging along the slider. The further right you go, the bigger the brush.
When you bring up a photo for editing, you may notice some items flashing. These flashing objects indicate what Photos thinks you may want to remove. Sometimes, it is correct; other times, it may not be. Let us look at an example.
In the photo below, you will see that it contains a white car, some garbage cans, and a folded chair. In the screenshot, you will see that the garbage cans and the car are automatically selected.
If you double-click on any of the highlighted items, they will be removed and their background will be replaced. Here is an example of what that might look like.
-- INSERT SCREENSHOT OF GARBAGE CANS "CLEANED UP" --
Now, you may initially think "Oh, that's pretty good", and at first blush it might be. However, if you look at it more closely, it does not work all that well. As an example, the grass has been extended onto the street. At the same time, the street has been extended onto the grass. This is not accurate at all.
The thing that I find the most aggravating is that you can clearly see a curb circling around behind the garbage cans, yet it has been completely removed from the area that is plainly visible. It is somewhat understandable that the area behind the garbage cans, which cannot be seen, is filled in improperly, but the area that is visible should not really be touched.
Let us look at another example.
In this second photo, you can see a squirrel just chilling on the railing of a deck. Let us say that you want to remove the backing of the chair in the lower portion of the photo. It is the area that is highlighted.
Now, if you remove the chair, you will get something like this:
This is an infinitely better photo. The stiles of the railing on the deck are correct, and it does look very close to what you might expect. The only item that I noticed was that the filled-in area along the far right of the photo is not correct. However, it does make sense given that it does not have any information to fill in that area, besides the dirt at the top of the railing.
Closing Thoughts on Clean Up
Clean Up is a good idea, but a tool that provides mixed results. In some cases, the results are good and acceptable. However, there are also those instances where it does not work all that well. Ultimately, it depends on the image and what you are trying to clean up as to whether the proper item(s) will be removed. Hopefully, Apple is able to improve the way that this functionality works so that it functions as expected.
Be sure to check out all of the other articles in the series:
Technology is consistently entertaining new crazes. Some examples include blockchain, subscription juicers, netbooks, 3D televisions, hyperloop, and "hoverboards", just to name a handful. All of these were going to be "the next big thing", but none of them have panned out as the inventors intended.
There has been a term bandied about that people think may be the end-all for computers. That term is "Artificial Intelligence", or "AI". The term "AI" can mean a variety of different things, depending on whom you ask. However, when most use the term AI, what they are expecting is a fully conscious and sentient entity that can think, act, and rationalize as a human would. This is called "Artificial General Intelligence". Today's technology is nowhere close to making this a reality. It is not yet known whether or not Artificial Intelligence will actually live up to its ultimate expectations.
Apple is not known for jumping on bandwagons or being the first to create new categories of technology; they typically leave that to others. However, if there is a technology that they can put their own spin on, they might do so. At their Worldwide Developers Conference in 2024, they introduced one of these technologies, called "Apple Intelligence".
Apple Intelligence is not a single item; in fact, it goes against the grain of other AI assistants by working only on your data. Apple Intelligence consists of a variety of tools to help you accomplish specific tasks. When introduced, Apple indicated that the initial features of Apple Intelligence would be released over the course of the iOS/iPadOS 18 and macOS Sequoia releases.
The items that comprise Apple Intelligence include: Writing Tools, Image Generation, and Personalized Requests. Initially, Apple wanted to have the first items available with iOS 18; however, during the beta, Apple realized that the features would not be far enough along for an initial iOS/iPadOS 18.0 and macOS Sequoia (15.0) release, so they were pushed to iOS/iPadOS 18.1 and macOS Sequoia 15.1.
Not every device that can run iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1 is able to support Apple Intelligence. To be able to run Apple Intelligence you need to have one of the following devices:
iPhone 16/Plus (A18)
iPhone 16 Pro/Pro Max (A18 Pro)
iPhone 15 Pro/Pro Max (A17 Pro)
iPad mini (A17 Pro or later)
iPad Air (M1 or later)
iPad Pro (M1 or later)
Apple Silicon Mac (M1 or later)
The reason these devices are the minimum is the combination of needing at least 8GB of memory as well as a Neural Engine.
This article is part of an ongoing series that covers the features of Apple Intelligence as they become available. This article focuses on the Apple Intelligence feature called "Writing Tools".
Writing Tools
As you might surmise, the written word is one of the most common forms of communication. This may have started out handwritten, but now most of today's writing is in electronic form. Often, this is via a messaging service, like SMS, iMessage, WhatsApp, or countless other messaging services. These work well for shorter messages, but for longer forms of work, there are other applications. One example is a word processor. Word processing applications have been around since the mid-1970s and have come a long way since then.
When modern computers first came about, they were quite limited and truly for hobbyists. However, as they gained traction within enterprises, their utility became more apparent. The first word processing software was called "Electric Pencil" and went on sale in 1976. The first popular word processing application was "WordStar", created by MicroPro International.
WordStar became the market leader but was not the only word processing application available. In the mid-1980s, WordPerfect started gaining traction and became quite popular during the 1980s and 90s. As you might have surmised, WordPerfect had challengers, specifically one that still dominates the market today: Microsoft Word.
If you were to attempt to create a word processor today, you would have a lot of work ahead of you. This is not just because it would be a difficult task, which it would be, but also because of the sheer number of features that one would expect. Some of these features you might be able to get right from the operating system, like printing, formatting (bold, italics, underline, strikethrough, etc.), and open/save dialog boxes. However, the remaining features would need to be built. One of those features would be spelling and grammar checking, which are staple features of any word processing application.
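As a small illustration of one of those operating-system-provided features, here is a minimal sketch, in Swift, of presenting the standard macOS save dialog. The function name and suggested file name are hypothetical; a real word processor would, of course, also write out the document data.

    import AppKit

    // Minimal sketch: ask macOS for its standard save dialog rather than building one.
    // The names here are hypothetical and for illustration only.
    func promptForSaveLocation(suggestedName: String = "Untitled.rtf") {
        let panel = NSSavePanel()
        panel.nameFieldStringValue = suggestedName
        panel.begin { response in
            guard response == .OK, let url = panel.url else { return }
            // A real word processor would write the document's data to this URL.
            print("Document would be saved to \(url.path)")
        }
    }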
Spelling correction, along with autocorrect and grammar checking, has been integrated into word processors since 1992, when Microsoft added it to Microsoft Word. While Microsoft Word has been the prominent word processing app on the Mac, it is not the only one. Apple introduced its own word processor as part of the iWork suite. This app is called Pages.
Pages has become an ever-present application that works across Apple's platforms, including macOS, iOS, iPadOS, and visionOS. As you might expect, Pages does include the ability to perform spelling and grammar checking. These work quite well, but they may not cover all situations. For other situations, the new "Writing Tools" may become useful. Let us look at those next.
Writing Tools is a set of functions that allows you to perform a number of actions. These actions include:
Proofreading
Rewriting
Summarization
Key Points
List Creation
Table Creation
Writing Tools is available system-wide in any application that supports Apple's standard controls. This is a boon in that the features are available across the operating systems. This means that you can easily use the features not only in Apple's own apps, but also in third-party apps. Before we dive into each function, let us look at how to access Writing Tools.
Invoking Writing Tools
The way that you invoke Writing Tools is quite straightforward. Simply perform the following steps:
Select the block of text you want to use Writing Tools on.
Right-click on the text.
Hover over the "Writing Tools" menu option. Alternatively,
Select the tool that you want to use.
Let us look at each of the tools in turn, starting with Proofreading.
Proofreading
When you select the "Proofread" option, the highlighted text will be checked for both spelling and grammar. When the check is complete, there will be a popup that will show you the changes that have been made, with said changes underlined in red. The popup toolbar will also have a button with three lines and a left arrow. This button will allow you to easily switch between the original text and the replaced text.
The total number of changes will be shown in a toolbar, so you know whether or not anything has been changed. Along with this, you can also switch between the individual changes, which will allow you to review each change individually. If you like the changes, you can click on the "Done" button; however, if you do not like the changes, you can click on the "Revert" button, and the changed text will be reverted.
Writing Styles
There may be occasions when you want to adjust the tone of some text. This could be because your writing style is a bit relaxed and you need something a bit more professional, or it could be that you think the text needs to be a bit more user-friendly. There is a feature designed just for this type of situation. You can convert text into three different styles: Friendly, Professional, or Concise.
The manner in which this is accomplished is similar to the other Writing Tools; you perform the following steps:
Select the block of text that you want to convert.
Right-click on the text.
Select the "Writing Tools" menu item.
Select the writing style you want to use.
Just like Proofreading, you will be able to see the changes made and flip back and forth between the versions. Writing Tools is able to perform a few more actions, like List Creation.
Create a List
Being able to proofread and change the writing style of the text is quite useful. Yet, there may be times when you wish to be able to change some text around. As an example, you may have some steps that you initially thought might be concise enough to have in a paragraph, but then realize it would be better to have it as a numbered list. Let us say that you have the following text as instructions:
Select the text you want to convert, right-click on the text to bring up the menus, click on the "Writing Tools" menu item, select the "Make List" option.
This would be easy enough to follow, but it would look better as a numbered list. To accomplish this, you can actually use the above steps and it should result in something like this:
Select the text
Right-click on the text
Bring up the menus
Click on "Writing Tools"
Select "Make List"
Now, this is not exactly what was intended. Therefore, you would need to convert it to a numbered list. If you use Notes, this is easy enough to accomplish by going to "Format" -> "Numbered List", and it will be converted for you. This is currently a limitation of Apple Intelligence: it can only make bulleted lists. I hope that there will be a future option to select the type of list to create.
Summarization
When you create a large body of text you may also want to be able to quickly provide a brief overview. You can easily write out a brief summary. This approach might work well for a couple of pages, but if you have a 10-page item, it might be nicer to have it summarized for you. This is entirely possible to do with Writing Tools. To summarize some text, perform the following steps:
Select the text you want to summarize.
Right-click on the text to bring up the menu.
Select the "Writing Tools" menu item.
Select "Summarize".
I performed a test using my introduction article about Apple Intelligence. That article is just over 4700 words and 228 paragraphs. Apple Intelligence reduced the entire article down to the following:
Artificial Intelligence (AI) aims to create machines that can think and act like humans, but current technology is far from achieving this. AI systems use neural networks to process data and make decisions, with training methods like supervised and reinforcement learning helping them learn and improve. Despite its potential, AI has yet to meet the idealistic depiction of fully conscious machines, and its use cases vary from automated cleaning to image generation.
Artificial Intelligence (AI) is a tool that can be used for both positive and negative purposes. Large Language Models (LLMs) and Image Generators are two examples of AI technologies that can be used for various tasks, including generating text and images. Apple has been working on its own AI technologies, known as Apple Intelligence, which prioritizes privacy by processing requests on-device or on Apple’s Private Cloud Compute platform.
Apple’s Private Cloud Compute service protects user data through target diffusion, which anonymizes requests and prevents replay attacks. Apple Intelligence, powered by Private Cloud Compute, will be available on select devices starting in late 2024, with some features not available until 2025.
Apple Intelligence requires Apple Silicon Macs, iPads with M1 or newer, and iPhones 15 Pro or Pro Max or newer.
Given everything that I wrote in that article, I do not think that the summary is all that good. It is missing some key information, but then again, maybe I would simply have chosen different points to summarize.
Table Creation
From time to time, you may have some data in a format that would look better in a table. Here is an example of some data that was used within my iPhone 16 Pro Max review.
Device                        Chip        CPU Single-Core    CPU Multi-Core    GPU (Metal)
iPhone 16 Pro Max (2024)      A18 Pro     3497               8581              32822
12.9-inch iPad Pro (2024)     M4          3585               12603             55769
iPhone 15 Pro Max (2023)      A17 Pro     2749               6713              27661
14-inch MacBook Pro (2023)    M2 Max      2707               15148             127761
Mac Studio (2022)             M1 Max      2439               12825             103224
6th generation iPad (2021)    A15 Bionic  2157               5285              20183
Mac mini (2020)               M1          2394               8810              34575
When I attempted to create a table from the data, this is what was previewed:
As you can see, Apple Intelligence completely missed the mark. It added a column that was not present, the header row seemed to be duplicated, and the first row of data was ignored. When it was not formatted properly, I thought that replacing the tabs with commas might fix the formatting, but the result was the same.
I then thought that maybe there were too many rows, so I opted to only use three rows of data. When I did that, I got the following popup:
The fact that the table could not be created properly, and that it does not seem to understand that the text I have is in English, means that, at least as of this writing, the "Make Table" functionality is not helpful or useful in any way.
---
Closing Thoughts on Writing Tools
The new Apple Intelligence Writing Tools can be useful in some situations, but not all. If you need to proofread a block of text, Writing Tools will accomplish the task. The same goes for making a list, provided that you want a bulleted list, and not a numbered one.
Writing Tools is able to rewrite a block of text using one of three styles: friendly, professional, or concise, depending on your needs.
Writing Tools is available in any application that uses Apple's standard controls, like Pages, Notes, and even Xcode. However, it is not limited to Apple's own apps; any third-party app that uses a text field should also have access to Writing Tools.
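As a rough illustration, here is a minimal SwiftUI sketch of a third-party text view that would pick up Writing Tools simply by using a standard control. The view name is hypothetical, and the writingToolsBehavior modifier is my assumption of how an app would opt in to the full experience; a plain TextEditor should already surface Writing Tools in the edit menu on supported systems.

    import SwiftUI

    // Hypothetical third-party editor: because TextEditor is a standard system
    // control, selecting text in it should surface Writing Tools on supported
    // OS versions without any extra work.
    struct NoteEditor: View {
        @State private var draft = ""

        var body: some View {
            TextEditor(text: $draft)
                .padding()
                // Assumption: explicitly requesting the full Writing Tools
                // experience; omitting this line leaves the default behavior.
                .writingToolsBehavior(.complete)
        }
    }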
Apple Intelligence should be available on iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, on any device that has an M1 or newer, as well as the iPhone 15 Pro/Pro Max and iPhone 16/Plus/Pro/Pro Max.
Be sure to check out all of the other articles in the series:
Today, Apple has unveiled the final new release related to the Mac, this time the MacBook Pro. As expected, the new MacBook Pros have the M4, the M4 Pro, and the newly unveiled M4 Max.
Display and Camera
At the top of the display is the notch and within the notch is the camera. There is a new 12 Megapixel Center Stage camera. Center Stage is intended to keep you and everyone else around you in frame as much as possible. This camera also supports Desk View, so you can display what is happening on your physical desktop while in a FaceTime call.
The display on the MacBook Pro is a Liquid Retina XDR display. It has always come with a glossy finish, but that now changes. There is now a Nano Texture option. Much like the other Nano Texture displays, this is designed to reduce glare in bright light situations. This will cost an extra $150, but if you are frequently in areas with bright light, it might be worth looking at.
M4, M4 Pro, and M4 Max
The MacBook Pros are powered by Apple Silicon and can be configured with three different processors, the M4, the M4 Pro, and the M4 Max. There are a few configuration options for each model.
M4
The M4 comes in a 10-core CPU and 10-core GPU model. This can be configured with 16GB, 24GB, or 32GB of memory. The base model comes with 512GB of storage, and this can be configured with either 1TB or 2TB of storage. The maximum memory bandwidth for the M4 is 120 gigabytes per second.
According to Apple, the MacBook Pro with M4 delivers:
- Up to 7x faster image processing in Affinity Photo when compared to the 13‑inch MacBook Pro with Core i7, and up to 1.8x faster when compared to the 13-inch MacBook Pro with M1.
- Up to 10.9x faster 3D rendering in Blender when compared to the 13‑inch MacBook Pro with Core i7, and up to 3.4x faster when compared to the 13‑inch MacBook Pro with M1.
- Up to 9.8x faster scene edit detection in Adobe Premiere Pro when compared to the 13‑inch MacBook Pro with Core i7, and up to 1.7x faster when compared to the 13‑inch MacBook Pro with M1.
M4 Pro
The M4 Pro comes in two variants. The first is a 12-core CPU with a 16-core GPU, and the second is a 14-core CPU with a 20-core GPU. Both models come with 24GB of unified memory and can be configured with 48GB. The M4 Pro models come with 512GB of storage and can be configured with 1TB, 2TB, or 4TB of storage. The maximum memory bandwidth for the M4 Pro is 273 gigabytes per second.
According to Apple, the MacBook Pro with M4 Pro delivers:
- Up to 4x faster scene rendering performance with Maxon Redshift when compared to the 16-inch MacBook Pro with Core i9, and up to 3x faster when compared to the 16-inch MacBook Pro with M1 Pro.
- Up to 5x faster simulation of dynamical systems in MathWorks MATLAB when compared to the 16-inch MacBook Pro with Core i9, and up to 2.2x faster when compared to the 16-inch MacBook Pro with M1 Pro.
- Up to 23.8x faster basecalling for DNA sequencing in Oxford Nanopore MinKNOW when compared to the 16-inch MacBook Pro with Core i9, and up to 1.8x faster when compared to the 16-inch MacBook Pro with M1 Pro.
M4 Max
The M4 Max is a new chip, unveiled today. Much like the M4 Pro, the M4 Max comes in two variants. The first is a 14-core CPU with a 32-core GPU. This can only be configured with 36GB of unified memory. This memory has a maximum bandwidth of 410 gigabytes per second, which is nearly 3.5x the memory bandwidth of the M4 and 1.5x that of the M4 Pro.
The second variant is a 16-core CPU with a 40-core GPU. This starts at 48GB of unified memory but can be configured with 96GB or 128GB. The memory bandwidth in this model is 546 gigabytes per second, which is roughly 4.5x that of the M4, 2x that of the M4 Pro, and 1.33x that of the 14-core M4 Max version.
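To put those comparisons in context, here is a quick arithmetic check of the ratios using only the bandwidth figures quoted above; the variable names are purely illustrative.

    // Quick sanity check of the memory bandwidth comparisons, in Swift,
    // using the figures quoted above (in gigabytes per second).
    let m4 = 120.0
    let m4Pro = 273.0
    let m4Max14Core = 410.0
    let m4Max16Core = 546.0

    print(m4Max14Core / m4)            // ~3.4x the M4
    print(m4Max14Core / m4Pro)         // ~1.5x the M4 Pro
    print(m4Max16Core / m4)            // ~4.55x the M4
    print(m4Max16Core / m4Pro)         // 2.0x the M4 Pro
    print(m4Max16Core / m4Max14Core)   // ~1.33x the 14-core M4 Max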
Both M4 Max variants come with 1TB of storage, but can be configured for 2TB, 4TB, or even 8TB of storage, depending on needs.
And the MacBook Pro with M4 Max enables:
- Up to 7.8x faster scene rendering performance with Maxon Redshift when compared to the 16-inch MacBook Pro with Intel Core i9, and up to 3.5x faster when compared to the 16-inch MacBook Pro with M1 Max.
- Up to 4.6x faster build performance when compiling code in Xcode when compared to the 16‑inch MacBook Pro with Intel Core i9, and up to 2.2x faster when compared to the 16‑inch MacBook Pro with M1 Max.
- Up to 30.8x faster video processing performance in Topaz Video AI when compared to the 16‑inch MacBook Pro with Intel Core i9, and up to 1.6x faster when compared to the 16-inch MacBook Pro with M1 Max.
Connectivity and Ports
Similar to the M4 Mac mini, there is a difference in ports between the M4 and the M4 Pro, not in the number, but in the type of USB-C ports. For the M4, you get three Thunderbolt 4 ports, up to 40 gigabits per second, while the M4 Pro and M4 Max devices come equipped with three Thunderbolt 5 ports, up to 120 gigabits per second. This is the same setup as the Mac mini with M4 and M4 Pro.
The number of displays supported varies depending on the chip. The M4 and M4 Pro can support up to two external displays up to 6K at 60Hz over Thunderbolt, or one display up to 6K at 60Hz over Thunderbolt and one display up to 4K at 144Hz over HDMI. The HDMI port is also capable of supporting one display at 8K resolution at 60Hz, or one display at 4K at 240Hz.
The M4 Max can drive up to four external displays: three displays up to 6K at 60Hz over Thunderbolt, and one at 4K up to 144Hz over HDMI. Alternatively, you can have two external displays up to 6K resolution at 60Hz over Thunderbolt, and one external display up to 8K resolution at 60Hz, or one display up to 4K at 240Hz, on the HDMI port.
Along with the Thunderbolt ports, you also get an SDXC card reader, a dedicated HDMI port, and a 3.5mm headphone jack.
The Wi-Fi in all models is Wi-Fi 6E and support for Bluetooth 5.3 is also included.
Pricing and Availability
The M4 MacBook Pro comes in the same two sizes of 14-inch and 16-inch. The pricing differs for each model and chip. For the 14-inch you can get an M4 model starting at $1599. The M4 Pro model starts at $1999, and the M4 Max starts at $3199.
The 16-inch starts at $2499 for the M4 Pro with 14-Core CPU, 20-Core GPU, 24GB of unified memory, and 512GB of storage. The 16-inch M4 Max version starts at $3499 for a 14-core CPU with a 32-Core GPU, 36GB of unified memory, and 1TB of storage.
All of the M4 line of MacBook Pros are available to order today and will be available starting November 8th.
Closing Thoughts
The MacBook Pros continue to be the workhorses of Apple's laptop lineup. Many users do a ton of work on these devices, and now, with M4 processors, they should be able to accomplish even more than before. The new M4 Max adds even more horsepower to the laptops, and these are welcome upgrades. The lineup is a bit strange, but for today's modern Apple, it makes sense because it is not too dissimilar to the iPhone Pro line of devices. If you have an Intel-based MacBook Pro, now would be a great time to upgrade.
Technology is consistently entertaining new crazes. Some examples include blockchain, subscription juicers, netbooks, 3D televisions, hyperloop, and "hoverboards", just to name a handful. All of these were going to be "the next big thing", but none of them have panned out as the inventors intended.
There has been a term bandied about that people think may be the end-all for computers. That term is "Artificial Intelligence", or "AI". The term "AI" can mean a variety of different things, depending on whom you ask. However, when most use the term AI, what they are expecting is a fully conscious and sentient entity that can think, act, and rationalize as a human would. This is called "Artificial General Intelligence". Today's technology is nowhere close to making this a reality. It is not yet known whether or not Artificial Intelligence will actually live up to its ultimate expectations.
Apple is not known for jumping on bandwagons or being the first to create new categories of technology; they typically leave that to others. However, if there is a technology that they can put their own spin on, they might do so. At their Worldwide Developers Conference in 2024, they introduced one of these technologies, called "Apple Intelligence".
Apple Intelligence is not a single item; in fact, it goes against the grain of other AI assistants by working only on your data. Apple Intelligence consists of a variety of tools to help you accomplish specific tasks. When introduced, Apple indicated that the initial features of Apple Intelligence would be released over the course of the iOS/iPadOS 18 and macOS Sequoia releases.
The items that comprise Apple Intelligence include: Writing Tools, Image Generation, and Personalized Requests. Initially, Apple wanted to have the first items available with iOS 18; however, during the beta, Apple realized that the features would not be far enough along for an initial iOS/iPadOS 18.0 and macOS Sequoia (15.0) release, so they were pushed to iOS/iPadOS 18.1 and macOS Sequoia 15.1.
Not every device that can run iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1 is able to support Apple Intelligence. To be able to run Apple Intelligence you need to have one of the following devices:
iPhone 16/Plus (A18)
iPhone 16 Pro/Pro Max (A18 Pro)
iPhone 15 Pro/Pro Max (A17 Pro)
iPad mini (A17 Pro or later)
iPad Air (M1 or later)
iPad Pro (M1 or later)
Apple Silicon Mac (M1 or later)
The reason these devices are the minimum is the combination of needing at least 8GB of memory as well as a Neural Engine.
This article is part of an ongoing series that covers the features of Apple Intelligence as they become available. This article focuses on the Apple Intelligence feature called "Summarization".
Summarization
Communication is an important part of human society. As humans, we have become quite adept at creating ways of communicating. There are effectively two types of communication: asynchronous communications and synchronous, or real-time, communications. Asynchronous communications could be something like newspapers, magazines, and for something more modern, email, and even social media. Real-time communications can include things like text messages, iMessages, WhatsApp, and Google Chat, just to name a handful.
There are those communications that are more informational and more than likely one-way. The prime example of this is notifications from an app. This could be a notification about an email, a new podcast episode, or even just a notification about a new post from one of your friends.
With the amount of text that everyone comes across each day, it can easily become overwhelming. For notifications, you can just disable all notifications for an app within the Settings app on iOS and iPadOS, or System Settings on macOS, but this is not always a viable solution depending on your needs.
There are a number of areas where you can get summaries. This includes notifications and email. Let us start with notifications.
Summarizing Notifications
Sometimes, it would be great to be able to get a brief synopsis of the notifications that you have received. Now, with Apple Intelligence, you can. Below is a summarized Ivory notification of a post from my friend Barry:
"Sequoia and Time Machine backups issues, one SSD stopped working, the other slow."
Here is the original text:
"Have you had any issues with Sequoia and Time Machine backups? I have two SSD's that used to alternate backups but one has stopped working and the other takes forever to run the "cleaning up" portion of the backup at the end."
This is a pretty good summary of the original text. When I saw this message, I immediately tapped to see the entire message. This is not the only example of summarization. Here is another example from Overcast:
"No episode today; return on Friday, October 10th; Google's Play Store remedies discussed."
The way that this seems to work is by summarizing the titles of the podcast episodes. In most cases, this might be okay, but this summary is missing some key details, in particular, which podcast does not have an episode today. Now, later in the day, after additional episodes were downloaded, this was the summary:
"Stratechery discusses Google's Play Store remedies; Rebound Prime episode bootleg available"
As you could have surmised, this is a much better summary of the notifications that I received for the various podcasts I subscribe to.
Now, it should be noted that this is with iOS 18.1, which means that developers do not have access to any sort of application programming interface, or API, for suggesting anything to Apple Intelligence, so this is strictly what Apple's own models think is the proper summary.
Another tidbit to note is that each app will be summarized on its own. Therefore, you will get a different summary for your iMessage conversations, Instagram posts, and Overcast podcast notifications. That is not the only summarization that you can get; you can also get summaries of emails.
Email Summaries
Everyone has received a rather long email, and you may want a short summary of the email. Mail on iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1 will handle this for you automatically. When you view your list of emails, you will see a summary directly below the sender and subject line.
While each email is automatically summarized, you can also get a longer summary within the email message. The way that you can do this is by using the following steps:
Open Mail.
Locate the email message that you would like to summarize.
Scroll up to the top of the email message.
Click on the "Summarize" button.
Once you click on this, Apple Intelligence will then analyze the email message and then provide a summary directly above the email.
Here are three different summaries of Justin Robert Young's "Free Political Newsletter."
From September 30th, 2024: "The article discusses the possibility of an October Surprise in the upcoming election, categorizing potential surprises into four types: policy surprises, opposition dumps, acts of God, and legal surprises. It also highlights James Carville's opinion that swing states are likely to move as a block, rather than splitting evenly."
From October 4th, 2024: "The article discusses the possibility of an October Surprise in the upcoming election, categorizing potential surprises into four types: policy surprises, opposition dumps, acts of God, and legal surprises. It also highlights James Carville's opinion that swing states are likely to move as a block, rather than splitting evenly."
From October 7th, 2024: "Democratic ads focus on healthcare and portray Kamala Harris as caring, while Republican ads portray her as frivolous and unserious. The GOP Senate map is favorable, but the party may not have the funds to play in all the states they could win."
All of these are decent summaries of the email messages. As you might suspect, you can only summarize a single email message at a time. You cannot summarize multiple emails, and this makes sense because the emails could cover a variety of different topics. Now, the items above were decent examples, but not all emails are great for summarization. Here is what each of Audible's Daily Deal emails results in:
"Today's Daily Deal is $2.99 and ends at 11:59 PM PT. Offer is not transferable, cannot be combined with other offers, and sale titles are not eligible for return."
Now, honestly, these are completely useless because the title is never displayed. The reason for this is that the emails from Audible never include the title as text within the email. Instead, that information does not appear until the remote content is downloaded.
To Preview or Not to Preview
Mail provides you with the ability to control whether or not each message preview should be summarized. By default, this feature is enabled, but you can change it if you do not want summarized previews. The method by which you accomplish this depends on the operating system. You can use the steps below to change the setting.
On macOS
Open the Mail app.
Click on the "Mail" menu item.
Click on Settings.
Click on the "Viewing" tab.
Uncheck "Summarize Message Previews".
On iOS/iPadOS
Open Settings.
Scroll down to "Apps".
Tap on Apps to open up the apps list.
Scroll down to, or search for, Mail.
Tap on Mail to open its settings.
Under Message List, tap the toggle for "Summarize Message Previews".
These are pretty straightforward steps to change whether Mail summarizes message previews within the message list. This is not the only Apple Intelligence item related to Mail. Mail has a couple of other features, including smart replies and priority messages. Let us look at both, starting with Smart Replies.
Smart Replies in Mail
When you receive an email, you may want to write a reply, but may not always be able to come up with the right words. It could be helpful to have an appropriate reply generated for you. This is possible with a new feature called "Smart Replies". Smart Replies are designed to create a reply to an email on your behalf. This is done by looking for any questions within the email and then generating a contextual response.
As an example, I looked at an email that I got from Patreon for an episode of "The Morning Stream" with Scott Johnson and Brian Ibbott. Live listeners generate possible titles during the show, and sometimes topics can also generate titles. Within this particular episode, one of the titles was "Is it too early for a Chicken Big Mac?". The Mail app on iOS provided two possible responses within the QuickType bar, "Yes" and "No". If I clicked on one of these, it would provide an appropriate response.
For " Yes", it was "Yes, it is too early for a Chicken Big Mac. I'll have to wait until later in the day to enjoy one." For "No", it created "No, it's never too early for a Chicken Big Mac." For any TMS listeners, the answer is always "No, it's never too early for a Chicken Big Mac". This is just one example of how it might be used. Here is another example.
Recently, I went to a book signing for John Scalzi's Starter Villain at my local bookstore. I received the confirmation for the event, and Mail provided two options for replying.
The first option was "I'll be there", and the generated response was "I'll be there tonight. I'm looking forward to meeting John Scalzi and getting my book signed." The second option was "Can't make it", and the generated response for this was "Hi, Unfortunately, I won't be able to make it to the event tonight. Thanks…"
Both of these are appropriate, and for the "I'll be there" option, it absolutely took contextual clues from the email to provide an appropriate response. Obviously, your mileage will vary given that each email is different. I tested a bunch of emails, and some did not provide any smart reply options, so you may not always see suggestions. There is one last feature: Priority emails.
Priority Messages
A lot of people receive a tremendous amount of email in the course of a day. I am not one of these people. The emails that I receive are generally just informational emails, like from Patreon, bills, or even newsletters. It is not often that I get a personal email sent to me. However, there are those who get a lot of emails. For these individuals, it might be crucial to see the most important emails. Now, with iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, this is a feature that you can utilize.
Much like Smart Replies and Summarization, Priority Inbox is enabled by default, including on the "All Inboxes" mailbox, if you have more than one configured mail account. You can configure each inbox for Priority Messages by performing the following steps:
Open the Mail app.
Click on the inbox you want to configure for Priority.
Click on the "…" icon in the upper right corner.
Uncheck "Show Priority".
If you have Priority inbox enabled, Mail will attempt to bring the most important messages to the top of your inbox. This is useful to make sure that you see the items that you really need to see. Now, it should be noted that this is not Mail Categorization. That is not available in iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1. Mail Categorization will be available in a future update.
Closing Thoughts on Summarization and Mail
You can easily get a quick summary of notifications. This could be a series of messages from a group chat, notifications about new podcast episodes, or even notifications about a new post from one of your friends. Each summary is grouped by app, and these summaries will be updated as new notifications come in. But these are not the only summaries that you can receive. Mail will automatically provide a summary for you. These summaries are shown below the sender and email subject and are typically only a line long. If you want a slightly longer summary, you can get this by clicking on the "Summarize" button above the email.
Mail will automatically organize your emails to show "Priority Messages". Priority Messages are those messages that Mail thinks are the most important to you. While it is enabled by default, you can configure this behavior on a per-inbox basis.
Be sure to check out all of the other articles in the series:
Today, Apple has unveiled a new Mac mini that has the M4. This is not just a spec bump; it includes a couple of new features, chief amongst them a new form factor.
Form Factor
The Mac mini was introduced in 2005 and was a smaller version of the Mac, hence the name Mac mini. The Mac mini was 6.5 inches wide, 6.5 inches deep, and 2 inches tall. This remained the form factor until 2011, when a new unibody version was introduced, one that eliminated the internal disc drive. This Mac mini was physically larger at 7.7 inches wide and 7.7 inches deep, but only 1.4 inches tall. All Mac minis introduced since 2011 have had the exact same physical footprint, including the M1 and M2 Mac minis. This all changes with the M4.
In 2022, Apple introduced a whole new machine, the Mac Studio. This took some of the design elements from the Mac mini but expanded them. The M1 and M2 Mac Studios were 7.7 inches wide and 7.7 inches deep, but significantly taller at 3.7 inches.
The M4 Mac mini takes some design cues from the Apple TV. It is 5 inches wide, 5 inches deep, and only 2 inches tall. This means that it is smaller than the previous Mac mini, but still a bit larger than an Apple TV. Before we dive into the ports, let us look at the processor.
M4 and M4 Pro
The Mac mini has come with a variety of processors. The previous M2 Mac mini was available in both M2 and M2 Pro variants. The same continues for the M4 Mac mini, with the M4 and M4 Pro. The M4 consists of a 10-core CPU, with 4 performance cores and 6 efficiency cores, and a 10-core GPU. According to Apple, the M4 Mac mini is significantly faster than the M1 Mac mini. Specifically:
When compared to the Mac mini with M1, Mac mini with M4:
- Performs spreadsheet calculations up to 1.7x faster in Microsoft Excel.
- Transcribes with on-device AI speech-to-text up to 2x faster in MacWhisper.
- Merges panoramic images up to 4.9x faster in Adobe Lightroom Classic.
The M4 Pro has two configurations. The first is a 12-core CPU, with 8 performance cores and 4 efficiency cores, and a 16-core GPU. The other M4 Pro option is a 14-core CPU, with 10 performance cores and 4 efficiency cores, and a 20-core GPU. From Apple's press release:
When compared to the Mac mini with M2 Pro, Mac mini with M4 Pro:
- Applies up to 1.8x more audio effect plugins in a Logic Pro project.
- Renders motion graphics to RAM up to 2x faster in Motion.
- Completes 3D renders up to 2.9x faster in Blender.
All M4 and M4 Pro models have a 16-core Neural Engine for machine learning and Apple Intelligence tasks.
Ports
The M4 Mac mini has a total of 7 ports: an ethernet jack, an HDMI port, and 5 USB-C ports. Of the USB-C ports, two are on the front, much like the Mac Studio, and three are on the back. The two on the front are USB-C with USB 3 speeds up to 10 gigabits per second. The three ports on the back are Thunderbolt/USB 4 ports. For the M4 models, these are Thunderbolt 4 ports, which can deliver data up to 40 gigabits per second. On the M4 Pro devices, they are Thunderbolt 5 ports, which can deliver a whopping 120 gigabits per second. The USB portion can deliver up to 40 gigabits per second.
The difference in Thunderbolt ports does mean that there is a difference in DisplayPort compatibility. The Thunderbolt 4 ports support DisplayPort 1.4 while the Thunderbolt 5 ports support DisplayPort 2.1. The HDMI port on either model can support one display with 8K resolution at 60Hz, or 4K resolution at 240Hz.
By default the Ethernet port is a gigabit port, but you can opt for a 10-gigabit-per-second port for $100 more. The Mac mini has long had a headphone jack, and it is still present on all models of the M4 Mac mini.
Pricing and Availability
The M4 Mac mini starts at $599 for 16GB of unified memory and 256GB of storage. You can configure the M4 models with 24GB or 32GB of memory, and up to 2TB of storage.
The M4 Pro Mac mini starts at $1399 for a 12-core CPU and 16-core GPU, 24GB of unified memory, 512GB of storage. You can configure the M4 Pro Mac mini with 48GB or 64GB of unified memory, and 1TB, 2TB, 4TB, or 8TB of storage.
The M4 Mac mini is available for pre-order today and will be available for delivery and in store on Friday November 8th.
Closing Thoughts
While other Macs have received redesigns built around the lower power usage of Apple Silicon, the Mac mini was not one of them. The Mac mini has finally received its redesign. The smaller form factor takes cues from both the Mac Studio and the Apple TV. The M4 and M4 Pro should be great upgrades for anyone who has an Intel Mac, and even if you are upgrading from the M1, it will still be a solid update.
Apple is not known for jumping on bandwagons or being the first to create new categories of technology; they typically leave that to others. However, if there is a technology they can put their own spin on, they might do so. At their Worldwide Developers Conference in 2024, they introduced one of these technologies, called "Apple Intelligence".
Apple Intelligence is not a single item; in fact, it goes against the grain of other AI assistants and works only on your data. Apple Intelligence consists of a variety of tools to help you accomplish specific tasks. When introduced, Apple indicated that the initial features of Apple Intelligence would be released over the course of the iOS/iPadOS 18 and macOS Sequoia releases.
The items that comprise Apple Intelligence include: Writing Tools, Image Generation, and Personalized Requests. Initially, Apple wanted to have the first items available with iOS 18; however, during the beta, Apple realized that the features would not be far enough along for an initial iOS/iPadOS 18.0 and macOS Sequoia (15.0) release, so they were pushed to iOS/iPadOS 18.1 and macOS Sequoia 15.1.
Not every device that can run iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1 is able to support Apple Intelligence. To be able to run Apple Intelligence you need to have one of the following devices:
iPhone 16/Plus (A18)
iPhone 16 Pro/Pro Max (A18 Pro)
iPhone 15 Pro/Pro Max (A17 Pro)
iPad mini (A17 Pro or later)
iPad Air (M1 or later)
iPad Pro (M1 or later)
Apple Silicon Mac (M1 or later)
The reason that these devices are the minimum is a combination of needing 8GB of memory as well as a Neural Engine.
This article is part of an on-going series that covers the features of Apple Intelligence, as they become available. This article focuses on the Apple Intelligence feature called "Typing with Siri".
Siri
Siri is Apple's personal assistant. Back in 2010, Apple acquired a voice assistant called Siri, and in 2011, with the release of iOS 5, Siri became integrated into the operating system. Once integrated, Siri could perform a few more actions, and over time you have been able to do even more with Siri, like getting information about the weather, asking who was in a particular movie, or even getting the latest sports scores.
Siri has also expanded to more than just the iPhone and the Mac. You can use Siri on your Apple Watch, Apple TV, and HomePod. To use Siri on these devices, you can either hold down a particular button or use the phrase "Hey Siri" to activate it, which has been the wake phrase since it was introduced in 2014. Last year, in 2023, with the release of iOS 17, iPadOS 17, and macOS Sonoma, Apple added the ability to use just the word "Siri" instead of "Hey Siri". This was a boon, but voice may not be the only way you want to interact with Siri.
Type to Siri
One of the limitations of Siri has been that you need to use your voice. This works in a variety of situations, like at home, while driving, or anywhere you are alone. However, there are times when you may not want to speak but still want to use Siri. Now there is a new way of using Siri: typing to it.
The way that you use Type to Siri differs depending on the operating system. On iPhone and iPad, you simply double-tap on the home indicator at the bottom of the screen. If you have a keyboard connected to your iPad, you can also use the keyboard shortcut Globe + S.
It is different on macOS. There, the default keyboard shortcut is to press either of the "Command" keys twice, but this is not enabled out of the box. Before you can type to Siri, you will need to enable it. On macOS this can be done by using the following steps:
Open System Settings.
Click on "Apple Intelligence & Siri" to bring up the Apple Intelligence & Siri settings.
Enable the "Siri" toggle.
Once enabled, you can press either of the Command keys twice in a row. However, you may want to use the same key combination as on iOS and iPadOS. This can be done by selecting the appropriate "Keyboard Shortcut" option within the Apple Intelligence & Siri settings. The system options are:
Globe + S
Press Left Command Key Twice
Press Right Command Key Twice
Press Either Command Key Twice
Custom
If you select "Custom", you will need to enter the keyboard combination that you want to use. It is best to avoid any existing system key combinations, otherwise the shortcuts will conflict. Now, let us look at actually using Type to Siri.
Using Type to Siri
Once you bring up Type to Siri, you will see a text box where you can enter your request. After tapping the "send" button or hitting the Enter key, your request will be sent to Siri. Instead of the result being spoken out loud, it will be shown on the screen. As you type, Siri will provide suggestions for things you may want to do.
Suggested Actions
As an example, if you start typing "Create", you may get something like "Create a new note". Similarly, if you type "Play", you may get suggestions for playing certain music playlists. For me, it was "Play New Music - 2024/09", "Play Heavy Rotation playlist", and "Play Guilty as Sin? by Taylor Swift". Each of these is a playlist, or song, that I have been playing a lot lately.
The suggestions I got are from my iPhone. When I tried the same thing on my MacBook Pro I got "Open Playgrounds", "Play the news", and "Play some music". Similarly, on my iPad Pro I got "Play my voicemail", "Play my Audiobook", and "Open Playgrounds".
The different responses make complete sense because the requests are processed locally and the suggestions are contextual to what you do on that device. Because I do not play music on my iPad Pro, Siri did not suggest that as an option. To be honest, I am a bit confused as to why it would suggest "Play my voicemail" when there is no Phone app on the iPad.
Results
Just as when you use your voice with Siri, you can perform more than just the suggested actions. You can type the same requests that you would normally say. My go-to example is the tongue twister "How much wood would a woodchuck chuck, if a woodchuck could chuck wood?". Siri naturally responded with:
About as much ground as a groundhog could hog if a groundhog could hog ground.
How about another tongue twister?
These are just a couple of examples of what you can do when you type to Siri. It may not seem like a big deal, but being able to use your keyboard with Siri is a huge shift in how and when you might use it. You are no longer required to use your voice, which means Siri can be used in almost any situation, something many have wanted since Siri was introduced.
Closing Thoughts on Siri
Now, you do not need to be self-conscious about using Siri in public, because you do not need to say anything. Simply type your request, and Siri will show you the results, offering suggestions as you type.
Siri will be getting even more features later, but this is the current new feature for Siri, at least as of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1.
This post is just one in a series about Apple Intelligence. There will be more articles in this series, so be sure to check out the others.
Today Apple unveiled a new iMac, one powered by the M4. While it might seem like a small update from the M3, there are a number of improvements, including the M4, ports, and colors, just to name a few items.
M4
The 24-inch iMac is powered by the M4 chip. This comes in two processor configurations, an 8-core CPU with 8-Core GPU model, and a 10-Core CPU with 10-Core GPU model. According to Apple, the M4 iMac is up to 1.7x faster for daily productivity and up to 2.1x faster for graphics editing and gaming; at least when you compare it to the M1 iMac.
Display
The size of the iMac has not changed, but there is a new nano-texture display option, similar to the one available on the iPad Pro and the Apple Studio Display. It costs $200 more and is only available on the 10-core models.
Beyond this, there is a new 12-megapixel Center Stage camera. This should provide even better quality, and it is capable of providing Desk View, the ability to show your desk while on a video call; the previous iMac could not do this.
Colors
The 24-inch iMac has always come in a variety of colors, and the available colors have been updated. There are seven options:
Silver
Blue
Purple
Pink
Orange
Yellow
Green
Unlike the previous model, all of the colors are available with any processor choice. There is a difference depending on the model, though, and that is the ports. To go with the new colors are new color-matched accessories, including the Magic Keyboard with Touch ID, Magic Trackpad, and Magic Mouse. These all now use USB-C, instead of the previous Lightning connector. Beyond the port change, the design and port locations of the accessories have not changed at all.
Ports and Connectivity
Depending on the processor, you will get either two or four ports. The 8-core CPU model has two Thunderbolt / USB 4 ports, while the 10-core CPU models have four Thunderbolt 4 ports. All of the iMacs have Wi-Fi 6E and Bluetooth 5.3. The four Thunderbolt 4 ports mean that you can connect up to two 6K external displays, an improvement over the M3 model, which only supported one external 6K monitor.
Pricing
There are actually four different starting configurations available:
8-Core CPU with 8-Core GPU, 16GB of unified memory, and 256GB of storage - $1299
10-Core CPU with 10-core GPU, 16 GB of unified memory, and 256GB of storage - $1499
10-Core CPU with 10-core GPU, 16 GB of unified memory, and 512GB of storage - $1699
10-Core CPU with 10-core GPU, 24 GB of unified memory, and 256GB of storage - $1899
You can configure the 10-Core models with up to 32GB of unified memory and up to 2TB of storage. The 10-Core models also come with Ethernet, whereas the 8-core model is Wi-Fi only, but you can add Ethernet to that model for $30.
Closing Thoughts
You can pre-order the new iMac today, and it will be available starting on Friday, November 8th. If you are looking for a new iMac, now is the time to upgrade, particularly if you have an Intel machine or want to move on from an M1 iMac.
The term "Artificial Intelligence" can garner a number of thoughts, and depending on who you ask, these can range from intrigue, worry, elation, or even skepticism. Humans have long wanted to create a machine that can think like a human, and this has been depicted in media for a long time. Frankenstein is an example where a machine is made into a human and then is able to come to life . Another great example is Rosie from the 1960s cartoon The Jetsons. In case you are not aware, The Jetsons is a fictional animated tv show that depicts the far future where there are flying cars, and one of the characters, Rosie, is an robot that can perform many household tasks, like cleaning and cooking.
We, as a society, have come a long way to creating modern "artificial intelligence", but we are still nowhere close to creating a robot that is anywhere close to human. Today's modern artificial intelligence falls into a number of categories, in terms of its capabilities, but it is still a long way off from being the idealistic depiction that many expect artificial intelligence to be.
Artificial Intelligence comes in a variety of forms. This includes automated cleaning robots, automated driving, text generation, image generation, and even code completion. There are many companies that are attempting to create mainstream artificial intelligence, but nobody has done so that we know of.
Apple is one of those companies, but they are taking a different approach with their service called Apple Intelligence. Apple Intelligence is Apple's take on artificial intelligence. Apple Intelligence differs in a number of ways from standard "artificial intelligence". This includes the use of on-device models, private cloud computing, and personal context. Before we delve into each of those, let us look at artificial intelligence, including a history.
Artificial Intelligence
Artificial intelligence is not a new concept. You may think of it as a modern invention, but it harkens back to World War II and Alan Turing. Turing is known for creating a machine that could crack the German Enigma codes. In 1950, Turing published a paper that became the basis of what is known as the "Turing Test": a test of whether a machine can exhibit intelligent behavior that is indistinguishable from a human's.
There have been a number of enhancements to artificial intelligence in recent years, and many of the concepts that have been used for a while have come into more common usage. Before we dive into some aspects of artificial intelligence, let us look at how humans learn.
How Human Brains Operate
In order to attempt to recreate the human brain in a robot, we first need to understand how a human brain works. While we have progressed significantly, we are still extremely far from fully understanding how a human brain functions, let alone recreating one.
Even though we do not know everything about the brain, there is quite a bit of information that we do know. Human brains are great at spotting patterns, and the way that this is done is by taking in large amounts of data, parsing that data, and then identifying a pattern. A great example of this is when people look at clouds. Clouds come in a variety of shapes and sizes, and many people attempt to find recognizable objects within the clouds. Someone is able to accomplish this by taking their existing knowledge, looking at the cloud, determining if there is a pattern, and if there is one, identifying the object.
When a human brain is attempting to identify an object, what it is doing is going through all of the objects (animals, plants, people, shapes, etc.) that they are aware of, quickly filtering them, and seeing if there is a match.
The human brain is a giant set of chemical and electrical synapses that connect to produce consciousness, and it is commonly described as a neural network due to its web of neural pathways. According to researchers, when humans update their knowledge, what is happening in a technical sense is that the weights of the synaptic connections that form this neural network are updated. As we go through life, our previous experiences shape our approach to things, and they can also affect how we feel about things in a given moment.
This approach is similar to how artificial intelligence operates. Let us look at that next.
How Artificial Intelligence Works
The current way that artificial intelligence works is by allowing you to specify an input, or prompt, and having the model create an output. The output can be text, images, speech, or even just a decision. Modern artificial intelligence is largely based on what is called a neural network.
A Neural Network is a machine learning algorithm that is designed to make a decision. The manner in which this is done is by processing data through various nodes. Nodes generally belong to a single layer, and for each neural network, there are at least two layers: an input layer and an output layer.
Each node within a neural network is composed of three things: weights, a threshold (also called a bias), and an output. Data goes into the node, the weights and threshold are applied, and an output is produced. For a node to actually come to a determination, it relies on training, or what a human might call knowledge.
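To make this a bit more concrete, here is a minimal sketch of a single node, written in Swift. The weights, bias, and inputs are made-up illustrative values rather than anything from a real model; the point is simply that a node computes a weighted sum, adds a bias, and passes the result through an activation function.

```swift
import Foundation

// A minimal sketch of a single neural-network node: weighted inputs,
// a bias (threshold), and an activation function that produces the output.
// The weights and inputs are made-up illustrative values.
struct Node {
    var weights: [Double]
    var bias: Double

    // Sigmoid activation squashes the weighted sum into the range 0...1.
    func activate(_ inputs: [Double]) -> Double {
        let weightedSum = zip(inputs, weights).reduce(0) { $0 + $1.0 * $1.1 } + bias
        return 1.0 / (1.0 + exp(-weightedSum))
    }
}

let node = Node(weights: [0.8, -0.4, 0.2], bias: 0.1)
print("Node output:", node.activate([1.0, 0.5, -1.0])) // a value between 0 and 1
```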
Training
Humans have a variety of ways of learning, including family, friends, media, books, TV shows, audio, and simply exploring. Neural networks cannot be trained this way. Instead, they need to be given enormous amounts of data in order to learn.
Each node within a neural network produces an output and sends it to another node, which produces its own output, and the process continues until a result is determined. Each time a result is determined, a positive or negative correlation is recorded. Much like with a human, the more positive connections that are made, the better, and eventually the positive correlations will outweigh the negative ones. Once the network has made enough positive correlations (gotten the right answer often enough), it is considered trained.
There are two common types of training: supervised learning and reinforcement learning.
Supervised learning is the idea of feeding a training model labeled data so that it can learn the rules and provide the proper output. Typically, this is done using one of two methods: classification or regression. Classification is pretty simple to understand. Let us say that you have 1,000 pictures, 500 of dogs and 500 of cats. You provide the training model with each photo individually and tell it the type of pet in each image.
Reinforcement learning is similar, but different. In this scenario, let us say you have the same 1,000 pictures, again 500 dogs and 500 cats, but instead of telling the model what is what, you let it determine the similarities between the items, and as it continues to get them right, that feedback reinforces what it has learned.
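To illustrate supervised learning at a toy scale, here is a small Swift sketch of a perceptron, one of the simplest trainable nodes, learning the logical AND function from labeled examples. The data, learning rate, and number of passes are arbitrary illustrative choices, not from any real training set.

```swift
import Foundation

// Supervised learning on a toy problem: labeled examples of the logical AND
// function, analogous to telling the model which photos are dogs and which are cats.
let samples: [(inputs: [Double], label: Double)] = [
    ([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)
]

var weights = [0.0, 0.0]
var bias = 0.0
let learningRate = 0.1

func predict(_ inputs: [Double]) -> Double {
    let sum = zip(inputs, weights).reduce(0) { $0 + $1.0 * $1.1 } + bias
    return sum > 0 ? 1 : 0
}

// Repeatedly show the model every example and nudge the weights toward the
// correct answer whenever it gets one wrong.
for _ in 0..<20 {
    for sample in samples {
        let error = sample.label - predict(sample.inputs)
        for i in weights.indices {
            weights[i] += learningRate * error * sample.inputs[i]
        }
        bias += learningRate * error
    }
}

print(predict([1, 1])) // 1.0, the positive case was learned
print(predict([0, 1])) // 0.0
```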
Inference
Inference, in reference to artificial intelligence, is the process of applying a trained model to a set of data. The best way to test a model is to provide it with brand-new data and see whether it can infer the correct result.
Inference works by taking the new data as input and applying the weights, also known as parameters, that are stored in the model.
Inference is not free; it has a cost, particularly when it comes to energy usage. This is where optimizations can be useful. As an example, Apple will utilize the Neural Engine as much as possible for its on-device inference, because the Neural Engine is optimized to perform inference tasks while minimizing the amount of energy needed.
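As a practical, hedged example of that kind of optimization, a developer using Apple's Core ML framework can ask that inference stay on the CPU and Neural Engine rather than the GPU. The configuration API below is the real Core ML one; the model file name is hypothetical.

```swift
import CoreML
import Foundation

// Ask Core ML to run inference on the CPU and Neural Engine only, one way an
// app can favor the power-efficient path described above.
let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuAndNeuralEngine

// "SomeModel.mlmodelc" is a hypothetical compiled model, purely for illustration.
let modelURL = URL(fileURLWithPath: "SomeModel.mlmodelc")
do {
    let model = try MLModel(contentsOf: modelURL, configuration: configuration)
    print("Loaded model:", model.modelDescription)
} catch {
    print("Could not load model:", error)
}
```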
Artificial Intelligence Use Cases
No tool is inherently good or inherently bad; how it is used determines whether a particular usage is positive or negative, and artificial intelligence is no different. Artificial intelligence has a wide range of possible use cases. Current artificial intelligence is capable of helping detect cancer, synthesize new drugs, detect brain signals in amputees, and much more. These are all health-related, and health is where many artificial intelligence models are thriving at the moment, but that is not all that is possible.
Not all artificial intelligence usage is positive. There are many who will want to make what are called "deep fakes". A deep fake takes someone and either places them in a situation where they never were or makes them appear to say something they never said. Manipulation itself is not new, not by a long shot; since the inception of photography, there have been manipulated images designed to influence people into thinking a particular way. As you might guess, this can have detrimental effects because it distorts reality. While there are those who want to use the technology for nefarious purposes, there can also be positive use cases.
Back in 2013, country music artist Randy Travis suffered a stroke and, as a result, now has aphasia, which, according to the Mayo Clinic, is "a disorder that affects how you communicate." This effectively left him unable to perform. However, in May of 2024, a brand-new Randy Travis song was released, created with the help of two proprietary AI models. This was done with full permission from Randy Travis himself, so there is no issue there.
Let us look at a couple of different approaches used, including Large Language Models and Image Generators.
Large Language Models
Large language models, or LLMs, are those that are able to generate language that a human would understand. To quote IBM:
"In a nutshell, LLMs are designed to understand and generate text like a human, in addition to other forms of content, based on the vast amount of data used to train them. They have the ability to infer from context, generate coherent and contextually relevant responses, translate to languages other than English, summarize text, answer questions (general conversation and FAQs), and even assist in creative writing or code generation tasks." - Source: IBM.
LLMs can be used for generating, rewriting, or even changing the tone of text. This is possible because most languages follow fairly rigid rules, which makes it a tractable task to estimate the probability of the next word in a sentence.
The way that an LLM is trained is by consuming vast amounts of text. It recognizes patterns in this data and can then generate text based upon what it has learned.
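Here is a deliberately tiny Swift sketch of that idea: count which word follows which in a miniature corpus, then "generate" by picking the most likely next word. Real LLMs use neural networks, vastly more data, and much longer context than a single previous word, but the underlying predict-the-next-word idea is the same.

```swift
import Foundation

// A toy next-word model: count word pairs in a tiny, made-up corpus.
let corpus = """
the cat sat on the mat
the cat chased the mouse
the dog sat on the rug
"""

var nextWordCounts: [String: [String: Int]] = [:]
for line in corpus.split(separator: "\n") {
    let words = line.split(separator: " ").map(String.init)
    for i in 0..<(words.count - 1) {
        nextWordCounts[words[i], default: [:]][words[i + 1], default: 0] += 1
    }
}

// "Generate" by always choosing the most frequently observed next word.
func mostLikelyWord(after word: String) -> String? {
    nextWordCounts[word]?.max { $0.value < $1.value }?.key
}

print(mostLikelyWord(after: "the") ?? "?") // "cat", seen twice
print(mostLikelyWord(after: "sat") ?? "?") // "on"
```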
Image Generation
One of the uses of modern artificial intelligence is the ability to create images. Similar to LLMs, image generation models have been trained on a massive number of images, and that data is what powers the actual image generation. Depending on the model, you may be able to generate various types of images, ranging from cartoons to completely realistic ones.
Many image generation models use a technique called Generative Adversarial Networks, or GANs. A GAN works by using two different algorithms, the generator and the discriminator, that work in tandem. The generator outputs a candidate image, initially little more than random pixels, and sends it over to the discriminator. The discriminator, which has knowledge of millions of pictures of what you are trying to generate, provides a result, which is basically a "yes" or "no". If it is a "no", the generator will try again and again.
This back and forth is what is called an "adversarial loop" and this loop continues until the generator is able to generate something that the discriminator will say matches the intended type of image.
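Here is a drastically simplified Swift sketch of that adversarial loop. The "generator" is just a single number nudged toward realistic values, and the "discriminator" simply checks whether a sample looks like the handful of real samples it knows about. Real GANs use two neural networks trained with gradients, so treat this only as an illustration of the generate, judge, and retry structure.

```swift
import Foundation

// A stand-in for "millions of real pictures": a few real-valued samples.
let realSamples = [4.8, 5.1, 5.0, 4.9, 5.2]
let realMean = realSamples.reduce(0, +) / Double(realSamples.count)

// The discriminator's verdict: "yes" if the sample looks like real data.
func discriminatorAccepts(_ sample: Double) -> Bool {
    abs(sample - realMean) < 0.1
}

// The generator starts with random noise and keeps trying until accepted.
var generated = Double.random(in: 0...10)
var attempts = 0
while !discriminatorAccepts(generated) {
    // The rejection is the feedback: nudge the output toward realism and retry.
    generated += (realMean - generated) * 0.2
    attempts += 1
}
print("Accepted after \(attempts) attempts: \(generated)")
```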
Newer image generators, such as diffusion models, take a different approach to training. They start with an image, purposely introduce noise into it, and do so again, and again, and again, reiterating a large number of times; the model then learns to reverse that process, and this noisy data becomes the basis for generating new images.
All of this is a good base for looking at what Apple has in store for its own artificial intelligence technologies, so let us look at that now.
Apple and Artificial Intelligence
You might think that Apple is late to the artificial intelligence realm, but in fact Apple has been working with artificial intelligence for many years; it has just been called something else. Some of the areas where Apple has been using artificial intelligence are Photos, Siri, Messages, and even auto-correct.
Apple Intelligence
As mentioned above, Apple Intelligence is Apple's take on artificial intelligence. Apple Intelligence differs from standard artificial intelligence in that Apple intelligence is designed to work on YOUR information, not on general knowledge. The primary benefit of working on your data is that your data can remain private. This is done using on-device models.
On-Device Requests
A vast majority of Apple Intelligence requests will be performed on your device. There are a number of examples of this, including things like:
"Find me pictures of [someone] while in London."
"When is Mom's flight landing?"
Apple has been doing a lot of research into machine learning models that can run on-device. The models have needed to keep the same level of quality while being usable on devices with limited amounts of memory. Limited, of course, is relative; we are not talking about 1GB of RAM, but more like 8GB.
The reason that Apple wants to do as much of the processing as possible on your device is twofold. The first is response time. By having the device handle requests, responses can be almost instantaneous, which is also beneficial for those times when you may not have connectivity. Sending every request to the cloud would introduce some amount of delay, even with an incredibly fast connection.
The second reason is privacy. Privacy is a big part of Apple's core beliefs. When using your own device and processing the request on the device, that means that nobody else will get access to your data, not even Apple. Instead, only you will have access to your data, which is great for your own peace of mind.
Even though as much as possible will be done on your own device, there may be instances when your device is not able to handle a request locally and the request needs to be sent to the cloud. This can happen with larger models that require additional memory or processing. When this is needed, it is handled automatically by sending the request to Apple's Private Cloud Compute platform. Let us look at that next.
Private Cloud Compute
Nobody wants their data to get out of their control, yet it does happen from time to time. Apple takes data privacy seriously and has done a lot to keep people's data private. This is in contrast to some other artificial intelligence companies, who have no compunction about taking user data and using it to train their machine learning models.
Apple has been working on reducing the size and memory requirements of many machine learning models. They have accomplished quite a bit, but right now some models have more parameters, which means they need more memory than devices are capable of having. In these instances, it may be necessary to use the cloud to handle requests.
Apple has 1.2 billion users, and while not all of the users will utilize Apple Intelligence immediately, Apple still needs to scale up Apple Intelligence to support all users who will be using it. In order to make this happen, Apple could just order as many servers as they want, plug them in, and make it all work. However, that has its own set of tradeoffs. Instead, Apple has opted to utilize their own hardware, create their own servers, and make things as seamless as possible for the end user, all while protecting user data.
Private Cloud Compute is what powers online requests for Apple Intelligence. Private Cloud Compute runs in Apple's own data centers. Private Cloud Compute is powered by a series of nodes. Each of these nodes uses Apple Silicon to process requests. These are not just standard Macs; they have been heavily customized.
Nodes
Each Private Cloud Compute node undergoes significant quality checks in order to maintain integrity. Before the node is sealed and its tamper switch activated, each component undergoes a high-resolution scan to make sure that it has not been modified. After the node has been shipped and arrives at an Apple data center, it undergoes another verification to make sure it still remains untouched. This process is handled by multiple teams and overseen by a third party who is not affiliated with Apple. Once verification has been completed, the node is deployed, and a certificate is issued for the keys embedded in the Secure Enclave. Only once that certificate has been issued can the node be put into service.
Request Routing
Protecting the node is just the first step in securing user data. Apple also uses what is called "target diffusion", a process for making sure that a user's request cannot be routed to a specific node based on the user or the request's content.
Target diffusion begins with the metadata of the request, which is stripped of user-specific data and information about the source device. The metadata is then used by the load balancers to route the request to the appropriate model. To limit what is called a "replay attack", each request also carries a single-use credential, which is used to authorize requests without tying them to a specific user.
All requests are routed through an Oblivious HTTP, or OHTTP, relay, managed by a third-party provider, which hides the device's source IP address well before it ever reaches the Private Cloud Compute node. This is similar to how Private Relay works, where the actual destination server never knows your true IP address. In order to steer a request based on source IP, both Apple's Load Balancer as well as the HTTP relay would need to be compromised; while possible, it is unlikely.
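To make the single-use credential idea a bit more concrete, here is a purely illustrative Swift sketch; none of the names or logic come from Apple's actual protocol. It only shows the concept of a one-time token that authorizes a request without identifying the user and cannot be replayed.

```swift
import Foundation

// Illustrative only: a request carries a payload and a one-time credential,
// with no account or device identifier attached.
struct AnonymousRequest {
    let payload: String
    let credential: UUID
}

final class ComputeNode {
    private var usedCredentials = Set<UUID>()

    func handle(_ request: AnonymousRequest) -> String {
        // Each credential authorizes exactly one request; replays are rejected.
        guard usedCredentials.insert(request.credential).inserted else {
            return "rejected: credential already used"
        }
        return "processed: \(request.payload)"
    }
}

let node = ComputeNode()
let request = AnonymousRequest(payload: "summarize my note", credential: UUID())
print(node.handle(request)) // processed
print(node.handle(request)) // rejected, a replay of the same credential
```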
User Requests
When a user's device makes a request, it is not sent to the entire Private Cloud Compute service as a whole; instead, pieces of the request are routed to different nodes by the load balancer. The response that is sent back to the user's device will specify the individual nodes that should be ready to handle the inference request.
When the load balancer selects which nodes to use, an auditable trail is created. This is to protect against an attack where an attacker compromises a node and manages to obtain complete control of the load balancer.
Transparency
When it comes to privacy, one could say, with confidence, that Apple does what they say they are doing. However, in order to provide some transparency and verification, Apple is allowing security researchers to inspect the software images, which is beyond what any other cloud company is doing.
In order to make sure there is transparency, each production build of Apple's Private Cloud Compute software will be appended to a write-only log. This will allow verification that the software that is being used is exactly what it claims to be. Apple is taking some additional steps. From Apple's post on Private Cloud Compute:
Our commitment to verifiable transparency includes:
1. Publishing the measurements of all code running on PCC in an append-only and cryptographically tamper-proof transparency log.
2. Making the log and associated binary software images publicly available for inspection and validation by privacy and security experts.
3. Publishing and maintaining an official set of tools for researchers analyzing PCC node software.
4. Rewarding important research findings through the Apple Security Bounty program.
This means that should an issue be found, Apple can be notified before it becomes a problem, take action to remedy it, and release new software, all in an effort to keep user data private.
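To get a feel for what an append-only, tamper-evident log looks like, here is a toy Swift sketch using a hash chain: each entry records the hash of the previous entry, so altering any earlier record breaks verification. This is only meant to illustrate the concept and is not Apple's implementation.

```swift
import CryptoKit
import Foundation

struct LogEntry {
    let measurement: String   // e.g. a measurement of a software build
    let previousHash: String
    let hash: String
}

struct TransparencyLog {
    private(set) var entries: [LogEntry] = []

    private func hash(_ previousHash: String, _ measurement: String) -> String {
        let digest = SHA256.hash(data: Data((previousHash + measurement).utf8))
        return digest.map { String(format: "%02x", $0) }.joined()
    }

    // Entries can only be appended, never edited in place.
    mutating func append(_ measurement: String) {
        let previousHash = entries.last?.hash ?? "genesis"
        entries.append(LogEntry(measurement: measurement,
                                previousHash: previousHash,
                                hash: hash(previousHash, measurement)))
    }

    // Recompute every hash; any edited entry makes verification fail.
    func verify() -> Bool {
        var previousHash = "genesis"
        for entry in entries {
            guard entry.previousHash == previousHash,
                  entry.hash == hash(previousHash, entry.measurement) else { return false }
            previousHash = entry.hash
        }
        return true
    }
}

var log = TransparencyLog()
log.append("pcc-build-1.0 measurement")
log.append("pcc-build-1.1 measurement")
print(log.verify()) // true, and it becomes false if any earlier entry is altered
```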
Privacy
When a request is sent to Apple's Private Cloud Compute, only your device and the server can read it. Your data is sent to the server, processed, and the result is returned to you. After the request is complete, the memory on the server is wiped so your data cannot be retrieved; this includes wiping the cryptographic keys for the data volume. Upon reboot, these keys are regenerated and never stored. The result is that no previous data can be retrieved, because the cryptographic keys are sufficiently random that they could never be regenerated.
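The general pattern of an ephemeral, never-stored key can be sketched with Apple's CryptoKit framework. This is not Apple's server code, just an illustration of why data encrypted with a key that only ever existed in memory becomes unrecoverable once that key is gone.

```swift
import CryptoKit
import Foundation

// Encrypt data with a key that is generated in memory, never persisted, and
// discarded when the function returns. Without the key, the ciphertext cannot
// be decrypted, which mirrors the "wiped and regenerated keys" idea above.
func processEphemerally(_ plaintext: Data) throws -> Data {
    let ephemeralKey = SymmetricKey(size: .bits256) // exists only in memory
    let sealedBox = try AES.GCM.seal(plaintext, using: ephemeralKey)
    // ... the request would be processed here ...
    return sealedBox.combined! // once the key goes out of scope, this is unrecoverable
}

do {
    let ciphertext = try processEphemerally(Data("user request".utf8))
    print("Ciphertext bytes:", ciphertext.count)
} catch {
    print("Encryption failed:", error)
}
```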
Apple has gone to extensive lengths to make sure that nobody's data can be compromised. This includes removing remote access features for administration, high-resolution scanning of the Private Cloud Compute node before it is sealed, and making sure that requests cannot be routed to specific nodes, which may allow someone to compromise data. Beyond this, when a Private Cloud Compute node is rebooted, the cryptographic keys that run the server are completely regenerated, so any previous data is no longer readable.
For even more detail, be sure to check out Apple's blog post called "Private Cloud Compute" available at https://security.apple.com/blog/private-cloud-compute.
General World Knowledge
Apple Intelligence is designed to work on your private data, but there may be times when you need to go beyond your own data and use general world knowledge. This could be something like asking for a recipe for some ingredients you have, or it could be a historical fact, or even to confirm some existing data.
Apple Intelligence is not capable of handling these types of requests. Instead, you will be prompted to send these types of requests off to third parties, like OpenAI's ChatGPT. When you are prompted to use one of these, you will need to confirm that you want to send your request and that your private information (for that specific request) will be sent to the third party.
At launch, only OpenAI's ChatGPT will be available. However, there will be more third-party options coming in the future. This type of arrangement is a good escape valve should you need to get some information that is not within your own private data. Now that we have covered what Private Cloud Compute is, let us look at what it will take to run Apple Intelligence.
Minimum Requirements
Apple Intelligence does require a minimum set of requirements in order to be used. Apple Intelligence will work on the following devices:
iPhone 16 Pro/Pro Max (A18 Pro)
iPhone 16/16 Plus (A18)
iPhone 15 Pro/Pro Max (A17 Pro)
iPad mini (A17 Pro)
iPad Pro (M1 and later)
iPad Air (M1 and later)
MacBook Air (M1 and later)
MacBook Pro (M1 and later)
Mac mini (M1 and later)
Mac Studio (M1 Max and later)
Mac Pro (M2 Ultra and later)
There are a couple of reasons why these are the minimum devices. The first is that Apple Intelligence requires a Neural Engine. For the Mac, this was not present until 2020, when the first Macs with Apple Silicon were released. For the iPhone, the first Neural Engine appeared with the A11 Bionic chip in the iPhone 8, iPhone 8 Plus, and iPhone X. Every iPhone since has included a Neural Engine, but that is just one requirement.
The second requirement is the amount of memory. The minimum amount of memory to run the on-device models is 8 gigabytes. The iPhone 15 Pro and iPhone 15 Pro Max were the first iPhones to come with 8GB of memory, and all M1 Macs have had at least 8GB of memory.
Now, this is the minimum amount of memory. Not all features will work with only 8GB of memory. One example is a new feature for developers within Apple's Xcode app. With Xcode 16, developers will have the option of using Apple's Predictive Code Completion Model. When you install Xcode 16, there is an option that allows you to download the Predictive Code completion model, but only if your Mac has 16GB of memory or more. To illustrate this, if you have a Mac mini with 8GB of memory, you will get the following installation screen.
Similarly, let us say you have a MacBook Pro with 32GB of unified memory, you will get this installation screen.
As you can see, the Predictive Code Completion checkbox is not even an option on the Mac mini with 8GB of memory. And the predictive code completion model covers a relatively narrow domain: Swift, while a large programming language, is limited in scope, and even that model does not work with 8GB.
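For what it is worth, checking installed memory is something any Mac app can do. Here is a small Swift sketch, under the assumption that a hypothetical feature should only be offered on Macs with 16GB or more, mirroring the Xcode behavior described above.

```swift
import Foundation

// Gate a hypothetical on-device model on the amount of physical memory.
// The 16 GB threshold mirrors the Predictive Code Completion requirement.
let physicalMemoryGB = Double(ProcessInfo.processInfo.physicalMemory) / 1_073_741_824

if physicalMemoryGB >= 16 {
    print("Offering the on-device model (\(Int(physicalMemoryGB)) GB installed)")
} else {
    print("Hiding the on-device model (\(Int(physicalMemoryGB)) GB installed)")
}
```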
It would not be presumptuous to think that this may be the case for various Apple Intelligence models going forward. Now that we have covered the minimum requirements, let us look at how to enable Apple Intelligence.
Enabling Apple Intelligence
As outlined above, Apple Intelligence is available on compatible devices running iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1. However, Apple Intelligence is not automatically enabled; you will need to enable it. Apple Intelligence is activated on a per-Apple Account basis, and this only needs to be done once. Once activated, it will need to be enabled on each device. To activate Apple Intelligence, perform these steps:
Open Settings on iOS, or iPadOS, or System Settings on macOS Sequoia.
Scroll down to "Apple Intelligence".
Tap, or click, on "Apple Intelligence" to bring up the settings.
Tap, or click, on "Join Apple Intelligence Waitlist". A popup will appear
Tap on the "Join Apple Intelligence Waitlist" button to confirm you want to join the waitlist.
Once you do this, you will join the Apple Intelligence waitlist. It may take some time before you are able to access the features. Once your Apple Account has had Apple Intelligence activated on it, you will then get a notification on your device indicating that Apple Intelligence is ready.
At this point, you can click on the "Turn On Apple Intelligence" button, and a popup will appear that will allow you to enable the features. Once you have enabled Apple Intelligence on your device, you will be able to use the features.
Closing Thoughts on Apple Intelligence
Many artificial intelligence tools require sending your private data to a server in the cloud in order to perform a particular task. Doing this has the potential not only to leak your private data, but also to allow that data to be used to train additional artificial intelligence models. This is antithetical to Apple's core values, so Apple has taken a different approach with its own artificial intelligence, which it calls Apple Intelligence.
Apple Intelligence is designed to work on your private data and maintain that privacy. The way that this is accomplished is through a service called Private Cloud Compute. Private Cloud Compute is a set of servers in Apple's own datacenter that are built on Apple Silicon, utilizing features like the Secure Enclave to maintain the integrity of the server. Beyond this, each time that a request has been completed, the previous keys are wiped, and the server is completely reset and reinitialized with no data being retained between reboots.
Apple Intelligence is designed to help you accomplish tasks that you need, like summarizing text, generating new emojis, creating images, and more.
Apple Intelligence will be a beta feature starting in late 2024, with some overall features not coming until 2025, and it will be English only at first. Furthermore, these features will not be available in the European Union, at least not at first.
Apple Intelligence has some pretty stiff requirements, so it will not work on all devices. You will need an Apple Silicon Mac, an iPad with an M1 or newer or an A17 Pro, or an iPhone with an A17 Pro, A18, or A18 Pro; in other words, an iPhone 15 Pro/Pro Max, iPhone 16/16 Plus, or iPhone 16 Pro/Pro Max.
This is merely an introduction to Apple Intelligence. There will be more articles in this series, so be sure to check out those articles.