Today Apple announced a new iPhone for the iPhone 16 lineup, the iPhone 16e. At first glance, you might think it is the replacement for the iPhone SE, but I think it is really the replacement for the iPhone 14. The iPhone 16e continues to be the entry-level device. The iPhone 16e shares many features with the iPhone 14, but not all of them.
Screen Size
The iPhone 16e has a 6.1-inch screen. This is a big jump over the previous iPhone SE’s screen size of 4.7 inches, but it is the same as the iPhone 14. The increased screen size also means that there is no longer a Home Button. In its place is more screen, and navigation now uses the same swipe gestures that the iPhone X introduced.
The screen itself is an OLED Super Retina XDR display. This means that it supports the P3 color space, reaches 800 nits of brightness, and supports both HDR and True Tone.
Processor
Whether you look at the iPhone 14 or the iPhone SE, both of them have the same processor, the A15 Bionic. That chip is a bit older, and it would not be able to keep up with everything Apple now expects of its phones. The iPhone 16e instead has an A18 in it, just like the iPhone 16 and iPhone 16 Plus. There is one slight difference: the iPhone 16e only has a 4-core GPU, whereas the iPhone 16 has a 5-core GPU. For most users and in most use cases, this will not be noticeable, but it is a difference.
Colors
The iPhone 16e comes in two colors: Black and White. Both of these are matte finishes. These two colors replace the previous Midnight, Starlight, and (PRODUCT)RED. It is possible that additional colors may be added in the future, but for now, these are the only two options. This does mean that Apple does not currently sell any (PRODUCT)RED devices.
Apple Intelligence
As has been rumored and expected, the iPhone 16e does support Apple Intelligence. This is due to the A18 chip and the 8GB of memory in the device. The iPhone 16e supports all of the Apple Intelligence features, like Writing Tools, ChatGPT integration, and Visual Intelligence.
Missing Features
Even though the iPhone 16e has the latest features, like the Action Button, it does not have everything that was present on the iPhone 14. Some users will be willing to make that trade-off, yet for others, these missing items may be deal breakers. The items that are not present on the iPhone 16e include MagSafe and a second camera, both of which the iPhone 14 had, as well as the Dynamic Island. Beyond this, there is no Camera Control button, even though it is present on all other iPhone 16 models.
Storage and Price
The iPhone 16e takes the place of the iPhone 14, which means it starts at $599 for 128GB, $699 for 256GB, and $899 for the 512GB version. You can pre-order it this Friday, February 21st, and pre-orders will start arriving on Friday, February 28th, 2025.
Cases
As they have done in the past, Apple has also released some silicone cases, this time in five colors. These colors are Black, White, Fuchsia, Lake Green, and Winter Blue. These are $39 each, and again, do not support MagSafe.
Closing Thoughts
For an entry-level phone, the iPhone 16e does have some compromises, but it still has a number of features that the current phones have. This includes Emergency SOS, Crash Detection, USB-C, and 5G, just to name a few. For those who need an upgrade, the iPhone 16e could be a great update at a reasonable price. It is indeed more expensive than the previous iPhone SE, but it is the same price as the iPhone 14.
If you are price conscious, and are willing to make a couple of trade-offs in terms of features, then the iPhone 16e might be a good choice.
I have said it before, and I will continue to say it: I know building software is not easy; there will be bugs and issues that may crop up from time to time. Even though I am well aware of bugs, that does not mean that I do not get irritated by them from time to time. The latest issue is one that I noticed with the "Heavy Rotation Mix" within my Apple Music library.
In case you are not aware, the Heavy Rotation Mix is a playlist that, as the name indicates, contains the songs that you have played a lot within the last few days. This playlist is slightly different from the others in that it is updated every day.
The issue that I have is that there is a song that I have not listened to on the list. The specific title is "Fighting For" by Evan Honer. Why would this one be in my playlist if I never listened to it? Well, I have listened to a slightly different version of the song; that one is a duet with Hailey Whitters.
You can see in the image below the fact that the title "Fighting For" is not in my library. This is indicated by the "Add to Library" menu item in the popup.
What makes it even stranger is that it also says "Remove Download", when that particular title is not actually downloaded.
The image below is for "girl you’re taking home" by Ella Langley, and it is in my library. This can be identified by the "Remove" menu item, which will ask whether to remove the download or delete the song from the library.
Why?
What I do not understand is how Apple can add a song that I have not listened to. Yes, the title for the two items is the same, but the song is not in my library. It would be one thing if it was a different album, or even if the album artwork was different, but this is not the issue. It is an entirely different song. The two items have different song id numbers. It is beyond me how Apple cannot be bothered to easily check to see if a song is even in a user’s library. My thought is that if the song is not in a user's library, then it should not be included in the playlist at all.
One thing that is not known, almost a year after being introduced, is whether the playlist is updated on a device and then uploaded to Apple's servers, or whether Apple generates the playlist on their servers and then updates the playlist. Regardless of how it is generated, Apple has access to the data, meaning that there is no reason it should not be able to detect items that are in a user's library.
Closing Thoughts
It is not that the version done by Evan Honer on his own is bad; it is a perfectly good song. However, I personally prefer the duet with Hailey Whitters because it puts a different spin on the song. Another example where this is also true is with the song "The Joker and the Queen" by Ed Sheeran. His version is good, but the duet with Taylor Swift makes the song even better.
Apple's music "matching" needs to actually look at the title and the artist, and whether or not it is a live version. In this case, it is not live, but a duet, which, in my mind, means that because there are two artists, it should never match a version of the song with a single artist.
In the grand scheme of things, this is just a minor annoyance. Yet, this is just one of the numerous software quality issues that seem to be creeping into Apple’s software as of late.
If you have ever built any sort of software, you are well aware that no software is ever bug-free, but it is common to try and eliminate as many bugs as possible. Apple has a large number of operating systems, including audioOS (HomePod), iOS, iPadOS, macOS, visionOS, watchOS, and even tvOS.
Over the last 15 years Apple has made tremendous strides in eliminating the larger bugs, like crashing devices, kernel panics, and even app crashes. That is not to say that they cannot happen, of course they can, but their frequency is significantly less than in previous years. However, it seems like there are still some operating systems that do not get nearly as much attention.
One of the areas where I see a number of bugs is with tvOS. My "favorite" bug occurs within the Library section. There are actually two issues.
Not Always Showing Group
The first issue that I have is that sometimes when switching to either the "Movies" or "TV Shows" tab, the content does not change to show the proper data. The selected group changes, but the actual data does not.
This is easily fixable by choosing another item and then going back. Since there is a workaround, it is not a huge issue, but it is still irritating when you want to watch something.
Incorrect Movie Posters
The more annoying bug that I have is with the movie posters that are shown. As an example, I have a movie called "Tomorrow When the War Began". The poster that is shown is in Spanish, even though I do not own the Spanish version. What is more irritating is that this has happened with several titles previously; this is just the current one.
The more egregious example though, is for the movie "Whiteout". The movie poster for this is COMPLETELY wrong. It shows the movie poster for "Dracula III: Legacy". There is no reason why this should be the case. I am sure this is a caching issue, but the strange part is that it does not happen on all devices.
On my iPhone, it is the right movie poster, and the same is true on my iPad. However, on my MacBook Pro, Apple TV 4K, and Mac Studio, it is the incorrect movie poster. Even on Apple's "Marketing Tools" site, it shows the incorrect movie poster.
Unfortunately, there is no way for me to fix the caching issues. I have reported it to Apple using their Feedback mechanism (FB16415600), but I will likely never hear back.
Closing Thoughts
While Apple has significantly improved the underlying software, they still have a long way to go with the operating systems that are treated as less important, like tvOS on the Apple TV. I get that there is only so much time that Apple is willing to spend on an ancillary platform, but it would be nice to see it actually get some attention.
Today, Pixelmator has announced that it has agreed to be acquired by Apple. From the brief posting:
Today we have some important news to share: the Pixelmator Team plans to join Apple.
We’ve been inspired by Apple since day one, crafting our products with the same razor-sharp focus on design, ease of use, and performance. And looking back, it’s crazy what a small group of dedicated people have been able to achieve over the years from all the way in Vilnius, Lithuania. Now, we’ll have the ability to reach an even wider audience and make an even bigger impact on the lives of creative people around the world.
Regarding any immediate changes, the post states:
Pixelmator has signed an agreement to be acquired by Apple, subject to regulatory approval. There will be no material changes to the Pixelmator Pro, Pixelmator for iOS, and Photomator apps at this time. Stay tuned for exciting updates to come.
My Thoughts
This could be huge in many respects. I suspect there are two possible things that we can see. The first is that once the deal closes, I suspect that many of Pixelmator’s features could be incorporated into Apple’s own Photos app. Furthermore, I could see Apple utilizing Pixelmator as a means of testing out early Apple Intelligence features, particularly within the Photomator app, given that the purpose of that app is to allow you to edit your photos in a non-destructive manner. By using this approach, they could test out new AI features faster before incorporating them into the main Photos app.
The second outcome is a bit different. There are other companies, particularly Adobe, which have artificial intelligence photo enhancement tools already incorporated into their products. Apple likely needs something that can compete. While Apple could absolutely build something, it would take some time. It would be faster to acquire an existing product, and Pixelmator is likely that product.
I can honestly see Pixelmator and Photomator quickly becoming the new “Image Playgrounds” apps. It is undoubtedly an undertaking to incorporate Apple’s image generation tools into Pixelmator and/or Photomator, but it would likely be much less of an expense than building out an entirely new app. I could then easily see Apple providing these two apps for free with basic features, but then having the Pixelmator and/or Photomator subscriptions serve as the basis for more advanced photo features powered by Apple Intelligence.
Undoubtedly, it will be interesting to see how Apple incorporates the apps into their own product suite, or what they end up doing with Pixelmator in the long run.
Today Apple has unveiled the final new release related to the Mac, this time the MacBook Pro. As expected the new MacBook Pros have the M4, M4 Pro, and the newly unveiled M4 Max.
Display and Camera
At the top of the display is the notch and within the notch is the camera. There is a new 12 Megapixel Center Stage camera. Center Stage is intended to keep you and everyone else around you in frame as much as possible. This camera also supports Desk View, so you can display what is happening on your physical desktop while in a FaceTime call.
The display on the MacBook Pro is a Liquid Retina XDR display. It has always come with a glossy finish, but that now changes. There is now a Nano Texture option. Much like the other Nano Texture displays, this is designed to reduce glare in bright light situations. This will cost an extra $150, but if you are frequently in areas with bright light, it might be worth looking at.
M4, M4 Pro, and M4 Max
Logos for the M4, M4 Pro, and M4 Max
The MacBook Pros are powered by Apple Silicon and can be configured with three different processors, the M4, the M4 Pro, and the M4 Max. There are a few configuration options for each model.
M4
The M4 comes in a single 10-core CPU and 10-core GPU configuration. This can be configured with 16GB, 24GB, or 32GB of memory. The base model comes with 512GB of storage and can be configured with either 1TB or 2TB of storage. The maximum memory bandwidth for the M4 is 120 gigabytes per second.
According to Apple, the MacBook Pro with M4 delivers:
- Up to 7x faster image processing in Affinity Photo when compared to the 13‑inch MacBook Pro with Core i7, and up to 1.8x faster when compared to the 13-inch MacBook Pro with M1.
- Up to 10.9x faster 3D rendering in Blender when compared to the 13‑inch MacBook Pro with Core i7, and up to 3.4x faster when compared to the 13‑inch MacBook Pro with M1.
- Up to 9.8x faster scene edit detection in Adobe Premiere Pro when compared to the 13‑inch MacBook Pro with Core i7, and up to 1.7x faster when compared to the 13‑inch MacBook Pro with M1.
M4 Pro
The M4 Pro comes in two variants: a 12-core CPU with a 16-core GPU, or a 14-core CPU with a 20-core GPU. Both models come with 24GB of unified memory and can be configured with 48GB. The M4 Pro models come with 512GB of storage, and can be configured with 1TB, 2TB, or 4TB of storage. The maximum memory bandwidth for the M4 Pro is 273 gigabytes per second.
According to Apple, the MacBook Pro with M4 Pro delivers:
- Up to 4x faster scene rendering performance with Maxon Redshift when compared to the 16-inch MacBook Pro with Core i9, and up to 3x faster when compared to the 16-inch MacBook Pro with M1 Pro.
- Up to 5x faster simulation of dynamical systems in MathWorks MATLAB when compared to the 16-inch MacBook Pro with Core i9, and up to 2.2x faster when compared to the 16-inch MacBook Pro with M1 Pro.
- Up to 23.8x faster basecalling for DNA sequencing in Oxford Nanopore MinKNOW when compared to the 16-inch MacBook Pro with Core i9, and up to 1.8x faster when compared to the 16-inch MacBook Pro with M1 Pro.
M4 Max
The M4 Max is a brand-new chip, unveiled today. Much like the M4 Pro, the M4 Max comes in two variants. The first is a 14-core CPU with a 32-core GPU. This can only be configured with 36GB of unified memory. This memory has a maximum bandwidth of 410 gigabytes per second, which is nearly 3.5x the memory bandwidth of the M4, and 1.5x that of the M4 Pro.
The second variant is a 16-core CPU with a 40-core GPU. This starts at 48GB of unified memory, but can be configured with 96GB or 128GB. The memory bandwidth in this model is 546 gigabytes per second, which is 4.5x that of the M4, 2x that of the M4 Pro, and 1.33x more than the 14-core M4 Max version.
Both M4 Max variants come with 1TB of storage, but can be configured for 2TB, 4TB, or even 8TB of storage, depending on needs.
And the MacBook Pro with M4 Max enables:
- Up to 7.8x faster scene rendering performance with Maxon Redshift when compared to the 16-inch MacBook Pro with Intel Core i9, and up to 3.5x faster when compared to the 16-inch MacBook Pro with M1 Max.
- Up to 4.6x faster build performance when compiling code in Xcode when compared to the 16‑inch MacBook Pro with Intel Core i9, and up to 2.2x faster when compared to the 16‑inch MacBook Pro with M1 Max.
- Up to 30.8x faster video processing performance in Topaz Video AI when compared to the 16‑inch MacBook Pro with Intel Core i9, and up to 1.6x faster when compared to the 16-inch MacBook Pro with M1 Max.
Connectivity and Ports
Ports on the 14-inch MacBook Pro with M4
Similar to the M4 Mac mini, there is a difference in ports between the M4 and the M4 Pro models, not in the number, but in the USB-C ports themselves. For the M4, you get three Thunderbolt 4 ports at up to 40 gigabits per second, while the M4 Pro and M4 Max devices come equipped with three Thunderbolt 5 ports at up to 120 gigabits per second. This is the same setup as the Mac mini with M4 and M4 Pro.
The number of displays supported varies depending on the M4 version. The M4 and M4 Pro can support up to two external displays up to 6K at 60Hz over Thunderbolt, or one display up to 6K at 60Hz over Thunderbolt and one display up to 4K at 144Hz over HDMI. The HDMI port is also capable of supporting one display at 8K resolution at 60Hz, or one display at 4K at 240Hz.
The M4 Max can drive up to four external displays: three displays up to 6K at 60Hz over Thunderbolt, and one up to 4K at 144Hz over HDMI. Alternatively, you can have two external displays up to 6K resolution at 60Hz over Thunderbolt, and either one external display up to 8K resolution at 60Hz, or one display up to 4K at 240Hz, on the HDMI port.
Along with the Thunderbolt ports, you also get an SDXC card reader, a dedicated HDMI port, and a 3.5mm headphone jack.
The Wi-Fi in all models is Wi-Fi 6E and support for Bluetooth 5.3 is also included.
Pricing and Availability
The M4 line of MacBook Pros comes in the same two sizes of 14-inch and 16-inch. The pricing differs for each model and chip. For the 14-inch, you can get an M4 model starting at $1599. The M4 Pro model starts at $1999, and the M4 Max starts at $3199.
The 16-inch starts at $2499 for the M4 Pro with 14-Core CPU, 20-Core GPU, 24GB of unified memory, and 512GB of storage. The 16-inch M4 Max version starts at $3499 for a 14-core CPU with a 32-Core GPU, 36GB of unified memory, and 1TB of storage.
All of the M4-line of MacBook Pros are available to order today and will be available starting November 8th.
Closing Thoughts
The MacBook Pros continue to be the workhorses of the Apple laptops. Many users do a ton of work on these devices, and with the M4 processors they should be able to accomplish even more than before. The new M4 Max adds even more horsepower to the laptops and is a welcome upgrade. The lineup is a bit strange, but for today’s modern Apple, it makes sense because it is not too dissimilar to the iPhone Pro line of devices. If you have an Intel-based MacBook Pro, now would be a great time to update.
Today Apple has unveiled a new Mac mini that has the M4. This is not just a spec bump, but it includes a couple of new features, chief amongst them is a new form factor.
Form Factor
The Mac mini was introduced in 2005 and was a smaller version of the Mac, hence the name Mac mini. The Mac mini was 6.5 inches wide, 6.5 inches deep, and 2 inches tall. This remained the form factor until 2011, when a new Unibody version was introduced, one that eliminated the internal disc drive. This Mac mini was physically larger at 7.7 inches wide and 7.7 inches deep, but only 1.4 inches tall. All Mac minis introduced since 2011 have had the exact same physical footprint, including the M1 and M2 Mac minis. This all changes with the M4.
In 2022 Apple introduced a whole new machine, the Mac Studio. This took some of the design elements from the Mac mini but expanded them. The M1 and M2 Mac Studios were 7.7 inches wide and 7.7 inches deep, but were significantly taller at 3.7 inches.
The M4 Mac mini takes some design cues from the Apple TV. The M4 Mac mini is 5 inches wide, has a 5 inch depth, and is only 2 inches tall. This means that it is smaller than the previous Mac mini, but still a bit larger than an Apple TV. Before we dive into the ports, let us look at the processor.
M4 and M4 Pro
The Mac mini has come with a variety of processors. The previous M2 Mac mini was available in both M2 and M2 Pro variants. The same continues for the M4 Mac mini, with the M4 and M4 Pro. The M4 consists of a 10-core CPU, with 4 performance cores and 6 efficiency cores, and a 10-core GPU. According to Apple, the M4 Mac mini is significantly faster than the M1 Mac mini. Specifically,
When compared to the Mac mini with M1, Mac mini with M4:
- Performs spreadsheet calculations up to 1.7x faster in Microsoft Excel.
- Transcribes with on-device AI speech-to-text up to 2x faster in MacWhisper.
- Merges panoramic images up to 4.9x faster in Adobe Lightroom Classic.
The M4 Pro has two configurations: a 12-core CPU, with 8 performance cores and 4 efficiency cores, paired with a 16-core GPU; or a 14-core CPU, with 10 performance cores and 4 efficiency cores, paired with a 20-core GPU. From Apple’s press release:
When compared to the Mac mini with M2 Pro, Mac mini with M4 Pro:
- Applies up to 1.8x more audio effect plugins in a Logic Pro project.
- Renders motion graphics to RAM up to 2x faster in Motion.
- Completes 3D renders up to 2.9x faster in Blender.
All M4 and M4 Pro models have a 16-core Neural engine for machine learning and Apple Intelligence tasks.
Ports
Back view of the M4 Mac mini
The M4 Mac mini has a total of 7 ports: an ethernet jack, an HDMI port, and 5 USB-C ports. Of these, two are on the front, much like the Mac Studio, and three are on the back. The two on the front are USB-C with USB 3 speeds up to 10 gigabits per second. The three ports on the back are Thunderbolt/USB 4 ports. For the M4 models, these are Thunderbolt 4 ports, which can deliver data at up to 40 gigabits per second. The M4 Pro devices have Thunderbolt 5 ports, which can deliver a whopping 120 gigabits per second. The USB portion can deliver up to 40 gigabits per second.
The difference in Thunderbolt ports does mean that there is a difference in DisplayPort compatibility. The Thunderbolt 4 ports support DisplayPort 1.4 while the Thunderbolt 5 ports support DisplayPort 2.1. The HDMI port on either model can support one display with 8K resolution at 60Hz, or 4K resolution at 240Hz.
By default the ethernet port is a gigabit port, but you can opt for a 10-gigabit port for $100 more. The Mac mini has long had a headphone jack, and it is still present on all models of the M4 Mac mini.
Pricing and Availability
The M4 Mac mini starts at $599 for 16GB of unified memory and 256GB of storage. You can configure the M4 models with 24GB or 32GB of memory, and up to 2TB of storage.
The M4 Pro Mac mini starts at $1399 for a 12-core CPU and 16-core GPU, 24GB of unified memory, 512GB of storage. You can configure the M4 Pro Mac mini with 48GB or 64GB of unified memory, and 1TB, 2TB, 4TB, or 8TB of storage.
The M4 Mac mini is available for pre-order today and will be available for delivery and in store on Friday November 8th.
Closing Thoughts
While other devices have received a redesign specifically for the lower power usage of Apple Silicon, the Mac mini was not one of them. The Mac mini has finally received its redesign. The smaller form factor takes cues from both the Mac Studio and Apple TV. The M4 and M4 Pro should be great upgrades for anyone who has an Intel Mac, and if you are upgrading from the M1, it will still be a solid update.
Today Apple unveiled a new iMac, one powered by the M4. While it might seem like a small update from the M3, there are a number of improvements, including the M4, ports, and colors, just to name a few items.
M4
The 24-inch iMac is powered by the M4 chip. This comes in two processor configurations, an 8-core CPU with 8-Core GPU model, and a 10-Core CPU with 10-Core GPU model. According to Apple, the M4 iMac is up to 1.7x faster for daily productivity and up to 2.1x faster for graphics editing and gaming; at least when you compare it to the M1 iMac.
Display
The size of the iMac has not changed, but there is a new nano-texture display option. This is similar to the nano-texture displays on the iPad Pro and the Apple Studio Display. It is an option that will cost $200 more, and it is only available on the 10-core models.
Beyond this, there is a new 12-megapixel Center Stage camera, which should provide even better quality. This camera is also capable of providing Desk View, the ability to show your desk while in a video call; the previous iMac could not provide this functionality.
Colors
Orange M4 iMac with matching keyboard and mouse
The 24-inch iMac has come in a variety of colors. The available colors have been updated. There are seven options:
- Silver
- Blue
- Purple
- Pink
- Orange
- Yellow
- Green
Unlike the previous model, all of the colors are available with any processor choice. There is a difference depending on the model, and that is with the ports. To go with the new colors, there are new color-matched accessories, including the Magic Keyboard with Touch ID, Magic Trackpad, and Magic Mouse. These all now have USB-C cables, instead of the previous Lightning. Beyond the port change, the design and port locations have not changed at all.
Ports and Connectivity
Depending on the processor, you will get either two or four ports. The 8-core CPU model has two Thunderbolt/USB 4 ports. The 10-core CPU models have four Thunderbolt 4 ports. All of the iMacs have Wi-Fi 6E and Bluetooth 5.3. The four Thunderbolt 4 ports mean that you can have up to two 6K external displays, which is an improvement over the M3 model, which only supported one external 6K monitor.
Back of the Green M4 iMac with two-ports
Pricing
There are actually four different configuration options available. These starting configuration options are:
- 8-Core CPU with 8-Core GPU, 16GB of unified memory, and 256GB of storage - $1299
- 10-Core CPU with 10-Core GPU, 16GB of unified memory, and 256GB of storage - $1499
- 10-Core CPU with 10-Core GPU, 16GB of unified memory, and 512GB of storage - $1699
- 10-Core CPU with 10-Core GPU, 24GB of unified memory, and 256GB of storage - $1899
You can configure the 10-Core models with up to 32GB of unified memory and up to 2TB of storage. The 10-Core models also come with Ethernet, whereas the 8-core model is Wi-Fi only, but you can add Ethernet to that model for $30.
Closing Thoughts
You can pre-order the new iMac today and they will be available starting on Friday, November 8th. If you are looking for a new iMac, now is the time to upgrade, particularly if you have an Intel machine, or want to upgrade from an M1 iMac.
Technology is consistently entertaining new crazes. Some examples include blockchain, subscription juicers, netbooks, 3D televisions, hyperloop, and "hoverboards", just to name a handful of examples. All of these were going to be "the next big thing", but none of these have panned out as the inventors intended.
There has been a term bandied about that people think may be the end-all for computers. Said term is "Artificial Intelligence", or "AI". The term "AI" can mean a variety of different things, depending on whom you ask. However, when most use the term AI, what they are expecting is a fully conscious and sentient entity that can think, act, and rationalize as a human would. This is called "Artificial General Intelligence". Today's technology is nowhere close to achieving this. It is not yet known whether or not Artificial Intelligence will actually live up to its ultimate expectations.
The term "Artificial Intelligence" can garner a number of thoughts, and depending on who you ask, these can range from intrigue, worry, elation, or even skepticism. Humans have long wanted to create a machine that can think like a human, and this has been depicted in media for a long time. Frankenstein is an example where a machine is made into a human and then is able to come to life . Another great example is Rosie from the 1960s cartoon The Jetsons. In case you are not aware, The Jetsons is a fictional animated tv show that depicts the far future where there are flying cars, and one of the characters, Rosie, is an robot that can perform many household tasks, like cleaning and cooking.
We, as a society, have come a long way toward creating modern "artificial intelligence", but we are still nowhere near creating a robot that is anything close to human. Today's modern artificial intelligence falls into a number of categories, in terms of its capabilities, but it is still a long way off from the idealistic depiction that many expect artificial intelligence to be.
Artificial Intelligence comes in a variety of forms. This includes automated cleaning robots, automated driving, text generation, image generation, and even code completion. There are many companies that are attempting to create mainstream artificial intelligence, but as far as we know, nobody has fully succeeded.
Apple is one of those companies, but they are taking a different approach with their service called Apple Intelligence. Apple Intelligence is Apple's take on artificial intelligence. Apple Intelligence differs in a number of ways from standard "artificial intelligence". This includes the use of on-device models, private cloud computing, and personal context. Before we delve into each of those, let us look at artificial intelligence, including a history.
Artificial Intelligence
Artificial intelligence is not a new concept. You may think that it is a modern thing, but in fact, it harkens back to World War II and Alan Turing. Turing is known for creating a machine that could crack the German Enigma codes. In 1950, Turing released a paper which was the basis of what is known as the "Turing Test", a test of whether a machine can exhibit intelligent behavior that is indistinguishable from that of a human.
There have been a number of enhancements to artificial intelligence in recent years, and many of the concepts that have been used for a while have come into more common usage. Before we dive into some aspects of artificial intelligence, let us look at how humans learn.
How Human Brains Operate
In order to be able to attempt to recreate the human brain in a robot, we first need to understand how a human brain works. While we have progressed significantly in this, we are still extremely far from fully understanding how a human brain functions, let alone recreating one.
Even though we do not know everything about the brain, there is quite a bit of information that we do know. Human brains are great at spotting patterns, and the way that this is done is by taking in large amounts of data, parsing that data, and then identifying a pattern. A great example of this is when people look at clouds. Clouds come in a variety of shapes and sizes, and many people attempt to find recognizable objects within the clouds. Someone is able to accomplish this by taking their existing knowledge, looking at the cloud, determining if there is a pattern, and if there is one, identifying the object.
When a human brain is attempting to identify an object, what it is doing is going through all of the objects (animals, plants, people, shapes, etc.) that they are aware of, quickly filtering them, and seeing if there is a match.
The human brain is a giant set of chemical and electrical synapses that connect to produce consciousness. The brain is commonly called a neural network due to the network of neural pathways. According to researchers, humans are able to update their knowledge. In a technical sense, what is happening is that the weights of the synaptic connections that are the basis of our neural network brain are updated. As we go through life, our previous experiences will shape our approach to things. Beyond this, it can also affect how we feel about things in a given moment, again, based upon our previous experiences.
This approach is similar to how artificial intelligence operates. Let us look at that next.
How Artificial Intelligence Works
The current way that artificial intelligence works is by allowing you to specify an input, or prompt, and having the model create an output. The output can be text, images, speech, or even just a decision. All artificial intelligence is based on what is called a Neural Network.
A Neural Network is a machine learning algorithm that is designed to make a decision. The manner in which this is done is by processing data through various nodes. Nodes generally belong to a single layer, and for each neural network, there are at least two layers: an input layer and an output layer.
Each node within a neural network is composed of three different things: weights, thresholds (also called a bias), and an output. Data goes into the node, the weights and thresholds are applied, and an output is created. For a node to actually come to a determination, it needs training, or what a human might call knowledge.
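To make that concrete, here is a minimal sketch of a single node in Swift. The weights, bias, and activation function are purely illustrative values, not anything taken from a real model:

```swift
import Foundation

// A single neural network node: weighted inputs, plus a bias,
// passed through an activation function to produce an output.
struct Node {
    var weights: [Double]
    var bias: Double

    // Sigmoid activation squashes the weighted sum into the range 0...1.
    func activate(_ inputs: [Double]) -> Double {
        var weightedSum = bias
        for (input, weight) in zip(inputs, weights) {
            weightedSum += input * weight
        }
        return 1.0 / (1.0 + exp(-weightedSum))
    }
}

// Illustrative values only: two inputs with hand-picked weights.
let node = Node(weights: [0.8, -0.4], bias: 0.1)
print(node.activate([1.0, 0.5]))  // Prints a value between 0 and 1
```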
Training
Humans have a variety of ways of learning something that can include family, friends, media, books, TV shows, audio, and just exploring. Neural Networks cannot be trained this way. Instead, neural networks need to be given a ton of data in order to be able to learn.
Each node within a neural network produces an output and sends it to another node, which produces its own output, and the process continues until a result is determined. Each time a result is determined, a positive or negative correlation is recorded. Much like a human, the more positive connections that are made, the better, and eventually the positive correlation between an answer and the result will push away the negative connections. Once it has made enough positive correlations (gotten the right answer), it will eventually be trained.
There are actually two types of training: Supervised Learning and Reinforcement Learning.
Supervised Learning is the idea of feeding a training model labeled data so that it can learn the rules and provide the proper output. Typically, this is done using one of two methods: classification or regression. Classification is pretty simple to understand. Let us say that you have 1000 pictures: 500 dogs and 500 cats. You provide the training model with each photo individually, and you tell it the type of pet in each image.
Reinforcement learning is similar, but different. In this scenario, let us say you have the same 1000 pictures, again 500 dogs and 500 cats. But instead of telling the model up front what each one is, you let it make a guess and give it feedback on whether it was right or wrong; as it continues to get them right, that feedback reinforces what it already knows.
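As a rough illustration of the supervised case, the sketch below trains a single node on labeled examples using a simple perceptron-style update. The features and labels are invented stand-ins for illustration; real image classifiers learn from far richer data:

```swift
import Foundation

// Labeled training data: each example is a feature vector plus a label
// (0 = cat, 1 = dog). The two features are hypothetical stand-ins,
// e.g. "ear pointiness" and "snout length", purely for illustration.
let examples: [(features: [Double], label: Double)] = [
    ([0.9, 0.2], 0), ([0.8, 0.3], 0),   // cats
    ([0.2, 0.9], 1), ([0.3, 0.8], 1)    // dogs
]

var weights = [0.0, 0.0]
var bias = 0.0
let learningRate = 0.1

// Supervised learning: compare the prediction to the known label
// and nudge the weights toward the correct answer.
for _ in 0..<1000 {
    for example in examples {
        var sum = bias
        for (feature, weight) in zip(example.features, weights) {
            sum += feature * weight
        }
        let prediction = sum >= 0 ? 1.0 : 0.0
        let error = example.label - prediction
        for i in weights.indices {
            weights[i] += learningRate * error * example.features[i]
        }
        bias += learningRate * error
    }
}

print(weights, bias)  // Weights learned from the labeled examples
```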
Inference
Inference, in reference to artificial intelligence, is the process of applying a trained model to a set of data. The best way to test a model is to provide it with brand-new data and see whether it can infer the correct result.
Artificial Intelligence works by taking the new data as input and applying the weights, also known as parameters, that are stored in the model to that data.
Inference is not free; it does have a cost, particularly when it comes to energy usage. This is where optimizations can be useful. As an example, Apple will utilize the Neural Engine as much as possible for its on-device inference, because the Neural Engine is optimized to perform inference tasks while minimizing the amount of energy needed.
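On Apple platforms, a developer opting into that kind of optimized on-device inference typically goes through Core ML, which can prefer the Neural Engine when a model supports it. Here is a minimal sketch; the model file name is hypothetical, and this assumes a compiled Core ML model already exists at that path:

```swift
import CoreML

// Ask Core ML to run inference on the CPU and Neural Engine, avoiding
// the GPU; Core ML decides where each part of the model actually runs.
let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuAndNeuralEngine

// "PetClassifier.mlmodelc" is a placeholder for a compiled Core ML model.
let modelURL = URL(fileURLWithPath: "PetClassifier.mlmodelc")

do {
    let model = try MLModel(contentsOf: modelURL, configuration: configuration)
    print(model.modelDescription)
} catch {
    print("Could not load model: \(error)")
}
```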
Artificial Intelligence Use Cases
No tool is inherently good or inherently bad; the tool is the tool. It is how it is used that determines whether its use is positive or negative. Artificial Intelligence is no different. Artificial intelligence can have a wide range of possible use cases. Current artificial intelligence is capable of performing actions related to detecting cancer, synthesizing new drugs, detecting brain signals in amputees, and much more. These are all health-related, which is where many artificial intelligence models are thriving at the moment, but that is not all that is possible.
Not all artificial intelligence usage is positive. There are many who will want to make what are called "Deep Fakes". A deep fake is a way of taking someone and either placing them in a situation where they never were, or even making them say something that they have never said. This is not new, not by a long shot. Since the inception of photos, there have always been manipulations. This is designed to influence someone into thinking a particular way. As you might guess, this can have detrimental effects because it distorts reality. While there are those who want to use these for nefarious purposes, there can be some positive use cases for this type of technology.
Back in 2013, country music artist Randy Travis suffered a stroke and, as a result, now suffers from aphasia, which, according to the Mayo Clinic, is "a disorder that affects how you communicate." This effectively left him unable to perform. However, in May of 2024, a brand-new Randy Travis song was released using artificial intelligence that used two proprietary AI models to help create the song. This was done with full permission from Randy Travis himself, so there is no issue there.
Let us look at a couple of different approaches used, including Large Language Models and Image Generators.
Large Language Models
Large language models, or LLMs, are those that are able to generate language that a human would understand. To quote IBM:
"In a nutshell, LLMs are designed to understand and generate text like a human, in addition to other forms of content, based on the vast amount of data used to train them. They have the ability to infer from context, generate coherent and contextually relevant responses, translate to languages other than English, summarize text, answer questions (general conversation and FAQs), and even assist in creative writing or code generation tasks." - Source: IBM.
LLMs can be used for generating, rewriting, or even changing the tone of text. The reason that this is possible is that most languages have pretty rigid rules, and it is not a complex task to calculate the probability of what the next word in a sentence will be.
The way that an LLM is trained is by consuming vast amounts of text. It then recognizes patterns from this data and then it can generate text based upon what it has learned.
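At a toy scale, the idea of predicting the next word from patterns seen in training text can be sketched like this. A real LLM uses a neural network with billions of parameters rather than simple counts, so this is only a conceptual illustration with made-up training text:

```swift
import Foundation

// Count how often each word follows another in the training text.
let trainingText = "the cat sat on the mat the cat ate the food"
let words = trainingText.split(separator: " ").map { String($0) }

var followerCounts: [String: [String: Int]] = [:]
for (current, next) in zip(words, words.dropFirst()) {
    followerCounts[current, default: [:]][next, default: 0] += 1
}

// Predict the next word as the most probable follower seen in training.
func predictNextWord(after word: String) -> String? {
    followerCounts[word]?.max(by: { $0.value < $1.value })?.key
}

print(predictNextWord(after: "the") ?? "unknown")  // "cat": it follows "the" most often
```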
Image Generation
One of the uses of modern artificial intelligence is the ability to create images. Similar to LLMs, there are image generation models that have been trained on a massive number of images. This data has been used to train the models which are used for the actual image generation. Depending on the model, you may be able to generate various types of images, ranging from cartoons to completely realistic ones.
Image generation models use a technique called Generative Adversarial Networks, or GANs. A GAN works by using two different algorithms, the generator and the discriminator, that work in tandem. The generator will output a bunch of random pixels as an image and then send it over to the discriminator. The discriminator, which has knowledge of millions of pictures of the thing you are trying to generate, will provide a result, which is basically a "Yes" or "No". If it is a "No", then the generator will try again and again.
This back and forth is what is called an "adversarial loop" and this loop continues until the generator is able to generate something that the discriminator will say matches the intended type of image.
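The sketch below mimics that adversarial loop at a toy scale: the "images" are just single numbers, the generator guesses values, and the discriminator accepts only values close to what it learned from the "real" data. It illustrates only the structure of the loop, not how real GANs are actually trained:

```swift
import Foundation

// "Real" data the discriminator knows about: values clustered near 10.
let realSamples = [9.8, 10.1, 10.0, 9.9, 10.2]
let realMean = realSamples.reduce(0, +) / Double(realSamples.count)

// Discriminator: answers "yes" if a candidate looks like the real data.
func discriminator(_ candidate: Double) -> Bool {
    abs(candidate - realMean) < 0.5
}

// Generator: starts with a random guess and keeps trying until accepted.
var guess = Double.random(in: 0...20)
var attempts = 0
while !discriminator(guess) && attempts < 10_000 {
    // Nudge the guess toward the accepted region, with a little randomness.
    guess += (realMean - guess) * 0.1 + Double.random(in: -0.2...0.2)
    attempts += 1
}

print("Accepted \(guess) after \(attempts) attempts")
```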
The training behind many modern image generators is quite interesting. A related technique, called diffusion, starts with an image and then purposely introduces noise into it, again and again, over a large number of steps. The model then learns to reverse that process, and that learned reversal of noise is what the generator builds new images from.
All of this is a good base for looking at what Apple has in store for its own artificial intelligence technologies, so let us look at that now.
Apple and Artificial Intelligence
You might think that Apple is late to the artificial intelligence realm, but in fact, Apple has been working with artificial intelligence for many years; it has just been called something else. Some of the areas where Apple has been using artificial intelligence have been with Photos, Siri, Messages, and even auto-correct.
Apple Intelligence
As mentioned above, Apple Intelligence is Apple's take on artificial intelligence. Apple Intelligence differs from standard artificial intelligence in that Apple intelligence is designed to work on YOUR information, not on general knowledge. The primary benefit of working on your data is that your data can remain private. This is done using on-device models.
On-Device Requests
A vast majority of Apple Intelligence requests will be performed on your device. There are a number of examples of this, including things like:
"Find me pictures of [someone] while in London."
"When is Mom's flight landing?"
Apple has been doing a lot of research into machine learning models that can run on-device. This has meant keeping the machine learning models at the same level of quality while making them usable on devices with limited amounts of memory. Limited, of course, is relative. We are not talking 1GB of RAM, but more like 8GB.
The reason that Apple wants to do much of the processing on your device is twofold. The first is response time. By having devices handle requests, they can be almost instantaneous. This is also quite beneficial for those times when you may not have connectivity. Beyond this, sending all of your requests to the cloud would introduce some delay, even with a direct connection and incredibly fast connection speeds.
The second reason is privacy. Privacy is a big part of Apple's core beliefs. When using your own device and processing the request on the device, that means that nobody else will get access to your data, not even Apple. Instead, only you will have access to your data, which is great for your own peace of mind.
Even though as much as possible will be done on your own devices, there may be instances when your device is not able to handle your request locally. Instead, it may need to be sent to the cloud. This can be needed for larger models that require additional memory or processing to be done. If this is needed, it is handled automatically by sending it to Apple's Private Cloud Compute platform. Let us look at that next.
Private Cloud Compute
Nobody wants their data to get out of their control, yet it does happen from time to time. Apple takes data privacy seriously and has done a lot to help keep people's data private. This is in contrast to other artificial intelligence companies, who have no compunction to take user data and use it to train their machine learning models.
Apple has been working on reducing the size and memory requirements of many machine learning models. They have accomplished quite a bit, but right now there are some machine learning models that are simply too large, requiring more memory than devices are capable of having. In these instances, it may be necessary to use the cloud to handle requests.
Apple has 1.2 billion users, and while not all of the users will utilize Apple Intelligence immediately, Apple still needs to scale up Apple Intelligence to support all users who will be using it. In order to make this happen, Apple could just order as many servers as they want, plug them in, and make it all work. However, that has its own set of tradeoffs. Instead, Apple has opted to utilize their own hardware, create their own servers, and make things as seamless as possible for the end user, all while protecting user data.
Private Cloud Compute is what powers online requests for Apple Intelligence. Private Cloud Compute runs in Apple's own data centers. Private Cloud Compute is powered by a series of nodes. Each of these nodes uses Apple Silicon to process requests. These are not just standard Macs; they have been heavily customized.
Nodes
Each Private Cloud Compute node undergoes significant quality checks in order to maintain integrity. Before the node is sealed and its tamper switch activated, each component undergoes a high-resolution scan to make sure that it has not been modified. After the node has been shipped and arrives at an Apple data center, it undergoes another verification to make sure it still remains untouched. This process is handled by multiple teams and overseen by a third party who is not affiliated with Apple. Once verification has been completed, the node is deployed, and a certificate is issued for the keys embedded in the Secure Enclave. Once the certificate has been created, it can be used.
Request Routing
Protecting the node is just the first step in securing user data. In order to protect user data, Apple uses what is called "target diffusion". This is a process of making sure that a user's request cannot be sent to a specific node based on the user or its content.
Target diffusion begins with the metadata of the request. User-specific data, as well as the identity of the source device, is stripped out of this information. The metadata is used by the load balancers to route the request to the appropriate model. In order to limit what is called a "replay attack", each request has a single-use credential, which is used to authorize requests without tying them to a specific user.
All requests are routed through an Oblivious HTTP, or OHTTP, relay, managed by a third-party provider, which hides the device's source IP address well before it ever reaches the Private Cloud Compute node. This is similar to how Private Relay works, where the actual destination server never knows your true IP address. In order to steer a request based on source IP, both Apple's Load Balancer as well as the HTTP relay would need to be compromised; while possible, it is unlikely.
User Requests
When a user's device makes a request, it is not sent to the entire Private Cloud Compute service as a whole; instead, pieces of the request are routed to different nodes by the load balancer. The response that is sent back to the user's device will specify the individual nodes that should be ready to handle the inference request.
When the load balancer selects which nodes to use, an auditable trail is created. This is to protect against an attack where an attacker compromises a node and manages to obtain complete control of the load balancer.
Transparency
When it comes to privacy, one could say, with confidence, that Apple does what they say they are doing. However, in order to provide some transparency and verification, Apple is allowing security researchers the ability to inspect software images. This is beyond what any other cloud company is doing.
In order to make sure there is transparency, each production build of Apple's Private Cloud Compute software will be appended to an append-only log. This will allow verification that the software that is being used is exactly what it claims to be. Apple is taking some additional steps. From Apple's post on Private Cloud Compute:
Our commitment to verifiable transparency includes:
1. Publishing the measurements of all code running on PCC in an append-only and cryptographically tamper-proof transparency log.
2. Making the log and associated binary software images publicly available for inspection and validation by privacy and security experts.
3. Publishing and maintaining an official set of tools for researchers analyzing PCC node software.
4. Rewarding important research findings through the Apple Security Bounty program.
This means that should an issue be found, Apple can be notified before it becomes a problem, take actions to remedy it, and release new software, all in an attempt to keep user data private.
Privacy
When a request is sent to Apple's Private Cloud Compute, only your device and the server can communicate. Your data is sent to the server, processed, and returned to you. After the request is complete, the memory on the server is wiped so your data cannot be retrieved. This includes wiping the cryptographic keys on the data volume. Upon reboot, these keys are regenerated and never stored. The result of this is that no data can be retrieved because the cryptographic keys are sufficiently random that they could never be regenerated.
Apple has gone to extensive lengths to make sure that nobody's data can be compromised. This includes removing remote access features for administration, high-resolution scanning of the Private Cloud Compute node before it is sealed, and making sure that requests cannot be routed to specific nodes, which may allow someone to compromise data. Beyond this, when a Private Cloud Compute node is rebooted, the cryptographic keys that run the server are completely regenerated, so any previous data is no longer readable.
For even more detail, be sure to check out Apple's blog post called "Private Cloud Compute" available at https://security.apple.com/blog/private-cloud-compute.
General World Knowledge
Apple Intelligence is designed to work on your private data, but there may be times when you need to go beyond your own data and use general world knowledge. This could be something like asking for a recipe for some ingredients you have, or it could be a historical fact, or even to confirm some existing data.
Apple Intelligence is not capable of handling these types of requests. Instead, you will be prompted to send these types of requests off to third parties, like OpenAI's ChatGPT. When you are prompted to use one of these, you will need to confirm that you want to send your request and that your private information (for that specific request) will be sent to the third party.
At launch, only OpenAI's ChatGPT will be available. However, there will be more third-party options coming in the future. This type of arrangement is a good escape valve should you need to get some information that is not within your own private data. Now that we have covered what Private Cloud Compute is, let us look at what it will take to run Apple Intelligence.
Minimum Requirements
Apple Intelligence does require a minimum set of requirements in order to be used. Apple Intelligence will work on the following devices:
- iPhone 16 Pro/Pro Max (A18 Pro)
- iPhone 16/16 Plus (A18)
- iPhone 15 Pro/Pro Max (A17 Pro)
- iPad mini (A17 Pro)
- iPad Pro (M1 and later)
- iPad Air (M1 and later)
- MacBook Air (M1 and later)
- MacBook Pro (M1 and later)
- Mac mini (M1 and later)
- Mac Studio (M1 Max and later)
- Mac Pro (M2 Ultra and later)
There are a couple of reasons why these are the devices that can be used. The first is that Apple Intelligence requires a Neural Engine. For the Mac, this was not present until 2020, when the first Macs with Apple Silicon were released. For the iPhone, the first Neural Engine appeared with the A11 Bionic chip on the iPhone 8, 8 Plus, and iPhone X. All iPhones since have included a Neural Engine, but that is just one requirement.
The second requirement is the amount of memory. The minimum amount of memory to run the on-device models is 8 gigabytes. The iPhone 15 Pro and iPhone 15 Pro Max are the first iPhones to come with 8GB of memory. All M1 Macs have had at least 8GB of memory.
Now, this is the minimum amount of memory. Not all features will work with only 8GB of memory. One example is a new feature for developers within Apple's Xcode app. With Xcode 16, developers will have the option of using Apple's Predictive Code Completion Model. When you install Xcode 16, there is an option that allows you to download the Predictive Code completion model, but only if your Mac has 16GB of memory or more. To illustrate this, if you have a Mac mini with 8GB of memory, you will get the following installation screen.
Similarly, let us say you have a MacBook Pro with 32GB of unified memory, you will get this installation screen.
As you can see, the Predictive Code Completion checkbox is not even an option on the Mac mini with 8GB of memory. And the Predictive Code Completion model covers a fairly limited amount of knowledge: Swift, while a large programming language, is limited in scope, and even that model does not work on 8GB.
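Out of curiosity, you can check how much physical memory a Mac (or any Apple device) reports with a couple of lines of Swift; the 8GB threshold below simply mirrors the stated minimum for the on-device models:

```swift
import Foundation

// Physical memory, in bytes, as reported by the system.
let memoryInBytes = ProcessInfo.processInfo.physicalMemory
let memoryInGB = Double(memoryInBytes) / 1_073_741_824  // bytes per gigabyte (2^30)

// 8GB is the stated minimum for running the on-device models.
print(String(format: "%.1f GB of memory", memoryInGB))
print(memoryInGB >= 8 ? "Meets the 8GB minimum" : "Below the 8GB minimum")
```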
It would not be presumptuous to think that this may be the case for various Apple Intelligence models going forward. Now that we have covered the minimum requirements, let us look at some of the use cases that Apple Intelligence can handle, starting with something called Genmoji.
Enabling Apple Intelligence
As outlined above, Apple Intelligence is available for compatible devices running iOS 18.1, iPadOS 18.1, or macOS Sequoia 15.1. However, Apple Intelligence is not automatically enabled. Instead, you will need to enable it. Apple Intelligence is activated on a per Apple Account basis. This only needs to be done once. Once activated, it will need to be enabled per device. To activate Apple Intelligence perform these steps:
1. Open Settings on iOS or iPadOS, or System Settings on macOS Sequoia.
2. Scroll down to "Apple Intelligence".
3. Tap, or click, on "Apple Intelligence" to bring up the settings.
4. Tap, or click, on "Join Apple Intelligence Waitlist". A popup will appear.
5. Tap on the "Join Apple Intelligence Waitlist" button to confirm you want to join the waitlist.
Screenshot of the "Join Waitlist" setting within the Apple Intelligence System Setting on macOS Sequoia 15.1
Once you do this, you will join the Apple Intelligence waitlist. It may take some time before you are able to access the features. Once your Apple Account has had Apple Intelligence activated on it, you will then get a notification on your device indicating that Apple Intelligence is ready.
At this point, you can click on the "Turn On Apple Intelligence" button, and a popup will appear that will allow you to enable the features. Once you have enabled Apple Intelligence on your device, you will be able to use the features.
Screenshot of the notification indicating that Apple Intelligence is ready to be used.
Closing Thoughts on Apple Intelligence
Many Artificial Intelligence tools require sending your private data to a server in the cloud to perform a particular task. Doing this has the potential to not only leak your private data, but your private data can also possibly be used to train additional artificial intelligence models. This is antithetical to the core values of Apple, so Apple has taken a different approach with their own artificial intelligence, which they are calling Apple Intelligence.
Apple Intelligence is designed to work on your private data and maintain that privacy. The way that this is accomplished is through a service called Private Cloud Compute. Private Cloud Compute is a set of servers in Apple's own datacenter that are built on Apple Silicon, utilizing features like the Secure Enclave to maintain the integrity of the server. Beyond this, each time that a request has been completed, the previous keys are wiped, and the server is completely reset and reinitialized with no data being retained between reboots.
Apple Intelligence is designed to help you accomplish tasks that you need, like summarizing text, generating new emojis, creating images, and more.
Apple Intelligence will be a beta feature starting in late 2024, with some overall features not coming until 2025, and it will be English only at first. Furthermore, these features will not be available in the European Union, at least not at first.
Apple Intelligence will have some pretty stiff requirements, so it will not work on all devices. In fact, you will need to have an Apple Silicon Mac, or an iPad with an M1 or newer or an A17 Pro. For the iPhone, you will need a device with an A17 Pro, A18, or A18 Pro; these devices are the iPhone 15 Pro/Pro Max, iPhone 16/16 Plus, and iPhone 16 Pro/Pro Max.
This is merely an introduction to Apple Intelligence. There will be more articles in this series, so be sure to check out those articles.
Today's modern internet is a leap forward from the start of the modern smartphone era of 2007 and 2008. Before then, particularly in the 1990s and early 2000s, if you were going somewhere that you did not know very well, you would need to print out a paper map using a site like MapQuest or Google Maps.
When Apple introduced the iPhone, one of the few apps on the phone was a mapping app, specifically Google Maps. If you were following Apple back in 2012, it is possible that you remember Apple's announcement that they would be replacing Google Maps with their own Apple Maps. If you do not remember the announcement, it is quite likely that you do remember its launch. It was lackluster, to say the least. Even though Apple Maps did not start off on the best foot, having any map app was better than no map app.
The one thing that Apple Maps has not had is a web version; you had to use your iPhone, iPad, or Mac in order to use Apple Maps. That has now changed, because today Apple announced that there is a beta of the web version of Apple Maps. Apple's announcement states:
Today, Apple Maps on the web is available in public beta, allowing users around the world to access Maps directly from their browser.
Now, users can get driving and walking directions; find great places and useful information including photos, hours, ratings, and reviews; take actions like ordering food directly from the Maps place card; and browse curated Guides to discover places to eat, shop, and explore in cities around the world. Additional features, including Look Around, will be available in the coming months.
All developers, including those using MapKit JS, can also link out to Maps on the web, so their users can get driving directions, see detailed place information, and more.
Maps on the web is currently available in English, and is compatible with Safari and Chrome on Mac and iPad, as well as Chrome and Edge on Windows PCs. Support for additional languages, browsers, and platforms will be expanded over time.
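As the announcement notes, developers can link out to Maps on the web. Apple did not publish the full parameter list for the web beta as part of this announcement, but the long-standing maps.apple.com Map Links scheme gives a good idea of what such a link looks like. Here is a minimal sketch in Swift, assuming the familiar q and ll parameters; the helper function name is purely illustrative.

```swift
import Foundation

// A minimal sketch of building a link out to Apple Maps on the web.
// The q and ll parameters come from Apple's long-standing maps.apple.com
// Map Links scheme; the web beta may accept additional parameters.
func appleMapsWebLink(query: String, latitude: Double, longitude: Double) -> URL? {
    var components = URLComponents(string: "https://maps.apple.com/")
    components?.queryItems = [
        URLQueryItem(name: "q", value: query),                        // search term
        URLQueryItem(name: "ll", value: "\(latitude),\(longitude)")   // where to center the map
    ]
    return components?.url
}

// Example: a link that searches for coffee near Apple Park.
if let url = appleMapsWebLink(query: "coffee", latitude: 37.3349, longitude: -122.0090) {
    print(url) // e.g. https://maps.apple.com/?q=coffee&ll=37.3349,-122.009
}
```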
It is not clear why it took Apple 12 years to provide a web-based version of their Maps. Not having it for a few years makes complete sense, but then again it has taken 14 years to get an Apple Calculator app on the iPad, so I guess this is two years ahead of schedule.
Today Apple held its Worldwide Developers Conference, or WWDC, keynote. The WWDC keynote is a way for Apple to highlight the features that will be coming to its platforms over the next year. This year's keynote is a big one, with announcements for all of Apple's platforms. I will cover what I think are the biggest ones.
Vision Pro
The Apple Vision Pro is Apple's latest platform, and while it has only been around for four months, there are some good updates coming. First, for those who use a Mac with their Apple Vision Pro, you will be able to use a virtual display that is like having two 4K monitors side by side. This is great, because you can have even more screen real estate.
For Photos, you will be able to use machine learning to make any photo a Spatial Photo, so you can view it with Apple Vision Pro.
The Apple Vision Pro is also coming to new regions, some later this month and more in July.
Home Screen
Our devices are super personalized, and we often use our Home Screen to reflect that. Now you can do even more customization: you can place icons wherever you would like them. This works well for wallpapers that would normally be covered by icons. You re-arrange the icons just as you always have, but now you can put them anywhere on the grid.
Locking Apps
Some apps contain sensitive data, like photos or journal entries. There may be times when you want to protect this data. You can now lock apps behind Face ID, which means Face ID is required to open the app.
Hiding Apps
There may be apps that you do not want others to see when you hand them your phone. You can now hide apps; these will be put into a "Hidden Apps" folder in the App Library and locked behind Face ID, just like locked apps.
Icon Tint
There are those of us who like to have complete color coordination between our Home Screen and our icons, but icons come in a variety of colors. You could work around this by creating a shortcut with a custom icon that opens the app, but this can be tedious. Now there is an option for customizing the tint color of icons. This tint color applies to all icons, and you can select any color you want.
Control Center
Control Center now allows you to organize, resize, and rearrange controls as you see fit. You can even have groups of controls that make sense for you. Developers will also be able to add their own Control Center controls, as sketched below.
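Apple did not go into implementation details in the keynote, but third-party controls are built with the new ControlWidget API in WidgetKit. Here is a rough sketch of what one might look like; the control kind, the StartTimerIntent name, and the timer logic are all hypothetical placeholders.

```swift
import WidgetKit
import SwiftUI
import AppIntents

// Hypothetical App Intent that runs when the control is tapped.
struct StartTimerIntent: AppIntent {
    static let title: LocalizedStringResource = "Start Timer"

    func perform() async throws -> some IntentResult {
        // Placeholder: start the app's timer here.
        return .result()
    }
}

// A simple Control Center button built on the new ControlWidget API.
struct StartTimerControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(kind: "com.example.app.startTimer") {
            ControlWidgetButton(action: StartTimerIntent()) {
                Label("Start Timer", systemImage: "timer")
            }
        }
    }
}
```

Once an app ships a control like this, users would add it from the Control Center gallery and place it alongside the built-in controls.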
Contacts
A couple of years ago, Apple added a new way to limit which photos an app could see. Now, you can do the same with Contacts. Instead of allowing an app full access, you can choose which contacts an app will have access to. This is a great addition for privacy.
Passwords
There is now a new Passwords app that works across Mac, iPhone, iPad, Apple Vision Pro, and even on Windows. This will allow you to store your passwords, one-time codes, passkeys, Wi-Fi passwords, shared passwords, and Sign in with Apple logins.
Messages
Messages gets some new features, like custom Tapbacks, so you can now add any emoji inline or as a Tapback. Beyond this, you can use the new Genmoji feature to generate your own custom emoji-like images to get the right image for the situation.
Sometimes all you need to do is add some emphasis to text. This can also be done through the new text effects. These will allow you to add bold, italics, underline, or strikethrough to some text. Beyond this, you can add animated effects, including:
Big
Small
Shake
Nod
Explode
Ripple
Bloom
Jitter
Here is a photo of some of them. When you select an effect, Messages will show a preview of what it will look like.
TV
The Apple TV app is getting a new feature called InSight. This is where you will be able to see other shows and movies that an actor or actress has been in, as well as identify a song that is playing. You will then be able to add the song to an Apple Music playlist. If you use your iPhone as a remote while watching TV with others, you will also be able to get InSight information on your iPhone.
Another feature is Enhance Dialogue for built-in TV speakers and other connected speakers, which can be super helpful so you can hear what is being said. Subtitles will also get some tweaks: they will automatically appear if you mute the audio or jump back in time.
Calculator
There is now a calculator on the iPad, and it includes a feature called "Math Notes". Math Notes allows you to write out expressions with the Apple Pencil, and Calculator will compute the answer once you write an equals sign at the end of the expression. If you need to add a column of numbers, this can be done as well by drawing a line under all of the numbers to be added.
You will also be able to add graphs and update values in real time using variables. Math Notes is not limited to just the Calculator app; you can also use it in the Notes app.
macOS
macOS is the oldest and most mature of Apple's operating systems, but it also sees some new features, like the aforementioned Passwords app. There are two iPhone-related features coming: iPhone Mirroring and iPhone notifications.
iPhone Mirroring
There may be instances when you might want to see what is happening on your iPhone while you are using your Mac. This can be particularly true if your iPhone is charging in another room. Now, you will be able to actually use your iPhone from your Mac while it is elsewhere. You are able to interact with it by swiping and clicking, just as if you were holding the iPhone.
When you use your iPhone via mirroring, it will remain locked, so nobody else will be able to see what you are doing.
iPhone Notifications
Much like being able to add widgets from your iPhone to your Mac, you will be able to get iPhone notifications right on your Mac. Along with this, you can interact with them, and they should even be able to open the corresponding app via iPhone Mirroring.
Mail
Mail is also getting some updates, including categorization. This is done on-device, and emails will be put into one of a few categories:
Primary - Most important
Transactions - Receipts
Updates - Newsletters
Promotions - Marketing/Sales
This is a nice update as well. There is another thing coming to Mail: Writing Tools.
Writing Tools will allow you to spell check, proofread, and rewrite an email. It will not be limited to Mail, but can also be found in Keynote, Pages, Notes, and even third-party apps.
Apple Intelligence
One of the big items highlighted is Artificial Intelligence. Artificial Intelligence will allow you to create images, rework text, and even use Siri to perform actions and find your own data. Apple could have just integrated an existing Artificial Intelligence service, but they have decided to go above and beyond with a new feature called Apple Intelligence.
Apple Intelligence is an initiative that takes artificial intelligence and expands upon it to make sure that your information stays private. This is done through a combination of on-device and cloud infrastructure. The vast majority of processing will happen on device, but for tasks that require more resources, there is the cloud portion.
But not just any cloud. Apple has dubbed their solution Private Cloud Compute. Private Cloud Compute is built on Apple Silicon and uses many of the security features built into the platform. One of these features is that there is no data persistence, so your private data is only available to that server for that one request before the data is wiped from the server.
Apple Intelligence has access to your data, so you can perform actions like "Find photos of Suzy in a pink dress," and it knows enough context to be able to find what you are looking for.
Image Generation
One of the more common uses of current artificial intelligence is to generate images, and you can do this on iOS 18, iPadOS 18, and macOS Sequoia as well. You will be able to create images to send to others, based upon templates and a few limited styles. Beyond this, you will be able to write out what you are looking for, and it will come up with matching images.
Being able to use your own data is great, but sometimes you need access to general world knowledge. Apple has a solution for that as well.
ChatGPT
Apple is partnering with OpenAI to use their GPT-4o model, allowing you to ask Siri general knowledge questions. The request will be sent to ChatGPT, and the response will be relayed back to you. If a request needs to include your personal data, you will need to confirm that you want to send the data to ChatGPT before it is sent. This way, you are always able to decide not to send the data.
Again, these features will be coming later in the year.
Closing Thoughts
All of the features outlined above should be coming over the next year. Some will be released this fall, while others will come later. There are a number of great features, like Home Screen customization and new text effects in Messages. The new Passwords app will make it easier to manage all of your passwords and related information in a single location.
Artificial Intelligence is a big topic, with a slew of features planned, including Writing Tools, Mail categorization, and ChatGPT integration through Siri and throughout Apple's operating systems.