Apple Intelligence

TrotWaddles

Member
SoSH Member
Jan 23, 2004
1,625
San Antonio, TX
Hello. One of the things I love about this place is the depth of knowledge on just about everything.

Quick background. Some of you may know me from the airline pilot thread over in BLTS. Being honest upfront: I know airplanes and aviation technology in depth. And there my technical knowledge ends. I will freely admit to being like a pig looking at a football when it comes to some of this emerging AI stuff. Now that Apple is getting into the fray, and since we're an Apple ecosystem household, I'm thinking it may be smart to become more knowledgeable about this stuff.

Questions: Why should the average person be excited about the direction Apple is headed with AI? Why should they be nervous/apprehensive (privacy, not Terminator)? Do you think this will be broadly applicable to our lives? What changes do you see over the next couple of years as a result of widespread adoption of these capabilities on our phones? Do you think that Apple will begin to differentiate phones based on AI computing power instead of the camera?

At any rate, thank you for any inputs you may have.
 

NortheasternPJ

Member
SoSH Member
Nov 16, 2004
20,216
The easy answer is that Apple's doing as much on-device as possible for security, and also has their own more secure cloud GenAI. They also allow users to opt out of ChatGPT if you're paranoid about it.

No one has any idea what's happening in AI 12 months from now, so I can't answer that. I spend a portion of my job working on GenAI and AI Security, and I'm not concerned right now with Apple's model. It's as good as, if not better than, nearly any other public model for privacy.

Your financial institutions and others are all already putting your info into things like Snowflake (breached 2 weeks ago) and Microsoft OpenAI (the most popular one for enterprises right now), and are just starting to build GenAI clusters into their own data centers. I'd say 1% of companies at most are at that level, if that.
 

The_Powa_of_Seiji_Ozawa

Member
SoSH Member
Sep 9, 2006
8,397
SS Botany Bay
It also has the added bonus for Apple of nudging people using "older" hardware to buy new equipment. To use this AI feature, the starting point for phones is the iPhone 15 Pro, and for Macs, it's the M1 series.
 

NortheasternPJ

Member
SoSH Member
Nov 16, 2004
20,216
It also has the added bonus for Apple of nudging people using "older" hardware to buy new equipment. To use this AI feature, the starting point for phones is the iPhone 15 Pro, and for Macs, it's the M1 series.
Of course. This isn't an Apple thing, it's every compute cluster: GPUs and purpose-built processors. I was talking to a customer the other day who had an 8-figure quote for an Nvidia-powered cluster. I bet they haven't spent 8 figures on hardware in their data centers in the last 3 years combined.
 

The_Powa_of_Seiji_Ozawa

Member
SoSH Member
Sep 9, 2006
8,397
SS Botany Bay
Of course. This isn't an Apple thing, it's every compute cluster: GPUs and purpose-built processors. I was talking to a customer the other day who had an 8-figure quote for an Nvidia-powered cluster. I bet they haven't spent 8 figures on hardware in their data centers in the last 3 years combined.
I'm a dinosaur when it comes to the nitty-gritty of computer processors; I haven't been in this game since the 486 era. I am astonished that Nvidia has such a stranglehold on this market. Like seriously, way back when I worked on computers, they were just another graphics card maker like ATI. I know Intel, for example, has been trying to get into the game; how is it that they haven't really done so yet in a major way? If this is where tech is being pulled, there have to be successful new hardware providers on the horizon, no?
 

cgori

Member
SoSH Member
Oct 2, 2004
4,266
SF, CA
I'm a dinosaur when it comes to the nitty-gritty of computer processors; I haven't been in this game since the 486 era. I am astonished that Nvidia has such a stranglehold on this market. Like seriously, way back when I worked on computers, they were just another graphics card maker like ATI. I know Intel, for example, has been trying to get into the game; how is it that they haven't really done so yet in a major way? If this is where tech is being pulled, there have to be successful new hardware providers on the horizon, no?
The short answer in my opinion is that it's about the software stack (it kinda always is with this stuff; the hardware is the baseline, but often the "best" hardware doesn't win, it's the one with the better software tools).

NVidia has been working on CUDA (now CUDA-X) for well over a decade, close to two decades now actually. That is the layer that allows all of this parallel computation to be mapped into C/C++/Python/etc., and it's kind of the standard now; even some of Google's (earlier?) TensorFlow stuff had CUDA extensions, though I'm not sure if it still does or if it works differently with their TPUs.

There are a TON of hardware providers building different kinds of AI chips/chipsets out there, and a bunch of different architectures for doing so (example - I know the guys here but there are tons more). Most "AI" is a series of linear algebra / matrix multiplies (or multiply-and-accumulates). What everyone is finding is that the software is the limiting factor for adoption/deployment. There are also like a bajillion application-level algorithms that have to be mapped (efficiently) onto the underlying architectures - for example, there are a bunch of flavors of ResNet, often used for image classification, which is then a building block going into a transformer architecture (GPT - the T stands for transformer), and there are many such architectures with pluses and minuses.
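For anyone wondering what "a series of matrix multiplies" means in practice, here's a toy sketch in plain NumPy (nothing Apple- or Nvidia-specific, and obviously not how production kernels are written): each "layer" of a neural network is basically one big multiply-and-accumulate, and a model is just a stack of them.

```python
# Toy illustration only (not Apple's or Nvidia's code): the core operation
# behind most "AI" workloads is a matrix multiply plus an accumulate.
import numpy as np

def dense_layer(x, W, b):
    # One fully connected layer: matmul, add bias (the "accumulate"), then ReLU.
    return np.maximum(x @ W + b, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 256))                   # one input vector
layers = [(0.05 * rng.standard_normal((256, 256)),  # weights
           np.zeros(256))                           # bias
          for _ in range(3)]                        # a 3-layer "model"

for W, b in layers:
    x = dense_layer(x, W, b)                        # ~65k multiply-adds per layer

print(x.shape)  # (1, 256)
```

Frameworks like PyTorch and TensorFlow ultimately dispatch those same multiply-adds to vendor libraries (cuBLAS/cuDNN on Nvidia hardware), which is why the software layer, rather than the raw silicon, is where so much of the lock-in lives.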
 

Scott Cooper's Grand Slam

Member
SoSH Member
Jul 12, 2008
4,555
New England
I thought the Washington Post had a cogent take on this.

https://wapo.st/3VD8Czv

Early in the development process, Apple bet the franchise that most people do not want a trillion-parameter neural network, because most people do not know what any of those words mean. They want AI that can shuttle between their calendar and email to make their day a little more coordinated. They want Siri to do multistep tasks, like finding photos of their kid in a pink coat at Christmas and organizing them into a movie with music that flatters their taste. If AI is going to generate any original visuals, they’d prefer emojis based on descriptions of their friends rather than deepfakes. And of course they want all of Apple’s usual privacy guarantees.

Apple calls these kinds of AI-driven tasks “personal context.” Each is a meaningful improvement to the iPhone, which is where more than 1 billion people do the bulk of their computing and where Apple makes the bulk of its profits. They also happen to require relatively small bursts of computing power, which is where AI generates the most expense. By constraining itself, Apple says it’s able to run most of these functions on a 3 billion-parameter AI model that’s completely contained within the device — meaning no communication with an outside server and therefore no privacy risk. This sounds easy and is all kinds of hard from an engineering perspective, unless you make your own silicon and run your own supply chain and train your own AI models on licensed high-quality data. The benefits of being a control freak.
My favorite example of Apple using AI has been on iPhones for years. It's called "Dim Flashing Lights," and it does exactly that. You'll find it under Accessibility > Motion. When enabled, if you're watching video content and flashing lights are detected, it'll automatically dim the screen so as to prevent triggering a seizure in people who have photosensitive epilepsy.

To me, this exemplifies Apple's approach to AI. Solve very targeted problems using AI like a scalpel. Most people won't know it's there, and that's OK.
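(Apple hasn't published exactly how the detection works, but as a rough illustration of the kind of signal involved, here's a toy sketch that just watches for big frame-to-frame brightness swings happening faster than the usual photosensitivity guideline of roughly three flashes per second. The real feature is presumably far more sophisticated than a fixed threshold.)

```python
# Toy sketch, NOT Apple's implementation: flag video segments where mean frame
# brightness swings sharply more than ~3 times per second.
def should_dim(frame_luminances, fps, swing=0.2, max_flashes_per_sec=3):
    # frame_luminances: mean brightness of each frame, normalized to 0..1
    flashes = sum(1 for a, b in zip(frame_luminances, frame_luminances[1:])
                  if abs(b - a) >= swing)
    seconds = len(frame_luminances) / fps
    return flashes / seconds > max_flashes_per_sec

strobe = [0.1, 0.9] * 15           # one second of strobing video at 30 fps
print(should_dim(strobe, fps=30))  # True -> dim the screen
```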

And Stratechery adds:

As for specifics, Apple’s presentation was very organized:
  • The capabilities of Apple Intelligence are language, image generation, actions, understanding personal context, and privacy.
  • The infrastructure of Apple Intelligence is on-device processing using Apple-designed models (which, according to Apple, compare favorably to Microsoft’s Phi models, the current gold-standard for small models), and cloud processing via Apple-owned datacenters running servers outfitted in some way with Apple Silicon. Apple is promising that the latter is designed in such a way that all requests are guaranteed to be private and disposed of immediately.
  • These capabilities and infrastructure are exposed through various experiences, including an overhauled Siri that can take actions on apps (to the extent they support it); writing tools including rewrite, tone changes, and proofreading; summarization of things like emails and notifications; genemoji (i.e. generated emoji in the style of current emoji offerings); a system-level component called Image Playground that developers can incorporate into their apps; and new experiences in Notes and Photos.
The key part here is the “understanding personal context” bit: Apple Intelligence will know more about you than any other AI, because your phone knows more about you than any other device (and knows what you are looking at whenever you invoke Apple Intelligence); this, by extension, explains why the infrastructure and privacy parts are so important.

What this means is that Apple Intelligence is by-and-large focused on specific use cases where that knowledge is useful; that means the problem space that Apple Intelligence is trying to solve is constrained and grounded — both figuratively and literally — in areas where it is much less likely that the AI screws up. In other words, Apple is addressing a space that is very useful, that only they can address, and which also happens to be “safe” in terms of reputation risk. Honestly, it almost seems unfair — or, to put it another way, it speaks to what a massive advantage there is for a trusted platform. Apple gets to solve real problems in meaningful ways with low risk, and that’s exactly what they are doing.

Contrast this to what OpenAI is trying to accomplish with its GPT models, or Google with Gemini, or Anthropic with Claude: those large language models are trying to incorporate all of the available public knowledge to know everything; it’s a dramatically larger and more difficult problem space, which is why they get stuff wrong. There is also a lot of stuff that they don’t know because that information is locked away — like all of the information on an iPhone. That’s not to say these models aren’t useful: they are far more capable and knowledgeable than what Apple is trying to build for anything that does not rely on personal context; they are also all trying to achieve the same things.
 

singaporesoxfan

Well-Known Member
Lifetime Member
SoSH Member
Jul 21, 2004
11,977
Washington, DC
I thought the Washington Post had a cogent take on this.

https://wapo.st/3VD8Czv

They want AI that can shuttle between their calendar and email to make their day a little more coordinated.
This is really it for me. Everyone else in tech seems to keep trying to get AI to do the interesting stuff. What I really want AI to do is take care of a lot of the drudgery so that I can focus on the interesting stuff, and it's been frustrating seeing Google, OpenAI etc. trying to leapfrog to "look at the movies you can make". And then it's not surprising you get pushback, because GenAI isn't ready for that - it often feels like a novelty.

So far my favourite use case for AI has been the function in Shortwave (my email service) that summarizes each email. I would love it if Apple could take that further and read my email, and every morning say, "Hey, you or your son have been invited to x, y, and z, I've blocked off your calendar and added the location/dial-in, let me know if you want to accept."
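Just to make the wish concrete, here's roughly the pipeline being described, sketched in Python. None of this is Shortwave's or Apple's actual API; extract_invite and morning_briefing are made-up names, the regex stub stands in for wherever an on-device model would really parse the invitation out of free-form email text, and the calendar hold is left as a comment.

```python
# Purely illustrative sketch of the "morning assistant" idea - not any real API.
import re
from dataclasses import dataclass

@dataclass
class Invite:
    who: str
    event: str
    when: str
    where: str

def extract_invite(email_text: str) -> Invite | None:
    # Stand-in extractor: a real assistant would use a language model here,
    # not a brittle regex over one sentence shape.
    m = re.search(r"(?P<who>\w+) is invited to (?P<event>.+?) on (?P<when>.+?) at (?P<where>.+?)\.",
                  email_text)
    if not m:
        return None
    return Invite(m["who"], m["event"], m["when"], m["where"])

def morning_briefing(emails: list[str]) -> str:
    lines = []
    for mail in emails:
        invite = extract_invite(mail)
        if invite:
            # A real assistant would also place a tentative hold on the calendar here.
            lines.append(f"{invite.who} is invited to {invite.event} on {invite.when} ({invite.where})")
    return "Hey, " + "; ".join(lines) + " - I've blocked off your calendar, let me know what to accept."

print(morning_briefing([
    "Leo is invited to Sam's birthday party on Saturday June 22 at Pump It Up.",
]))
```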

My sense is that tech leaders often have very good executive assistants and chiefs of staff who arrange much of their lives for them, and they don’t realize how much it would help the average person to have even half of those capabilities.
 

canderson

Mr. Brightside
SoSH Member
Jul 16, 2005
41,250
Harrisburg, Pa.
Basically everything Apple showed at WWDC before the AI portion... was AI. All the camera logic, the new calculator, phone filters, dark modes, etc. - that's all AI, and really what makes the most sense for its use.
 

epraz

Member
SoSH Member
Oct 15, 2002
6,308
The short answer in my opinion is that it's about the software stack (it kinda always is with this stuff; the hardware is the baseline, but often the "best" hardware doesn't win, it's the one with the better software tools).

NVidia has been working on CUDA (now CUDA-X) for well over a decade, close to two decades now actually. That is the layer that allows all of this parallel computation to be mapped into C/C++/Python/etc., and it's kind of the standard now; even some of Google's (earlier?) TensorFlow stuff had CUDA extensions, though I'm not sure if it still does or if it works differently with their TPUs.

There are a TON of hardware providers building different kinds of AI chips/chipsets out there, and a bunch of different architectures for doing so (example - I know the guys here but there are tons more). Most "AI" is a series of linear algebra / matrix multiplies (or multiply-and-accumulates). What everyone is finding is that the software is the limiting factor for adoption/deployment. There are also like a bajillion application-level algorithms that have to be mapped (efficiently) onto the underlying architectures - for example, there are a bunch of flavors of ResNet, often used for image classification, which is then a building block going into a transformer architecture (GPT - the T stands for transformer), and there are many such architectures with pluses and minuses.
Thanks for this, but also I don't quite follow--why don't any other providers have a software stack that competes with NVidia?
 

Scott Cooper's Grand Slam

Member
SoSH Member
Jul 12, 2008
4,555
New England
This is really it for me. Everyone else in tech seems to keep trying to get AI to do the interesting stuff. What I really want AI to do is take care of a lot of the drudgery so that I can focus on the interesting stuff,
That's right. Every enterprise calendar system has a "find next available time" function if you want to schedule a meeting with 2+ people. But they never work, because a) people are busy b) people block their calendars to avoid being discoverable by this function. A great, context-aware calendar would be able to look past that and say "OK, based on your relationship to these people and whatever shared priorities/interests you have, these people are actually available at time XYZ."
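For what it's worth, the "find next available time" part is the easy bit: a few lines of code can merge everyone's busy blocks and scan for the first open gap. A minimal sketch is below, with a made-up helper name (first_open_slot) and invented example data. The hard, personal-context part is what comes after - deciding which of those "busy" blocks are actually movable for this particular meeting and these particular people.

```python
# Minimal sketch of the naive "find next available time" logic: merge all
# attendees' busy blocks and return the start of the first gap long enough.
from datetime import datetime, timedelta

def first_open_slot(busy, day_start, day_end, length=timedelta(minutes=30)):
    # busy: list of (start, end) tuples pooled across all attendees
    cursor = day_start
    for start, end in sorted(busy):
        if start - cursor >= length:      # found a gap before this busy block
            return cursor
        cursor = max(cursor, end)         # otherwise skip past the busy block
    return cursor if day_end - cursor >= length else None

day = datetime(2024, 6, 17)
busy = [
    (day.replace(hour=9),  day.replace(hour=11)),   # attendee A
    (day.replace(hour=10), day.replace(hour=12)),   # attendee B
    (day.replace(hour=13), day.replace(hour=14)),   # attendee A
]
print(first_open_slot(busy, day.replace(hour=9), day.replace(hour=17)))
# -> 2024-06-17 12:00:00
```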
 

singaporesoxfan

Well-Known Member
Lifetime Member
SoSH Member
Jul 21, 2004
11,977
Washington, DC
That's right. Every enterprise calendar system has a "find next available time" function if you want to schedule a meeting with 2+ people. But they never work, because a) people are busy b) people block their calendars to avoid being discoverable by this function. A great, context-aware calendar would be able to look past that and say "OK, based on your relationship to these people and whatever shared priorities/interests you have, these people are actually available at time XYZ."
Even more than work, where I really need something like this is for my home life, and that's why I'm excited that Apple is coming into this space. I find myself dealing with so many emails from my older kid's school and my younger kids' daycare about key dates and functions, invites to bar mitzvahs and birthday parties, travel soccer calendars etc., all of which are asking for my time. I would love to have the equivalent of an executive assistant for my home life. The fact that it's AI rather than a human is a plus for me here, because while you can hire people to act as virtual personal assistants, I would much rather not have someone read through my emails.
 

cgori

Member
SoSH Member
Oct 2, 2004
4,266
SF, CA
Thanks for this, but also I don't quite follow--why don't any other providers have a software stack that competes with NVidia?
They do, but there's a buy-in effect from CUDA and similar having been around for a long time. Basically the APIs (and software layers) are at least partially related to a company's conception of how to solve a given problem. So a competitor either has to build a new API in a different way to map better onto their hardware (and then convince the consumers to adjust to the new API), or rebuild something similar to CUDA but map it onto their hardware. In the latter case you'd be agreeing to Nvidia's "rules of the game" (as far as how to frame higher-level problems) but also spotting them a big lead (15-20 years of development and people getting used to what they did), and probably making it harder to exploit any innovative features of the competitor's hardware architecture.

Owning de facto standard APIs (or mid/low-level software layers) can be a big deal. I'm not sure if I'm explaining it well, but we see it a lot across different markets/situations.
 

epraz

Member
SoSH Member
Oct 15, 2002
6,308
Owning de facto standard APIs (or mid/low-level software layers) can be a big deal. I'm not sure if I'm explaining it well, but we see it a lot across different markets/situations.
Thanks--that does help. Seems like NVidia had good insight/made a good bet a while ago and it's paying off 1,000,000-1
 

skidmark21

New Member
Jul 24, 2022
25
My favorite example of Apple using AI has been on iPhones for years. It's called "Dim Flashing Lights," and it does exactly that. You'll find it under Accessibility > Motion. When enabled, if you're watching video content and flashing lights are detected, it'll automatically dim the screen so as to prevent triggering a seizure in people who have photosensitive epilepsy.

To me, this exemplifies Apple's approach to AI. Solve very targeted problems using AI like a scalpel. Most people won't know it's there, and that's OK.
Maybe I’m a bit daft, but how is that AI?