In one scenario Apple described, an email comes in pushing back a work meeting, but the user's daughter is appearing in a play that night.
His phone can now find the PDF with information about the performance, predict the local traffic, and let him know if he’ll make it on time.
These capabilities will extend beyond apps made by Apple, allowing developers to tap into Apple’s AI too.
Because the company profits more from hardware and services than from ads, Apple has less incentive than some other companies to collect personal online data, allowing it to position the iPhone as the most private device.
Even so, Apple has previously found itself in the crosshairs of privacy advocates.
Security flaws led to leaks of explicit photos from iCloud in 2014.
In 2019, contractors were found to be listening to intimate Siri recordings for quality control.
Disputes about how Apple handles data requests from law enforcement are ongoing.
The first line of defense against privacy breaches, according to Apple, is to avoid cloud computing for AI tasks whenever possible.
“The cornerstone of the personal intelligence system is on-device processing,” Federighi says, meaning that many of the AI models will run on iPhones and Macs rather than in the cloud.
“It’s aware of your personal data without collecting your personal data.” That presents some technical obstacles.
Two years into the AI boom, pinging models for even simple tasks still requires enormous amounts of computing power.
Accomplishing that with the chips used in phones and laptops is difficult, which is why only the smallest of Google’s AI models can be run on the company’s phones, and everything else is done via the cloud.
Apple says its ability to handle AI computations on-device is due to years of research into chip design, leading to the M1 chips it began rolling out in 2020.
Yet even Apple’s most advanced chips can’t handle the full spectrum of tasks the company promises to carry out with AI.
If you ask Siri to do something complicated, it may need to pass that request, along with your data, to models that are available only on Apple’s servers.
This step, security experts say, introduces a host of vulnerabilities that may expose your information to outside bad actors, or at least to Apple itself.
“I always warn people that as soon as your data goes off your device, it becomes much more vulnerable,” says Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project and practitioner in residence at NYU Law School’s Information Law Institute.
Apple claims to have mitigated this risk with its new Private Cloud Compute system.
“For the first time ever, Private Cloud Compute extends the industry-leading security and privacy of Apple devices into the cloud,” Apple security experts wrote in their announcement, stating that personal data “isn’t accessible to anyone other than the user—not even to Apple.”

How does it work? Historically, Apple has encouraged people to opt into end-to-end encryption (the same type of technology used in messaging apps like Signal) to secure sensitive iCloud data.
But that doesn’t work for AI.
Unlike messaging apps, where a company like WhatsApp does not need to see the contents of your messages in order to deliver them to your friends, Apple’s AI models need unencrypted access to the underlying data to generate responses.
This is where Apple’s privacy process kicks in.
First, Apple says, data will be used only for the task at hand.
Second, this process will be verified by independent researchers.
Needless to say, the architecture of this system is complicated, but you can imagine it as an encryption protocol.
If your phone determines it needs the help of a larger AI model, it will package a request containing the prompt it’s using and the specific model, and then put a lock on that request.
Only the specific AI model to be used will have the proper key.
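The locking step described above can be sketched as a sealing scheme: the phone packages the prompt together with the target model's identity and encrypts it under a key that only that model holds, so any other party sees only ciphertext. Everything below is an illustrative assumption, not Apple's actual protocol: the model names, the per-model symmetric keys, and the SHA-256 counter-mode keystream are stand-ins (Apple's real system relies on hardware-attested public-key cryptography, and this toy cipher is not production-grade).

```python
import hashlib
import json
import secrets


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Expand key + nonce into `length` pseudorandom bytes (toy cipher, not secure crypto)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def seal_request(prompt: str, model_id: str, model_key: bytes) -> dict:
    """Package a request and lock it so only the holder of model_key can read it."""
    body = json.dumps({"model": model_id, "prompt": prompt}).encode()
    nonce = secrets.token_bytes(16)
    cipher = bytes(p ^ k for p, k in zip(body, keystream(model_key, nonce, len(body))))
    return {"nonce": nonce, "ciphertext": cipher}


def open_request(package: dict, model_key: bytes) -> dict:
    """Unlock with the model's key; a wrong key yields garbage that fails to parse."""
    ct = package["ciphertext"]
    body = bytes(c ^ k for c, k in zip(ct, keystream(model_key, package["nonce"], len(ct))))
    return json.loads(body)


# The device seals a request addressed to one hypothetical cloud model.
planner_key = secrets.token_bytes(32)  # held only by the "planner" model
other_key = secrets.token_bytes(32)    # some other model's key

pkg = seal_request("Will I make the 7 p.m. play on time?", "planner-large", planner_key)
print(open_request(pkg, planner_key)["prompt"])  # the intended model recovers the prompt

try:
    open_request(pkg, other_key)  # any other key cannot open the request
except (ValueError, UnicodeDecodeError):
    print("wrong key: request stays sealed")
```

The point of the sketch is the property the article describes: the request is addressed to one specific model, and possession of that model's key is what it means to be the "only" party able to open it.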
When asked whether users will be notified when a certain request is sent to cloud-based AI models instead of being handled on-device, an Apple spokesperson said there will be transparency to users but that further details aren't available.
Dawn Song, co-director of the UC Berkeley Center for Responsible, Decentralized Intelligence and an expert in private computing, says Apple’s new developments are encouraging.
“The list of goals that they announced is well thought out,” she says.
“Of course there will be some challenges in meeting those goals.” Cahn says that to judge from what Apple has disclosed so far, the system seems much more privacy-protective than other AI products out there today.
That said, the common refrain in his space is “Trust but verify.” In other words, we won’t know how secure these systems keep our data until independent researchers can verify the company’s claims, as Apple promises they will, and the company responds to their findings.
“Opening yourself up to independent review by researchers is a great step,” he says.
“But that doesn’t determine how you’re going to respond when researchers tell you things you don’t want to hear.” Apple did not respond to questions about how the company will evaluate feedback from researchers.
Apple is not the only company betting that many of us will grant AI models mostly unfettered access to our private data if it means they can automate tedious tasks.
OpenAI’s Sam Altman has described his dream AI tool as one “that knows absolutely everything about my whole life, every email, every conversation I’ve ever had.” At its own developer conference in May, Google announced an ambitious project to build a “universal AI agent that is helpful in everyday life.” It’s a bargain that will force many of us to consider for the first time what role, if any, we want AI models to play in how we interact with our data and devices.
When ChatGPT first came on the scene, that wasn’t a question we needed to ask.
It was simply a text generator that could write us a birthday card or a poem, and the questions it raised—like where its training data came from or what biases it perpetuated—didn’t feel quite as personal.
Now, less than two years later, Big Tech is making billion-dollar bets that we trust the safety of these systems enough to fork over our private information.
It’s not yet clear if we know enough to make that call, or how able we are to opt out even if we’d like to.
“I do worry that we’re going to see this AI arms race pushing ever more of our data into other people’s hands,” Cahn says.
Apple will soon release beta versions of its Apple Intelligence features, starting this fall with the iPhone 15 and the new macOS Sequoia, which can be run on Macs and iPads with M1 chips or newer.
Says Apple CEO Tim Cook, “We think Apple Intelligence is going to be indispensable.”