The iPhone vs Android debate is over — arrival of AI means you must do more to stay secure, expert warns

For years, Apple has touted privacy as one of the primary reasons to choose iPhone over competitors' handsets. With the tagline "Privacy. That's iPhone", the Californian company has highlighted how its on-device encryption keeps your health data, personal information, messages, and photos safe under lock and key. It has also commissioned an entire ad spot highlighting the little-known data broker industry.

Google takes a different approach — offering almost all of its products and services for free. Not only that, but processing often takes place on Google's servers, unlocking powerful applications like Google Docs, Sheets, Gmail, and Google Photos on almost any device with a web browser.

If you'll excuse the oversimplification, one of the biggest differentiators between the two biggest mobile operating systems on the planet — Apple's iOS and Android, developed by Google — has been the amount of control you'll have over your personal data.

But the race towards generative artificial intelligence (AI) could radically alter that dynamic.

[Image: animated GIF showing a Google Search AI Overview, with a user choosing the simpler setting to limit the amount of information shown in search results]

With the announcement of a faster, more flirtatious version of ChatGPT and the arrival of generative AI in Google's core product (the biggest overhaul since the search engine launched in the late 1990s), the spotlight is firmly on artificial intelligence and its wide-ranging impact. And it could forever change our concept of privacy.

Discussing the lay of the land, GB News spoke with Darius Belejevas, head of data protection service Incogni, about the security implications of the approaches taken by Google and Apple with their hugely popular mobile operating systems.

"Apple and Google have taken radically different approaches to on-device versus cloud processing, and each has its own implications for privacy and security," Mr Belejevas tells us. "By focusing on on-device processing, Apple is prioritising user privacy by keeping sensitive data local and minimising data transmission to the cloud.

"This approach ensures that personal information stays within the user's control, reducing the risk of data breaches or unauthorised access during transmission or storage on remote servers. By contrast, Google's emphasis on cloud processing, which leverages the computational resources of cloud infrastructure for AI tasks, should allow for more extensive data analysis and more sophisticated AI models.

"However, this approach also raises concerns about data privacy and security, as transmitting user data to the cloud could bring with it potential vulnerabilities, including interception in transit or unauthorised access to personal data stored on remote servers."

As more of us turn to generative AI products, like ChatGPT and Google Gemini, to make everyday tasks easier, both iPhone and Android users will likely need to take a new approach to privacy.

“When considering iOS versus Android for privacy, it's essential to look beyond individual device features and consider the broader privacy practices of each ecosystem. While Apple's focus on privacy is well-documented, Android offers a range of privacy-focused features and settings, with a more diverse hardware and software ecosystem that may vary in terms of privacy protections.

“Above all, remember that data security on AI-enabled mobile devices will come down to more than just which operating system you pick — users who want to maximise their privacy should explore using additional cybersecurity tools too.”

That's because many of the tasks performed by AI chatbots require access to your data — it's not possible to ask an AI model like GPT-4o to rewrite an email in a more succinct and entertaining way without providing it with every word of the original.

Not only that, but these vast, immensely powerful AI systems require a lot of processing power to crunch through complex tasks: putting together a travel itinerary, meal plan or step-by-step DIY instructions, generating computer code or Excel formulae, and everything else you've requested in your short written prompt.

[Image: Incogni head Darius Belejevas]

Doing all of this on-device isn't going to happen anytime soon. And even if the latest custom-designed processors from Apple, Qualcomm, and Intel do make it a reality, it's likely to be limited to the most expensive, high-spec smartphones and tablets on store shelves. That means the privacy benefits of on-device AI won't be affordable for most.

So, what do you need to look out for?

Security specialist Darius Belejevas tells us: "Above all, remember that data security on AI-enabled mobile devices will come down to more than just which operating system you pick. If you want to maximise your privacy, you really should explore using additional cybersecurity tools, too. Antivirus software, a password manager and a good VPN are a good place to start.

"But remember your own behaviour is as important as the cybersecurity tools you use. When it comes to AI-enabled devices, be wary of sharing personal or sensitive information. Before you use them for the first time, check the terms and conditions to see what type of information they might be collecting.

"AI is a fantastically powerful tool, but guard your personal data carefully to minimise the risk of it being exploited. The aim should be to strike the right balance between the risks and advantages that AI offers. AI should be your tool, your data shouldn't be its tool."

Incogni, developed by the team under Darius Belejevas, is designed to help erase any personal information that you've already shared — intentionally or otherwise — online. Data brokers are in the business of harvesting information on you and hundreds of millions of others.

Some leading brands boast that they hold up to 1,500 data points on everyone in their database. This information could be taken from a myriad of places, including social media posts, GPS data from mobile apps, answers from online quizzes, the date of birth entered into an online form, or sites you've recently visited.

While these breadcrumbs are all but worthless when viewed individually, that changes when they're bundled together with additional material purchased from other companies, like credit card providers. This is what data brokers do — promising great insight for marketers and other customers.

The anonymised data can include details like your ethnicity, religion, marital status, hobbies, shows you're watching, online purchases you’ve made, address and phone number, search history, and political affiliation.

[Image: Tesla CEO Elon Musk on stage alongside OpenAI CEO Sam Altman]

Fortunately, under privacy laws, data brokers are required to wipe your information from their database when you ask them. If you want to start removing your information from these vast repositories, you’ll need to look up the data brokers operating in your country and send individual opt-out requests to each one.

The team at Incogni sends requests under the UK and EU GDPR (General Data Protection Regulation), the CCPA (California Consumer Privacy Act), and other applicable privacy laws to force data brokers to remove your information from their databases.

“Most data removal requests are processed within 2 months, and we follow up on those that aren’t to make sure your personal information is removed,” the company adds. You’ll be kept in the loop about what the Incogni team has found and how many records have been expunged via regular emails.

There’s also an online dashboard that lets you track the number of requests sent, the number of records wiped from the internet, and other key data points. With over 4,000 data brokers in operation right now, Incogni says it’d take around 304 hours to do this work manually.

What’s the catch? Well, the Incogni team aren’t doing this out of the kindness of their hearts. It’s a subscription service that costs £5.25 per month.