A Vampire Must Be Invited In
#TIPS4FAMILIES | I love the current slate of AI tools, but I don't appreciate having my email, documents, or phone invaded by data-sucking, privacy-compromising, Big Tech vampires that were not invited in.
I use AI tools like Claude.ai and ChatGPT daily — whether it's for refining an important email, analyzing contracts, summarizing a research paper, or creating charts from spreadsheets.
But I choose how, where, and when to use these tools. No matter how sophisticated they become, I reject the automatic integration of AI features into the platforms and tools I use regularly.
We all deserve the right to opt in, not the burden of navigating a complex, time-consuming opt-out process.
Additionally, the danger in allowing AI to silently infiltrate our email, phones, and operating systems is that we begin to surrender our personal agency as well as our data security. We make ourselves vulnerable where we could instead stand on equal footing.
How did we reach this point where Big Tech infantilizes us, presumes to know what's best, and makes opting out so cumbersome that we simply acquiesce?
Vampires Need an Invitation In
In vampire lore, the undead can only enter a home when invited. Of course, the vampire makes a compelling—albeit deceptive—case to coax its way in.
But rules are rules, and enthusiasts will tell you that this aspect of the vampire myth is about boundaries: a reminder that as humans we retain free will.
Similarly, we should control which data-hungry entities we let into our digital lives, with clear understanding and explicit consent.
A Monster of Our Own Creation
For years, platforms like Netflix, Instagram, and Amazon have been “learning” from our behavior, using machine learning to serve content based on our digital footprints.
But now AI tools are shifting from suggestions to actions. Imagine instead of a targeted ad, an integrated bot announces: “Based on your history, I'll purchase this handbag for you” (with your stored address and credit card details ready).
You might welcome your own Frankenstein digital assistant, but we must be intentional about its creation, not sleepwalk into this version of our future.
What to Do Next
Finding and disabling AI within applications can be challenging; sometimes it's even impossible. But by exploring these settings, we build valuable knowledge for navigating future AI integrations. Think of it as building the digital muscle we'll need to face tomorrow's challenges.
You might find these features delightful — but make sure you choose to use them.
Here’s a Start
Gmail
Google has aggressively integrated AI into its services, from AI-generated search summaries to email features that have angered many who would rather finish sentences themselves!
To disable AI in Gmail:
1. Click on Settings (upper-right corner)
2. Turn off all “Smart Features”
3. Also, click on the Workspace button to disable additional AI features
For paid Google Workspace users, more detailed directions are available here:
Opting Out of Gmail's Gemini AI Summaries Is a Mess. Here's How to Do It, We Think, 404 Media, Jan 17, 2025
Google Quietly Installed A.I. to My Workspace. Getting Rid of It Was Creepy, Slate, Jan 29, 2025
Apple OS
Apple's latest iOS update introduced “Apple Intelligence and Siri,” which is fairly easy to disable.
But there are a few key settings to check beyond that main toggle:
1. App-Level Siri Settings
Go to Settings > Apps > Siri and disable “Learn from these apps.” By default, Siri collects data about your app usage patterns.
2. Hidden Photo Analysis
Settings > Apps > Photos > Enhanced Visual Search (at bottom)
This feature matches your photos with others to improve Apple's location identification.
This second feature also raises a broader concern: how are we “compensated” when our data improves a company's AI? In this case, Apple is offering “free” image hosting, but of course we already pay for the device and for additional storage.
So as an Apple user, should your photos be used to improve Apple's visual search? When a company builds value at our expense, is product improvement enough of a fair exchange?
Microsoft Office
Microsoft also automatically opted users into features that allow their content to be used for AI training. Here is how to disable it:
User Warns of Microsoft Setting You Need To Turn Off if You Don't Want To Be Used for 'AI Training', Newsweek, Jan 31, 2025
LinkedIn
LinkedIn faced backlash after automatically enabling AI training on user content. Despite the predictable response from its professional user base, a legal case against the company was dismissed, highlighting the complex intersection of AI, data rights, and user consent.
It's worth reading the details of the case, as there are likely to be many more similar legal challenges in the years ahead.
LinkedIn is using your data to train generative AI models. Here's how to opt out, Sept 19, 2024
Lawsuit Accusing LinkedIn of Training AI Models With InMail Private Messages Dismissed, CPO Magazine, Feb 3, 2025
What’s the Takeaway?
Know what you're being opted into. Disable features until you've weighed their risks and benefits, and demand an end to this game of AI Whack-a-Mole.
We've created this digital monster—now let's be mindful about inviting it in.
P.S.
Hot off the press, here is Gizmodo coverage of what is now a one-year push by me, the Parent Coalition for Student Privacy, and the NYCLU for NYC to fix, or end, its privacy-compromising partnership with Talkspace. To further illustrate how complex and twisted this battle has become: Talkspace maintains that it removed trackers from the New York teen sign-up pages. But if a child clicks on, say, the privacy policy, the trackers come back. And if the teen starts (as one is likely to) on Talkspace.com, the trackers follow them on their journey from there. I’ll keep you in the loop on next steps.