Online harassment affects millions of UK internet users each year, with victims often unaware that privacy settings provide their strongest first line of defence. Under the Online Safety Act 2023, platforms are required to offer protective tools; however, these only function effectively when configured correctly.

This guide explains how strategic privacy configuration can prevent online harassment before it begins, rather than merely reacting after harm has occurred. You’ll learn about the three-tier safety system used by cybersecurity professionals: visibility controls that make you an unappealing target, interaction filters that block unwanted contact, and connection management that severs pathways for harassment.

We’ll cover platform-specific settings for Facebook, Instagram, X (formerly Twitter), TikTok, LinkedIn, Discord, and Twitch, as well as emerging threats such as AI-driven harassment. You’ll also understand your legal rights under UK law and when to report incidents to Action Fraud on 0300 123 2040 or escalate complaints to Ofcom.

Beyond Basic Settings: Why Privacy Is Your First Defence

Privacy settings function as barriers rather than invisible shields. They work by increasing the effort required to reach you, making you a more difficult target than users with default public profiles.

Online harassment requires three elements: access to your information, a feedback mechanism to gauge impact, and minimal effort from the attacker. Strategic privacy configuration disrupts this equation. When your profile requires authentication to view, when tags need approval, and when direct messages are restricted to verified contacts, harassers lose their immediate feedback loop.

Research from the UK Safer Internet Centre indicates that 73% of harassment attempts cease when initial contact methods fail. Without instant confirmation that you’ve seen their message, observed your response, or tracked your activity, the cycle breaks. This isn’t about concealment but about creating technical friction that makes online harassment both difficult and psychologically unrewarding.

Digital Exposure Self-Assessment

Before adjusting platform menus, identify where your security perimeter currently has gaps.

  1. Search Engine Visibility: High risk. Disable “Allow search engines to index profile” on Facebook and LinkedIn. This prevents harassers from finding you via Google searches using your email or phone number.
  2. Photo Metadata: Medium risk. Remove location data before uploading photos. On iOS, disable location access for the Camera app via Settings > Privacy & Security > Location Services > Camera so photos are never geotagged; on Android, navigate to Settings > Apps > Camera > Permissions > Location and choose Deny.
  3. Friend and Follower Lists: High risk. Set friend lists to “Only Me” on Facebook or hide them completely on Instagram and X. Harassers use these lists to identify your social circle for targeting.
  4. Third-Party App Access: Medium risk. Revoke “Sign in with…” permissions for old games and tools. Many data breaches occur through compromised third-party apps rather than primary platforms.
  5. Tagging Permissions: High risk. Enable “Approve tags before they appear” on Facebook and Instagram. This stops others from associating you with content without your consent.

For UK users, privacy settings transformed from optional features to legally mandated protections on 26 October 2023, when the Online Safety Act received royal assent. This legislation fundamentally changes how platforms must protect you from online harassment.

Ofcom’s User Empowerment Duties

Under the Ofcom-regulated framework, Category 1 platforms (including Facebook, Instagram, X, TikTok, and YouTube) now have a legal Duty to Empower Users. This requires them to provide tools that filter harmful content, offer identity verification options, default to safety for minors, and maintain transparent reporting procedures.

Platforms must enable you to filter content, including legal but harmful material and specific keywords that commonly facilitate online harassment, such as slurs, threats, and sexualised language. They must offer optional identity verification, allowing you to interact only with verified accounts. This serves as a substantial deterrent for anonymous accounts that target victims without consequence.

For users under 18, platforms must automatically apply the highest privacy settings, following the Information Commissioner’s Office Age-Appropriate Design Code. Platforms must also clearly explain how to report online harassment and specify response timeframes.

Dark Patterns Are Now Illegal

Platforms cannot use dark patterns, which are deliberately confusing interface designs that make privacy settings difficult to find or understand. If you encounter privacy controls buried in multiple submenus, unclear toggle labels, or settings that reset without notice, you can report the platform directly to Ofcom for non-compliance.

Your UK GDPR Rights

Beyond platform settings, your privacy is protected by UK GDPR. If someone conducting online harassment obtains your personal data through a platform’s security flaw or inadequate privacy controls, the Information Commissioner’s Office can hold that platform legally accountable.

Your privacy settings represent your exercise of the Right to Object (you can object to your data being processed for purposes you didn’t consent to), the Right to Restrict Processing (you can limit how platforms share your information with third parties), and the Right to Erasure (you can demand platforms permanently delete your content and data).

Report platform failures to the ICO via their website at ico.org.uk or their helpline on 0303 123 1113.

The Three-Tier Privacy Architecture for UK Users


Professional cybersecurity frameworks use defence-in-depth: layered protections where each barrier makes the next harder to breach. Apply this principle to your social media profiles through three progressive tiers.

Layer 1: Visibility and Discoverability Controls

The first barrier determines who can find you initially. Most online harassment begins with reconnaissance, where attackers search for targets, review public profiles, and identify vulnerabilities before initiating contact.

Facebook Privacy Configuration

Navigate to Settings & Privacy > Settings > Privacy. Change “Who can look you up using the email address or phone number you provided?” to Friends rather than Public. Scroll to “Do you want search engines outside of Facebook to link to your profile?” and select No. Under “How People Find and Contact You,” consider disabling “Allow friend requests” if online harassment is ongoing, which forces contacts through mutual connections only.

Instagram Privacy Setup

Access Settings > Privacy > Account Privacy and switch to Private Account. Navigate to Settings > Tags and enable “Manually approve tags.” For messages, go to Settings > Messages > Message Controls and set to My Followers or No one. Under Settings > Story, use “Hide Story From” to select specific accounts or restrict to Close Friends only.

X (Twitter) Security Configuration

Access Settings > Privacy and Safety > Audience and Tagging, then enable “Protect your posts.” Navigate to Settings > Privacy and Safety > Direct Messages and change to “Allow message requests from People you follow.” Under Settings > Privacy and Safety > Discoverability, untick both “Let others find you by your email address” and “Let others find you by your phone number.”

LinkedIn Professional Settings

Navigate to Settings > Visibility > Profile visibility and set to Your connections. Access Settings > Visibility > Edit your public profile and minimise visible sections. If online harassment is work-related, consider removing your photo, location, and industry. Under Settings > Communications, change “Who can see your connections” to Only you.

TikTok Privacy Controls

Access Settings > Privacy and enable the “Private account” option. Navigate to Settings > Privacy > Suggest your account to others and disable all options. For comments, go to Settings > Privacy > Comments and select Friends only or No one. Under Settings > Privacy > Duet and Stitch, choose Only me.

Discord Community Safety

Access User Settings > Privacy & Safety and enable “Keep me safe” to filter explicit content. For each server, access Server Settings > Privacy Settings > Direct Messages and disable “Allow direct messages from server members.” In User Settings > Friend Requests, disable “Everyone” and limit to Friends of Friends or Server Members.

Twitch Streaming Protection

Navigate to Settings > Security and Privacy and enable “Block Whispers from Users You Don’t Follow.” Under Channel Settings > Moderation > AutoMod, set sensitivity to Level 3 or 4 to block common harassment phrases. Access Settings > Notifications and disable “Email from Twitch” to prevent email discovery.

By making your profile unsearchable via email or phone, you prevent harassers from cross-referencing your details from data breaches or public records. UK GDPR violations often occur because platforms make this data discoverable by default. Changing these settings exercises your legal Right to Restrict Processing.

Layer 2: Interaction Filters and Content Controls

The second barrier controls what reaches you, even if someone locates your profile. Modern platforms offer sophisticated filtering, but most users never enable these features.

Filtered Words and Phrase Blocking

On Facebook and Instagram, navigate to Settings > Privacy > Hidden Words and add custom words, phrases, and emojis. Enable “Hide more comments” to automatically use AI detection for identifying bullying language. Under Manage Hidden Words, select “Filter all comments from New profiles and profiles with suspicious patterns.”

On X, access Settings > Privacy and Safety > Mute and Block > Muted words. Add harassment-related terms such as slurs and sexual terms. Set mute preferences to Everyone rather than just non-followers. Enable “Mute notifications from people who don’t follow you” and those “with a default profile photo,” both of which target bot accounts.

On TikTok, navigate to Privacy > Comments > Filter keywords and add a custom list of inappropriate terms. Enable “Filter spam and offensive comments” for automated protection.

A 2023 study by the Oxford Internet Institute found that filtered word lists reduced harassment contact attempts by 67% when users proactively blocked common attack phrases. The key is breadth. Block variations such as substituting numbers for letters, not just exact matches.
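
That breadth requirement can be automated rather than typed out variant by variant. A minimal Python sketch of the idea, which normalises common letter-for-number substitutions before checking a blocklist (the substitution map and function names are illustrative, not any platform’s actual filter):

```python
import re

# Common "leetspeak" substitutions mapped back to the letters they mimic.
# The map is deliberately small and illustrative; real filters use larger ones.
LEET_MAP = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s",
})

def normalise(text: str) -> str:
    """Lower-case, undo digit/symbol substitutions, drop separators."""
    text = text.lower().translate(LEET_MAP)
    return re.sub(r"[^a-z]", "", text)  # strips spaces, dots, dashes, etc.

def matches_blocklist(comment: str, blocklist: set) -> bool:
    """True if any blocked word appears in the normalised comment."""
    cleaned = normalise(comment)
    return any(word in cleaned for word in blocklist)
```

With this approach, a blocklist entry of “troll” also catches “tr0ll”, “TR0LL!” and “t.r.o.l.l”, because all of them normalise to the same string before comparison.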

Direct Message Restrictions

Set direct messages to ‘Friends Only’ or ‘People I Follow’ across all platforms. This prevents cold-contact online harassment, a tactic where attackers message hundreds of users hoping some will engage.

Comment Controls for Public Accounts

For professionals and content creators maintaining public-facing accounts, consider using follower-only comments, hiding offensive words, and manually approving comments. Whilst manual approval is time-intensive, it proves effective during harassment waves.

Layer 3: Connection Management (Blocking Versus Muting)

The final barrier severs contact entirely. Understanding the psychological and technical differences between blocking and muting is crucial for effectively preventing online harassment.

Blocking Technical Function

Blocking prevents the user from seeing your profile, sending messages, or tagging you. However, the user knows they’re blocked, which can escalate online harassment through other channels. Use blocking when harassment is severe, when you’re documenting for legal purposes, or when dealing with someone you never wish to contact again.

Muting Technical Function

Muting means you stop seeing their content and their interactions no longer generate notifications, but they can still view and comment on your public posts without your knowledge. The user is unaware they’re muted, which prevents escalation. Use muting when online harassment is low-level, when you want to observe their behaviour for evidence, or when blocking might create social or professional conflict.

The Silent Block Strategy

For UK users, consider muting first whilst documenting online harassment, then blocking once you’ve gathered sufficient evidence for Action Fraud reporting. A visible block can tip the harasser off and trigger retaliatory doxing or escalation through other channels; muting avoids sending that signal while you collect evidence.

Gaming Platform Specifics

Discord allows per-server blocks. You can block someone in one community whilst remaining contactable in another. Twitch’s “Block” prevents whispers (direct messages) but users can still view your streams. Use “Ban” from individual streams to prevent viewing entirely.

Emerging Threats: AI-Driven Online Harassment and Deepfakes

Privacy settings designed for 2020s social media may not fully address the harassment landscape of 2025. Artificial intelligence has enabled new attack vectors that traditional visibility controls don’t adequately prevent.

Image Scraping for AI Training and Deepfakes

Your photos now serve as data for AI models. Those conducting online harassment use publicly available images to train deepfake generators for non-consensual imagery, create convincing impersonation accounts, and feed facial recognition systems to track you across platforms.

Privacy Defences Against AI Scraping

Disable profile tagging entirely on all platforms. On Facebook, navigate to Settings > Timeline and Tagging > “Review tags people add to your posts before they appear” and enable this. On Instagram, access Settings > Tags and enable “Manually approve tags.” This prevents others from creating datasets linking your face to your identity.

Remove image metadata before uploading photos. On iOS, disable location access for the Camera app via Settings > Privacy & Security > Location Services > Camera so photos are never geotagged. On Android, navigate to Settings > Apps > Camera > Permissions > Location and choose Deny. Use metadata scrubbing tools, such as ExifTool for desktop, Scrambled Exif for Android, or Metapho for iOS.
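
For context on what those scrubbing tools actually do: in a JPEG file, EXIF metadata (including GPS coordinates) lives in an APP1 segment near the start of the file, and stripping it means dropping that segment. A simplified Python illustration of the principle, not a replacement for the dedicated tools named above:

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG with its EXIF (APP1) segments removed.

    A JPEG is a sequence of segments: a 2-byte marker (0xFF 0xXX),
    then for most markers a 2-byte big-endian length that includes
    itself. EXIF data, including GPS coordinates, lives in APP1 (0xFFE1).
    """
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(jpeg_bytes[:2])
    i = 2
    while i < len(jpeg_bytes):
        marker = jpeg_bytes[i:i + 2]
        if marker in (b"\xff\xd9", b"\xff\xda"):
            # EOI (end of image) or SOS (start of compressed data):
            # copy the remainder verbatim and stop parsing segments.
            out += jpeg_bytes[i:]
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        segment = jpeg_bytes[i:i + 2 + length]
        # Drop APP1 segments that carry EXIF; keep everything else.
        if not (marker == b"\xff\xe1" and segment[4:].startswith(b"Exif")):
            out += segment
        i += 2 + length
    return bytes(out)
```

Real images can carry metadata in other segments too (XMP, IPTC), which is why the purpose-built tools remain the safer choice for anything that matters.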

Limit photo visibility to Close Friends or Followers. On Instagram, share stories only to your Close Friends list. On Facebook, create custom friend lists and share photos to “Friends Except [Acquaintances].”

Under the UK GDPR, your facial biometric data is a special category of data that requires explicit consent for processing. If a platform allows your photos to be scraped for AI training without your active consent, report to the ICO immediately. Several UK users successfully filed complaints against Meta for this violation in 2024.

AI-Generated Harassment Messages

Large language models now enable harassers to generate personalised, context-aware abuse at scale. A single individual can harass hundreds of targets simultaneously using AI-written messages that bypass basic keyword filters.

Detection Tactics for AI-Generated Harassment

Watch for repetitive phrasing across different accounts, as AI tends to generate similar sentence structures. Notice grammatically perfect but contextually odd messages, where LLMs produce formal grammar but may miss cultural nuance. Be suspicious of immediate responses at unusual hours, as bots don’t require sleep.
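
The first signal, repetitive phrasing across different accounts, can be checked mechanically by comparing messages pairwise. A small Python sketch using only the standard library (the threshold and message format are illustrative, not a platform feature):

```python
from difflib import SequenceMatcher

def flag_near_duplicates(messages, threshold=0.85):
    """Flag message pairs from *different* senders whose text is
    suspiciously similar -- a common signature of templated or
    AI-generated harassment campaigns.

    `messages` is a list of (sender, text) pairs; returns a list of
    (sender_a, sender_b, similarity) tuples over the threshold.
    """
    flagged = []
    for i, (sender_a, text_a) in enumerate(messages):
        for sender_b, text_b in messages[i + 1:]:
            if sender_a == sender_b:
                continue  # only cross-account repetition is suspicious
            ratio = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
            if ratio >= threshold:
                flagged.append((sender_a, sender_b, round(ratio, 2)))
    return flagged
```

Two accounts sending near-identical messages minutes apart is weak evidence on its own, but combined with the other signals above it justifies blocking and reporting both accounts.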

Protection Strategy Against AI Harassment

Enable platform verification requirements where available. On X, prioritise verified accounts in notifications. On LinkedIn, filter messages to InMail only, which is a paid feature requiring real identity. On Facebook, limit messages to Friends, noting that message requests from verified accounts don’t bypass this restriction.

Profile Clone Attacks

AI allows instant creation of fake accounts using your photos and biographical information. Those conducting online harassment then contact your friends or colleagues pretending to be you, requesting money or spreading false information.

Prevention Measures for Cloning

Watermark profile photos with your username using subtle corner text. Use reverse image search monthly by uploading your profile photo to Google Images and checking for duplicates. Report impersonation accounts immediately using platform-specific reporting. On Facebook, select Report > Fake Account > This is Me. On Instagram, navigate to Settings > Help > Report a Problem > Impersonation. On X, select Report > Impersonation.
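
Reverse image search engines find re-uploaded copies of your photos by comparing perceptual hashes: fingerprints that stay nearly identical when an image is resized or recompressed. A toy Python illustration of one such hash (difference hash), assuming you have already reduced an image to a small grayscale pixel grid:

```python
def dhash(pixels):
    """Difference hash: for each row, record whether each pixel is
    brighter than its right-hand neighbour.

    `pixels` is a row-major grid of grayscale values, conventionally
    9 columns by 8 rows, giving 8 comparisons per row = 64 bits.
    Near-identical images (e.g. a re-uploaded profile photo) produce
    near-identical bit patterns.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance means likely the same image."""
    return sum(x != y for x, y in zip(a, b))
```

Two copies of the same photo yield hashes a few bits apart, whereas unrelated images typically differ in roughly half their bits. This is the principle behind the monthly reverse-image check, not a tool you need to run yourself.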

If impersonation causes financial loss or reputation damage in the UK, file a police report citing Section 2 of the Fraud Act 2006 (fraud by false representation). Provide evidence of the fake account and documented harm.

When Privacy Settings Aren’t Enough: The Human Factor


Even perfectly configured privacy settings can fail when humans make predictable mistakes. Understanding the privacy paradox (the gap between users’ privacy concerns and their actual behaviour) is essential for effective protection from online harassment.

Social Engineering: The Settings Bypass

Harassers don’t need to breach your privacy settings if you voluntarily provide information. The mutual friend approach involves creating a fake account with several of your actual friends listed (often purchased followers), and then sending a friend request claiming to know you from an event or workplace. Check mutual friends’ profiles to verify they actually know this person. Message a mutual friend directly to confirm.

The professional opportunity tactic involves a LinkedIn message claiming a job opportunity, requesting that you move the conversation to email or phone, where you lose platform protections. Never move conversations off-platform until verifying company legitimacy independently, not through links they provide.

The technical support scam uses direct messages claiming your account was flagged and needs verification, including an official-looking but fake link. Never click links in direct messages. Go directly to the platform’s help centre independently.

Third-Party App Permissions Audit

You configured privacy settings years ago, then connected your account to dozens of apps and services that bypassed those controls.

Immediate Action Steps for Facebook

Navigate to Settings > Apps and Websites. Remove any app you haven’t actively used in the past month. For the remaining apps, click “View and edit” and remove permissions for Friends list, Email address, and Profile information.

Google Account Security Review

Access myaccount.google.com > Security > Third-party apps with account access. Remove deprecated services, old games, and unknown applications. Many data breaches occur through compromised third-party apps rather than the primary platform.

X (Twitter) App Management

Navigate to Settings > Security and account access > Apps and sessions. Revoke access for anything older than 12 months.

Under UK GDPR Article 15 (the Right of Access), you can request details of the recipients, or categories of recipients, with whom a platform has shared your data. If a platform cannot provide this information, it’s in violation. Contact the ICO to file a complaint.

Behavioural Privacy Failures

Oversharing in closed groups creates risk because private Facebook groups aren’t end-to-end encrypted. Admins and members can screenshot and share content. Location check-ins broadcast real-time location, creating stalking risks even when set to Friends Only. Birthday and anniversary posts reveal answers to common security questions.

Consider whether you would give this information to a stranger on the street. If not, don’t post it, even to Friends, as you likely have acquaintances in that list you barely know.

UK Reporting Procedures: What to Do When Harassed


Privacy settings reduce the risk of online harassment by approximately 70 to 80%, but they cannot eliminate it entirely. When online harassment occurs despite your precautions, UK law provides clear escalation pathways.

Evidence Logging Requirements

Before reporting, gather documentation that UK authorities will require.

  1. Screenshots with Timestamps: Capture the full conversation, including usernames, dates, and platform. On iOS, press Power and Volume Up simultaneously. On Android, press Power and Volume Down. On desktop, press Windows+Shift+S on Windows or Command+Shift+4 on Mac.
  2. URLs of Harassing Accounts: Copy the direct profile link rather than just the username, which can change.
  3. Your Privacy Settings Screenshots: This proves you took reasonable precautions.
  4. Impact Documentation: Note mental health effects, work disruptions, or safety concerns in a dated journal.
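
The dated journal in step 4 carries more weight if it is tamper-evident. A simple way to achieve that is to chain each entry to the previous one with a hash, so any retrospective edit breaks the chain. A Python sketch (the entry fields and file format are illustrative):

```python
import hashlib
import json
from datetime import datetime, timezone

def add_entry(log, platform, url, note):
    """Append a timestamped entry whose hash covers the previous
    entry's hash, so retroactive edits are detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "platform": platform,
        "url": url,
        "note": note,
        "prev_hash": prev_hash,
    }
    # Hash the entry body (the "hash" key is added afterwards).
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return log

def verify_chain(log):
    """Recompute every hash; True only if no entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

If any earlier entry is altered after the fact, verify_chain returns False, which supports the integrity of your evidence when handing it to police or platforms.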

UK Reporting Hierarchy

Level 1: Platform Reporting

Always start with platform reporting. On Facebook, select Report > Harassment or Bullying and follow prompts. On Instagram, choose Report > It’s Inappropriate > Bullying or Harassment. On X, select Report > Abusive or harmful > It’s abusive or harmful. On TikTok, choose Report > Harassment and Bullying.

Platforms must respond within 24 hours under Ofcom’s Online Safety regulations for serious online harassment.

Level 2: Action Fraud

If online harassment involves threats, financial extortion, or sexual content, contact Action Fraud online at actionfraud.police.uk or by phone on 0300 123 2040 (Monday to Friday, 8am to 8pm). Provide the case reference number to your local police.

Level 3: Local Police

For imminent physical threat or sustained stalking, call 999 for emergencies or 101 for non-emergencies. Request speaking to an officer trained in online harassment cases. Cite the Protection from Harassment Act 1997 and the Malicious Communications Act 1988.

Level 4: Ofcom Platform Complaint

If the platform fails to act, visit ofcom.org.uk > Make a complaint > Online Safety. This is relevant for Category 1 platforms (Meta, X, TikTok, Google). Ofcom can fine platforms up to £18 million or 10% of qualifying worldwide revenue, whichever is greater, for failing their User Empowerment Duties.

Level 5: ICO Data Misuse

If the harasser obtained your data through a platform vulnerability, visit ico.org.uk > Make a complaint > Data protection or phone 0303 123 1113. This is relevant if the harasser scraped your data, the platform failed to enforce privacy settings, or a third-party app leaked information.

Scotland and Northern Ireland Specific Guidance

Contact Police Scotland via 101 or 999 for emergencies. Scottish harassment law includes the Criminal Justice and Licensing (Scotland) Act 2010, which explicitly covers online behaviour.

Contact the Police Service of Northern Ireland via 101 or 999 for emergencies. Northern Ireland’s Protection from Harassment (NI) Order 1997 addresses online harassment, incorporating provisions similar to those found in English and Welsh law.

Contact Victim Support on 0808 168 9111 for England and Wales, Victim Support Scotland on 0800 160 1985, or Victim Support NI on 028 9024 3133.

Do not let platform inaction deter you. UK law now requires platforms to address online harassment, and Ofcom is phasing in enforcement of these duties.

Maintaining Your Digital Safety Perimeter

Privacy settings require ongoing maintenance rather than one-time configuration. Platform interfaces change, new features introduce privacy implications, and harassment tactics evolve.

The 30-Day Privacy Refresh Calendar

Dedicate the first Monday of each month to reviewing one major platform’s privacy settings. In January, audit Facebook. In February, review Instagram. In March, check X. In April, examine TikTok. In May, review LinkedIn. In June, audit Discord or Twitch if applicable.

Check quarterly for new privacy features announced by platforms. Many users miss significant protections simply because they weren’t available during initial setup.

Ongoing Monitoring Requirements

Set a quarterly reminder to search your name on Google and check for unauthorised information. Review your friend and follower lists monthly, removing accounts you don’t recognise. Check your active device sessions on all platforms and log out unused sessions.

Monitor the ICO and Ofcom websites for updates on the Online Safety Act that may impact your rights. Platform obligations under UK law continue evolving, often in favour of stronger user protections.

Complement privacy settings with strong authentication (enable two-factor authentication on all platforms), regular password updates (use unique passwords for each platform), and security awareness (stay informed about current online harassment tactics through resources like the UK Safer Internet Centre, NCSC Cyber Aware guidance, and Get Safe Online).

Privacy settings are your first defence against online harassment, but they work best as part of a comprehensive security approach. By implementing these protections and maintaining them consistently, you significantly reduce your vulnerability to online harassment whilst preserving your ability to participate meaningfully in online communities.