Showing results for tags 'tracking'.

Found 25 results

  1. I take no credit for this; it was compiled from the MDL forum and from posts by members of this forum.

     Manual: Tools:

     • Microsoft Telemetry Tools Bundle v2.33
     • Windows 10 Lite v9
     • Private WinTen v0.84
     • Blackbird v6 v1.0.80.2 [works with Win 7/8/8.1/10]
     • O&O ShutUp10 v1.8.1423
     • WPD - Windows Privacy Dashboard v1.4.1834
     • WindowsSpyBlocker v4.36.0
     • Spybot Anti-Beacon v3.5 [works with Win 7/8/8.1/10]
     • W10Privacy v3.7.0.8
     • Privatezilla v0.50.0
     • SharpApp v0.46.1
     • Debotnet v0.7.8
     • Disable Windows 10 Tracking v3.2.3
     • Destroy Windows Spying v1.0.1.0 [works with Win 7/8/8.1/10] [not recommended: no longer maintained]
  2. Firefox now blocks cross-site tracking by default in private browsing

Mozilla says that Firefox users will be protected against cross-site tracking automatically while browsing the Internet in Private Browsing mode. This is because, starting with the Firefox 89 version released today, Total Cookie Protection will be enabled by default in Private Browsing windows.

Total Cookie Protection is designed to force all websites to keep their cookies in separate "jars," thus preventing them from tracking you across the web and building browsing profiles. First introduced in Firefox 86 in February 2021, this privacy feature was until now only active when users manually toggled on ETP Strict Mode in the web browser's settings.

"Firefox's Total Cookie Protection is a sophisticated set of privacy improvements that enforce a simple, revolutionary principle: your browser should not allow the sharing of cookies between websites," said Arthur Edelstein, Firefox Privacy and Security Senior Product Manager at Mozilla. "This principle is now enforced in Firefox Private Browsing windows by creating a separate cookie jar for every website you visit."

While constantly blocking misbehaving sites that want to track you around the web, Total Cookie Protection does make a limited exception for cross-site cookies needed for non-tracking purposes, such as the ones used by popular third-party login providers.
How Total Cookie Protection works (Mozilla)

Firefox private browsing is designed for privacy protection

According to Mozilla, while browsing the web using Firefox's private mode, your privacy will be defended using the following privacy protection technologies, all enabled by default:

  • Total Cookie Protection isolates cookies to the site where they were created
  • Supercookie protections stop supercookies from following you from site to site
  • Cookies and caches are cleared at the end of every Private Browsing session and aren't shared with normal windows
  • Trackers are blocked, including cookies, scripts, tracking pixels, and other resources from domains on Disconnect's list of known trackers
  • Many fingerprinting scripts are blocked, according to Disconnect's list of invasive fingerprinting domains
  • SmartBlock intelligently fixes up web pages that were previously broken when tracking scripts were blocked

To go into private browsing mode in Firefox, all you need to do is open the Application Menu by clicking the button (☰) on the top right and then choose "New Private Window." Those who prefer the keyboard can enable private browsing mode using Ctrl + Shift + P (or Cmd + Shift + P on macOS).

In related news, Mozilla is also rolling out Site Isolation to all Firefox channels, a security feature designed to protect users from attacks launched via malicious websites. Enabling Site Isolation is highly recommended given that it "sandboxes web pages and web frames, isolating them from each other, further strengthening Firefox security."

Source: Firefox now blocks cross-site tracking by default in private browsing
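The "separate cookie jar per website" idea above can be sketched as a store keyed by both the top-level site and the cookie's origin, so a tracker embedded on two different sites sees two unrelated cookie stores. This is a conceptual Python sketch under that assumption, not Firefox's actual implementation; the class, method, and domain names are illustrative:

```python
class PartitionedCookieJar:
    """Conceptual sketch of per-site cookie partitioning (not Firefox code).

    Cookies are keyed by (top_level_site, cookie_origin), so a tracker
    embedded on two different sites gets two isolated cookie stores.
    """

    def __init__(self):
        self._jars = {}  # (top_level_site, origin) -> {name: value}

    def set_cookie(self, top_level_site, origin, name, value):
        self._jars.setdefault((top_level_site, origin), {})[name] = value

    def get_cookies(self, top_level_site, origin):
        return dict(self._jars.get((top_level_site, origin), {}))


jar = PartitionedCookieJar()
# The same tracker sets a cookie while embedded on two different sites:
jar.set_cookie("news.example", "tracker.example", "uid", "alice-123")
jar.set_cookie("shop.example", "tracker.example", "uid", "alice-456")

# Each top-level site has its own jar, so the tracker cannot link
# the two visits through a shared identifier:
print(jar.get_cookies("news.example", "tracker.example"))  # {'uid': 'alice-123'}
print(jar.get_cookies("shop.example", "tracker.example"))  # {'uid': 'alice-456'}
```

In an unpartitioned jar the key would be the cookie's origin alone, which is exactly what lets a third-party tracker recognize the same browser across every site that embeds it.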
  3. Cross-browser tracking vulnerability tracks you via installed apps

Researchers have developed a way to track a user across different browsers on the same machine by querying the installed applications on the device.

Certain applications, when installed, will create custom URL schemes that the browser can use to launch a URL in a specific application. For example, the custom URL scheme for a Zoom web meeting is zoommtg://, which, when opened, will prompt the browser to launch the Zoom client, as shown below.

The application opened via a custom URL handler

Over a hundred different custom URL handlers configured by applications exist, including ones for Slack, Skype, Windows 10, and even Steam.

Cross-browser tracking using URL schemes

A researcher from FingerprintJS, maker of one of the most well-known fingerprinting scripts, has disclosed a vulnerability that allows a website to track a device's user between different browsers, including Chrome, Firefox, Microsoft Edge, Safari, and even Tor.

"Cross-browser anonymity is something that even a privacy conscious internet user may take for granted. Tor Browser is known to offer the ultimate in privacy protection, though due to its slow connection speed and performance issues on some websites, users may rely on less anonymous browsers for their every day surfing," explains a new vulnerability report by FingerprintJS' Konstantin Darutkin. "They may use Safari, Firefox or Chrome for some sites, and Tor for sites where they want to stay anonymous. A website exploiting the scheme flooding vulnerability could create a stable and unique identifier that can link those browsing identities together."

To perform cross-browser tracking using scheme flooding, a website builds a profile of applications installed on a device by attempting to open their known URL handlers and checking if the browser launches a prompt. If a prompt is launched to open the application, it can be assumed that the specific app is installed.
By checking for different URL handlers, a script can use the detected applications to build a unique profile for your device. As the installed applications on a device are the same regardless of the browser you are using, this could allow a script to track a user's browser usage on both Google Chrome and an anonymizing browser such as Tor.

To test this vulnerability, we visited Darutkin's demo site at schemeflood.com with Microsoft Edge, where a script launches URL handlers for a variety of applications to determine if they are installed. When completed, a unique identifier was shown that was also the same for tests using different browsers on the same PC, including Firefox, Google Chrome, and Tor.

ID generated for the device

Darutkin's scheme flooding vulnerability currently checks for the following twenty-four applications: Skype, Spotify, Zoom, vscode, Epic Games, Telegram, Discord, Slack, Steam, Battle.net, Xcode, NordVPN, Sketch, Teamviewer, Microsoft Word, WhatsApp, Postman, Adobe, Messenger, Figma, Hotspot Shield, ExpressVPN, Notion, and iTunes. It is possible that multiple users have the same combination of installed programs, leading to the same profile ID.

Existing mitigations can be bypassed

Of the four major browsers tested by Darutkin, only Google Chrome had previously added mitigations against this type of attack, by preventing multiple attempts to use URL handlers without a user gesture (interaction). However, Darutkin discovered that triggering a built-in Chrome extension, such as the Chrome PDF Viewer, bypasses this mitigation.

"The built-in Chrome PDF Viewer is an extension, so every time your browser opens a PDF file it resets the scheme flood protection flag. Opening a PDF file before opening a custom URL makes the exploit functional," explains Darutkin.

Microsoft Edge Program Manager Eric Lawrence has acknowledged the attack, and Chromium and Microsoft engineers are working on a fix in a new bug report.
Until browsers add working mitigations for this attack, the only way to prevent this method of cross-browser tracking is to use a browser on a different device. Source: Cross-browser tracking vulnerability tracks you via installed apps
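The profile-building step described above can be folded into a stable cross-browser identifier. The actual probing (opening zoommtg://, slack://, and so on and watching for the external-application prompt) has to happen in a browser; the Python sketch below assumes that step has already produced a set of detected apps and only shows how that set yields the same ID no matter which browser ran the probe. The function name and hashing choice are illustrative, not FingerprintJS's code:

```python
import hashlib

# Apps probed by the schemeflood.com demo, per the article. In a browser,
# "detected" would be filled in by whether opening each app's custom URL
# scheme triggered a launch prompt.
PROBED_APPS = [
    "skype", "spotify", "zoom", "vscode", "epicgames", "telegram",
    "discord", "slack", "steam", "battlenet", "xcode", "nordvpn",
    "sketch", "teamviewer", "word", "whatsapp", "postman", "adobe",
    "messenger", "figma", "hotspotshield", "expressvpn", "notion", "itunes",
]

def device_id(detected):
    """Fold the set of detected apps into a stable identifier.

    One bit per probed app (installed or not), hashed into a short ID.
    Because the installed apps are a property of the machine, every
    browser on it produces the same ID, which is what links browsing
    identities across Chrome, Firefox, and Tor.
    """
    bits = "".join("1" if app in detected else "0" for app in PROBED_APPS)
    return hashlib.sha256(bits.encode()).hexdigest()[:16]

# Whichever browser runs the probe, the installed apps are the same:
id_chrome = device_id({"slack", "zoom", "steam"})
id_tor = device_id({"slack", "zoom", "steam"})
print(id_chrome == id_tor)  # True
```

With 24 probed apps there are only 2^24 possible profiles, which is why, as the article notes, two users with the same set of installed programs get the same ID.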
  4. How to Tell Which Emails Quietly Track You

Your emails know more about you than you might think, like when you open them or when you forward them to others. But you can reclaim your privacy. Photograph: Yulia Reznikov/Getty Images

Everyone sends emails now: political parties, your book club, freelance journalists, the social networks you're signed up to, your parents, that online store that you only bought one item from a decade ago, and many, many more. What do a lot of those email senders have in common? They want to know whether the messages they send you are being opened, and there are a variety of tools available to help them do just that—tools that aren't all that hard to use.

A tracking pixel, embedded somewhere in the email, is how most people monitor whether an email gets opened. Once the tiny, hidden single-pixel image is loaded, it reports back to base. Their use across emails is now up to "endemic" levels, according to some experts. Tracking pixels can report the times and dates their associated email was opened, as well as the location of the device used and the email client involved. That's a lot of data to feed back to a third party that you might not know much about.

Marketers and newsletter writers would say this kind of tracking is essential to understand their audience and what they're most interested in reading about—as well as the sort of return they're getting on their advertising dollars—but from the other end, it can feel like an invasion of privacy to essentially have an eye hovering over your shoulder, making a note every time you open and read a specific email, especially if you don't know it's happening.

You might not be able to do much about the use of these tracking pixels, but you can take steps to stop them from functioning, and to see which messages include them—so you know which people and which companies are taking a particular interest in you, and you can choose who to allow and who not to.
Stopping Tracking Emails

Emails are typically tracked using the pixel method we mentioned, so the easiest way to stop this is to stop images from loading by default inside your email application of choice. Your messages might end up looking less visually appealing, but it's a trade-off worth making if you want that level of control.

  • In Gmail on the web, click the cog icon (top right), then See all settings and General: next to Images, select Ask before displaying external images.
  • In Mail on macOS, choose Mail, Preferences, Viewing and uncheck Load remote content in messages.
  • In the Outlook Mail program that comes with Windows 10, tap the cog icon at the bottom of the navigation pane, then pick Reading pane and make sure both Automatically download external images options are set to off.

You can find similar settings on your phone:

  • In Gmail for Android or iOS, tap the menu button (top left), then Settings, then your email account and Images.
  • For Mail on iOS, open the main Settings app, then choose Mail and turn off the Load Remote Images option.
  • In Outlook for Android and iOS, tap your profile picture (top left), then the cog icon, then your email account—you can then enable the Block External Images option.

Other email apps besides the ones we've mentioned will usually have similar options you can make use of. It's still possible to view images inside emails in these apps; it'll just need an extra tap or click to do it. If images aren't loaded up, then any embedded tracking pixels won't be accessed and won't report that they've been opened.

Spotting Tracking Emails

Blocking images from loading by default is quite a blunt tool for your inbox, and more precise options are available if you want even more control. The free and open-source Ugly Email has been around for several years and is one of the best countermeasures to tracking pixels: it's an extension for Chrome and Firefox that works with Gmail in your browser.
The add-on will display an eye icon next to emails in Gmail that have tracking pixels in them, before you even open the email itself—you can opt to simply steer clear of any messages with trackers in them. If you do open an email marked with an eye icon, Ugly Email will do its best to block the tracking pixel functionality.

Trocker is another impressive free extension for Chrome and Firefox, and as well as working with Gmail, it's also compatible with Yahoo and Outlook webmail. Like Ugly Email, Trocker blocks tracking pixels, and emails that contain them are marked with a small T icon in the subject header when you open the message. You can also identify tracked links inside messages and block them if necessary.

Other tools, such as the freemium Mailtrack (another Gmail add-on), can both add trackers to emails and spot emails with trackers in them, so they work both sides of the table. You might find you prefer seeing email read receipts like the ones that appear in many instant messaging apps, but bear in mind that the users at the other end of the email chain will often be unaware that this is happening.

Source: How to Tell Which Emails Quietly Track You
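The pixel mechanism described above is essentially a 1x1 image whose URL carries a per-recipient identifier, so the fetch itself is the "read receipt". A minimal sketch of how such a pixel is built; the domain, parameter names, and function are illustrative, not any real tracker's API:

```python
from urllib.parse import urlencode

def tracking_pixel(campaign, recipient):
    """Build the HTML for a 1x1 tracking pixel (illustrative domain).

    The <img> URL embeds a per-recipient identifier; when the mail client
    fetches the image, the sender's server can log who opened the message,
    when, from roughly where, and with which client.
    """
    query = urlencode({"c": campaign, "r": recipient})
    return (
        f'<img src="https://track.example/open.gif?{query}" '
        'width="1" height="1" alt="">'
    )

html = tracking_pixel("newsletter-42", "alice@example.com")
print(html)
# Blocking remote images stops this fetch entirely, so the ?c=...&r=...
# beacon never reaches the tracker's server - which is why the image
# settings above defeat the technique.
```

This also shows why the blocking is all-or-nothing: the browser or mail client cannot tell from the markup alone whether a remote image is a banner or a beacon, which is what extensions like Ugly Email and Trocker try to classify.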
  5. New browser-tracking hack works even when you flush caches or go incognito

At least 4 top browsers affected by "powerful tracking vector," researchers say. Getty Images

The prospect of Web users being tracked by the sites they visit has prompted several countermeasures over the years, including using Privacy Badger or an alternate anti-tracking extension, enabling private or incognito browsing sessions, or clearing cookies. Now, websites have a new way to defeat all three.

The technique leverages the use of favicons, the tiny icons that websites display in users' browser tabs and bookmark lists. Researchers from the University of Illinois, Chicago said in a new paper that most browsers cache the images in a location that's separate from the ones used to store site data, browsing history, and cookies. Websites can abuse this arrangement by loading a series of favicons on visitors' browsers that uniquely identify them over an extended period of time.

Powerful tracking vector

"Overall, while favicons have long been considered a simple decorative resource supported by browsers to facilitate websites' branding, our research demonstrates that they introduce a powerful tracking vector that poses a significant privacy threat to users," the researchers wrote. They continued:

The attack workflow can be easily implemented by any website, without the need for user interaction or consent, and works even when popular anti-tracking extensions are deployed. To make matters worse, the idiosyncratic caching behavior of modern browsers, lends a particularly egregious property to our attack as resources in the favicon cache are used even when browsing in incognito mode due to improper isolation practices in all major browsers.

The attack works against Chrome, Safari, Edge, and until recently Brave, which developed an effective countermeasure after receiving a private report from the researchers.
Firefox would also be susceptible to the technique, but a bug prevents the attack from working at the moment.

Favicons provide users with a small icon that can be unique for each domain or subdomain on the Internet. Websites use them to help users more easily identify the pages that are currently open in browser tabs or are stored in lists of bookmarks. Browsers save the icons in a cache so they don't have to request them over and over. This cache isn't emptied when users clear their browser cache or cookies, or when they switch to a private browsing mode. A website can exploit this behavior by storing a specific combination of favicons when users first visit it, and then checking for those images when users revisit the site, thus allowing the website to identify the browser even when users have taken active measures to prevent tracking.

Browser tracking has been a concern since the advent of the World Wide Web in the 1990s. Once it became easy for users to clear browser cookies, websites devised other ways to identify visitors' browsers. One of those methods is known as device fingerprinting, a process that collects the screen size, list of available fonts, software versions, and other properties of the visitor's computer to create a profile that is often unique to that machine. A 2013 study found that 1.5 percent of the world's most popular sites employed the technique. Device fingerprinting can work even when people use multiple browsers. In response, some browsers have attempted to curb the tracking by blocking fingerprinting scripts.

Two seconds is all it takes

Websites can exploit the new favicon side channel by sending visitors through a series of subdomains—each with its own favicon—before delivering them to the page they requested. The number of redirections required varies depending on the number of unique visitors a site has.
To be able to track 4.5 billion unique browsers, a website would need 32 redirections, since each redirection translates to 1 bit of entropy. That would add about 2 seconds to the time it takes for the final page to load. With tweaks, websites can reduce the delay. The paper explains it this way:

By leveraging all these properties, we demonstrate a novel persistent tracking mechanism that allows websites to reidentify users across visits even if they are in incognito mode or have cleared client-side browser data. Specifically, websites can create and store a unique browser identifier through a unique combination of entries in the favicon cache. To be more precise, this tracking can be easily performed by any website by redirecting the user accordingly through a series of subdomains. These subdomains serve different favicons and, thus, create their own entries in the Favicon-Cache. Accordingly, a set of N-subdomains can be used to create an N-bit identifier, that is unique for each browser. Since the attacker controls the website, they can force the browser to visit subdomains without any user interaction. In essence, the presence of the favicon for subdomain i in the cache corresponds to a value of 1 for the i-th bit of the identifier, while the absence denotes a value of 0.

The researchers behind the findings are Konstantinos Solomos, John Kristoff, Chris Kanich, and Jason Polakis, all of the University of Illinois, Chicago. They will be presenting their research next week at the NDSS Symposium.

A Google spokesman said the company is aware of the research and is working on a fix. An Apple representative, meanwhile, said the company is looking into the findings. Ars also contacted Microsoft and Brave, and neither had an immediate comment for this post. As noted above, the researchers said Brave has introduced a countermeasure that prevents the technique from being effective, and other browser makers said they were working on fixes.
Until fixes are available, people who want to protect themselves should investigate the effectiveness of disabling the use of favicons. Searches here, here, and here list steps for Chrome, Safari, and Edge respectively. New browser-tracking hack works even when you flush caches or go incognito
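The paper's N-bit scheme quoted above can be sketched directly: on a first visit the site serves favicons only on the subdomains matching the 1-bits of a fresh identifier, and on a return visit it reconstructs the identifier from which favicons the browser already has cached (and therefore does not re-request). A conceptual Python sketch of just the encoding, under those assumptions; the function names are illustrative, not the researchers' code:

```python
def assign_identifier(n_bits, new_id):
    """First visit: return the subdomain indices that should serve a
    favicon, writing the identifier's 1-bits into the favicon cache."""
    return [i for i in range(n_bits) if (new_id >> i) & 1]

def read_identifier(cached_subdomains, n_bits):
    """Return visit: rebuild the identifier from which subdomains'
    favicons were already cached (presence = 1, absence = 0)."""
    ident = 0
    for i in cached_subdomains:
        if i < n_bits:
            ident |= 1 << i
    return ident

# 32 subdomains give a 32-bit identifier: 2**32 ~ 4.3 billion distinct
# browsers, at the cost of one redirection per bit, matching the
# article's "32 redirections for 4.5 billion browsers" figure.
N = 32
visitor_id = 2_857_411_533
ones = assign_identifier(N, visitor_id)
recovered = read_identifier(ones, N)
print(recovered == visitor_id)  # True
```

The cost the article cites follows from this design: each bit needs its own subdomain visit, so identifier length trades directly against the roughly 2-second redirect delay.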
  6. iOS App Tracking Transparency will roll out in the spring

Last year, Apple announced that it would be giving users more control over how apps track their activities across apps and websites on iOS, iPadOS, and tvOS. It's taken some time, but the company has provided a clearer timeline for the rollout of the feature, which will be available in the next beta update for the operating systems.

Called App Tracking Transparency, this new feature means that iOS and iPadOS apps will have to explicitly ask for consent from users in order to track their activity on apps and websites owned by other companies. If the user decides to not allow the app to track their activities, the app won't be given access to the system advertising identifier (IDFA), and Apple will also require apps to respect the user choice beyond that. If apps are found to be tracking the user after they asked not to be tracked, Apple may remove them from the App Store. Additionally, apps are obligated to provide users full access to their features with or without tracking permission.

Facebook heavily criticized Apple's move, stating that it will hurt part of its business as well as publishers' ability to grow through targeted advertising. In a later statement, the social network said Apple's "discouraging prompt" will hurt small businesses the most, taking away as much as 60% of the revenue they could make thanks to targeted ads. However, that doesn't seem to have deterred Apple. The move has also been criticized for potentially giving Apple preferential access to its advertising network, whereas other companies need specific permissions.

After it's available in beta, Apple plans to roll out App Tracking Transparency as part of an update to iOS 14 in the early spring. The announcement comes alongside the release of a report called "A Day in the Life of Your Data", where Apple talks about how its products and services help users maintain their privacy throughout the day.
It's worth keeping in mind that some advertising platforms give users at least some control over how ads are personalized. Facebook already provides an option to turn off off-Facebook activity as well as other controls, and Google similarly allows users to turn off ad personalization. iOS App Tracking Transparency will roll out in the spring
  7. Microsoft’s Obscure ‘Self Service for Mobile’ Office Activation

Microsoft requires product activation after installation. Users of Microsoft Office are currently facing trouble during telephone activation. After dealing with this issue, I came across another obscure behavior: Microsoft’s ‘Self Service for Mobile’ solution to activate Microsoft Office via mobile devices.

Microsoft describes how to activate Microsoft Office 2013, 2016 and Office 365 within this document. There are several ways to activate an installed product, for instance via Internet or via telephone. Activation by phone is required if the maximum Internet activation threshold is reached.

But Office activation by phone fails

Within my blog post "Office Telephone activation is no longer supported error" I’ve addressed the basic issue. If a user re-installs Office, the phone activation fails. The activation dialog box shows the message “Telephone activation is no longer supported for your product“. Microsoft has confirmed this issue for Office 2016 users with a non-subscriber installation. But users of Microsoft Office 2010 or Microsoft Office 2013 are affected as well.

A blog reader posted a tip: use mobile devices activation…

I’ve posted an article "Office 2010: Telefonaktivierung eingestellt? – Merkwürdigkeit II" (Office 2010: telephone activation discontinued? – Oddity II) about the Office 2010 telephone activation issue within my German blog, back in January 2017. Then a reader pointed me within a comment to a "Self Service for Mobile" website. The link http: // bit.ly/2cQPMCb, shortened by bit.ly, points to a website https: // microsoft.gointeract.io/mobileweb/… that provides the ability to activate Microsoft Office (see screenshot below). After selecting a 6 or 7 digit entry, an activation window with numerical buttons to enter the installation ID will be shown (see screenshots below). The user enters the installation ID and receives the activation ID – plain and simple.
Some users commented within my German blog that this feature works like a charm.

Obscurity, conspiracy, oh my God, what have they done?

I didn’t inspect the posted link until writing last Friday’s blog post "Office Telephone activation is no longer supported error". My idea was to mention the "Self Service for Mobile" page within the new article. I managed to alter the link to point to the English-language "Self Service for Mobile" site. Suddenly I noticed that both the German and the English "Self Service for Mobile" sites use https, but are flagged as "not secure" in Google Chrome (see the screenshot below, showing the German edition of the web page). The popup shown for the "Self Service for Mobile" site says that there is mixed content (images) on the page, so it’s not secure. That caught my attention, and I started to investigate the details.

Below are the details for the German version of the website as shown in Google Chrome (the English website has the same issues). First of all, I noticed that the "Self Service for Mobile" site doesn’t belong to a microsoft.com domain – in my view a must for a Microsoft activation page. Inspecting the details, I found that the site contains mixed content (an image within the site was delivered via http). The content of the site was also delivered by Cloudflare (I’ve never seen that for Microsoft websites before). The image flagged in the mixed-content issue was the Microsoft logo, shown within the site’s header, transferred via http.

The certificate was issued by Go Daddy (a US company) and expires in March 2017. I’ve never heard that Go Daddy belongs to Microsoft. I came across Go Daddy while analyzing a phishing campaign months ago. A compromised server, used as a relay by a phishing campaign, was hosted (according to Whois records) by Go Daddy. But my takedown notice sent to Go Daddy has never been answered.
That set all the alarm bells ringing in my head, because it’s typical behavior for phishing sites. My further findings didn’t calm those alarm bells either. The subdomain "microsoft" used above doesn’t belong to a Microsoft domain; it points to the domain gointeract.io. Trying to obtain details about the owner of gointeract.io via WhoIs ended with the following record:

Domain : gointeract.io
Status : Live
Expiry : 2021-03-14
NS 1 : ns-887.awsdns-46.net
NS 2 : ns-1211.awsdns-23.org
NS 3 : ns-127.awsdns-15.com
NS 4 : ns-1980.awsdns-55.co.uk
Owner OrgName : Jacada

Check for 'gointeract.sh' --- http://www.nic.sh/go/whois/gointeract.sh
Check for 'gointeract.ac' --- http://www.nic.ac/go/whois/gointeract.ac

Pretty short, isn’t it? No admin contact, no contact person, and Microsoft isn’t mentioned at all, but the domain has been registered until 2021. The owner OrgName Jacada was unknown to me, and searching the web didn’t give me more insights at first. Overall, the whole site looked obscure to me. The tiny text shown within the browser’s lower left corner was a hyperlink. The German edition of the "Self Service for Mobile" site opens a French Microsoft site – the English site opens an English Microsoft site.

My first conclusion was: hell, I was tricked by a phishing comment – somebody set up this site to grab installation IDs of Office users. So I deactivated the link within the comment and posted a warning within my German blog post not to use this "Self Service for Mobile" site. I also tried to contact the user who had posted the comment via e-mail.

… but “Microsoft” provides these links …

User JaDz responded immediately in an additional comment and wrote that the link shortened via bit.ly had been sent by Microsoft via SMS – after he tried the telephone activation and selected the option to activate via a mobile device. I hadn’t noticed that before – so my conclusion was: hell, this obscure "Self Service for Mobile" site is indeed related to Microsoft.
Then I started another web search, this time with the keywords Jacada and Microsoft. Google showed several hits pointing to the site jacada.com (see screenshot below). It seems that Jacada is a kind of service provider for several customers. I wasn’t able to find Microsoft among the customer references, but I know that Microsoft uses external services for some activities. Now I suppose that somebody from Jacada set up the "Self Service for Mobile" activation site. The Ajax code used is obviously able to communicate with Microsoft’s activation servers and obtain an activation ID. And Microsoft’s activation mechanism provides an option to send the bit.ly link via SMS.

Closing words: security by obscurity?

At this point I was left really puzzled. We are not talking about a startup located in a garage. We are dealing with Microsoft, a multi-billion-dollar company that claims to run highly secure and trustworthy cloud infrastructures worldwide. But what’s left after we wipe off the marketing stuff? The Office activation via telephone is broken (Microsoft confirmed that, after it was reported by customers!). A customer who needs to activate a legally owned, but re-installed, Microsoft Office is facing a nasty situation. Telephone activation is refused; the customer is (wrongly) notified that this option is no longer supported. Internet activation is refused due to "too many online activations" – well done.

But we are not finished yet. They set up a "Self Service for Mobile" activation site in a way that is frequently used by phishers. They send links via SMS to this site, requesting the user to enter sensitive data like installation IDs. A site that serves mixed content over https, and that displays an activation ID. In my eyes, a security nightmare. But maybe I’ve overlooked or misinterpreted something. If you have more insights or an idea, or if my assumptions are wrong, feel free to drop a comment.
I will try to reach out and ask Microsoft for a comment about this issue.

Article in German | Source | Alternate Source | Further reading – AskWoody: Born: Office activation site controlled by a non-Microsoft company
  8. The Best Chrome Extensions to Prevent Creepy Web Tracking

Ad trackers follow you everywhere online—but it doesn’t have to be that way. Illustration: Elena Lacey

Almost every site you visit tracks you to try to link your browsing history to your interests and, in turn, show you targeted adverts. One minute you’re searching for a new desk, the next adverts for that furniture website you looked at are stalking you around the web.

If you want to get a sense of how you’re being tracked across the web, take a little time to read (and disable) the cookie options that appear each time you visit a new site. They’ll give you a sense of what technology each website is using to track its visitors and the information they gather. (The consent preferences on WIRED are controlled by parent company Condé Nast.)

With the Do Not Track setting in browsers having become all but useless, web browsers are now flexing their privacy credentials. Apple’s Safari browser has boosted its anti-tracking tech, and Firefox has blocked trackers by default since 2018. Google Chrome is also planning to get rid of third-party cookies. However, this won’t happen until 2022, and there are still significant questions about how the change will be implemented.

In the meantime, your best bet for stopping creepy web tracking is to move to a privacy-first web browser. For some people this step might be too radical or impractical at the moment. If that’s you, there’s another way to push back against online tracking: bring the blocking tech to your current browser. These browser extensions are a simple first step in improving your online privacy.

Privacy Badger

Privacy Badger is one of the best options for blocking online tracking in your current browser. For a start, it’s created by the Electronic Frontier Foundation, a US-based non-profit digital rights group that’s been fighting online privacy battles since 1990. It’s also free.
Privacy Badger tracks all the elements of web pages you visit—including plugins and ads placed by external companies. If it sees these appearing across multiple sites you visit, the extension tells your browser not to load any more of that content. The organization says it doesn’t keep lists of what to block but discovers trackers as you browse the web, becoming more effective over time. Like many of the extensions in this list, the tool will also show you which trackers are blocked on each site you visit, including how many trackers and what type each website is using. You can disable the tool for individual sites and change settings if you don’t mind being tracked by some sites. There are Privacy Badger extensions for Chrome, Firefox (desktop and Android), Microsoft Edge and Opera. DuckDuckGo DuckDuckGo is best known for its anonymous search engine that doesn’t collect people’s data. But in recent years the firm has also been moving more into privacy controls for the whole web (including introducing its own mobile browser). DuckDuckGo also makes an extension for Chrome. The Privacy Essentials extension blocks hidden third-party trackers, showing you which advertising networks are following you around the web over time. The tool also highlights how websites collect data through a partnership with Terms of Service; Didn’t Read and includes scores for sites’ privacy policies. It also adds its non-tracking search to Chrome. Ghostery This New York-based firm—which has five different tools—is owned by the same company that was behind Cliqz, a privacy-focussed browser and search engine that was shuttered earlier this year due to the impact of Covid-19. But Ghostery lives on. Its browser extension blocks trackers and shows lists of which ones are blocked for each site (including those that are slow to load), allows trusted and restricted sites to be set up, and also lets you block ads. 
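The learn-as-you-browse approach described above can be sketched in a few lines. Everything here is a hypothetical illustration rather than Privacy Badger's actual code: the class name, the threshold of three sites, and the example domains are all assumptions made for the sketch.

```python
from collections import defaultdict

# Hypothetical sketch of heuristic tracker blocking: a third-party domain
# observed loading tracking content on enough distinct first-party sites
# gets blocked from then on. The threshold and names are illustrative only.
class HeuristicBlocker:
    def __init__(self, threshold=3):
        self.threshold = threshold        # distinct sites seen before blocking
        self.seen_on = defaultdict(set)   # tracker domain -> first-party sites

    def observe(self, first_party, third_party):
        """Record that third_party loaded tracking content on first_party."""
        if first_party != third_party:
            self.seen_on[third_party].add(first_party)

    def is_blocked(self, third_party):
        return len(self.seen_on[third_party]) >= self.threshold

blocker = HeuristicBlocker()
for site in ("news.example", "shop.example", "blog.example"):
    blocker.observe(site, "ads.tracker.example")
print(blocker.is_blocked("ads.tracker.example"))  # tracker crossed the threshold
```

Discovering trackers this way, instead of shipping a fixed blocklist, is why such an extension becomes more effective the longer you use it.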
The main Ghostery extension is free but there’s also a paid-for $49-per-month subscription that provides detailed breakdowns of all trackers and can be used for analysis or research. There are Ghostery extensions for Chrome, Firefox, Microsoft Edge and Opera. Adblock Plus Unlike other tools here, Adblock Plus is primarily marketed as an ad-blocking tool—the others don’t necessarily block ads by default but aim to be privacy tools that may limit the most intrusive types of ads. Using an ad blocker comes with a different set of ethical considerations from tools that are designed to stop overly intrusive web tracking; ad blockers will block a much wider set of items on a webpage, and this can include ads that don’t follow people around the web. Adblock Plus is signed up to the Acceptable Ads project, which shows non-intrusive ads by default (although this can be turned off). On the privacy front, Adblock Plus’s free extensions block third-party trackers and allow social media sharing buttons that send information back to their owners to be disabled. There are Adblock Plus extensions for Chrome, Firefox, Edge, Safari, Opera and Yandex. Browse Differently Adding a privacy-enabling browser extension will definitely help your online privacy, but it’s not the only step that you can take. Moving to a privacy-first browser may be the next obvious step once you’ve tried out the experience with an extension. Extensions, for Chrome at least, are limited to the desktop browser, so they won’t do anything to stop you being tracked on mobile. For that you’ll need to pick a new mobile browser. For added protection you may also want to start using a VPN to mask your web activity further. This story originally appeared on WIRED UK. The Best Chrome Extensions to Prevent Creepy Web Tracking
  9. Disclosed tracking, helicopter parenting programs are still kosher In an update to its Android Developer Program Policy, Google on Wednesday said stalkerware apps in its app store can no longer be used to stalk non-consenting adults. Stalkerware, which the web giant defines as "code that transmits personal information off the device without adequate notice or consent and doesn't display a persistent notification that this is happening," may still be used for keeping track of one's kids. But starting October 1, 2020, the ad biz says it's no longer acceptable for Android apps in the Google Play Store to track another person, such as a spouse, without permission, unless there's a persistent visible notification that data is being transmitted. The ban follows a similar prohibition in August on Google-served ads for "spyware and technology used for intimate partner surveillance," which reportedly hasn't worked very well. In recent years, computer security experts have argued that the privacy and security risks in intimate relationships haven't been adequately anticipated or addressed. But rules against invasive behavior aren't necessarily effective. Via Twitter, Michael Veale, a lecturer at University College London, observed that a 2018 research paper "found that 'abusers frequently exploit dual-use applications—tools whose main purpose is legitimate but that can be easily repurposed to function as spyware,' so banning explicit stalkerware is of questionable efficacy." Google will continue to allow non-stalkerware apps (i.e. policy-compliant apps) to monitor and track people, provided the programs are not marketed as surveillance apps, they disclose any such functions, and they present the requisite persistent notification and icon. Monitoring apps of the permissible sort continue to be subject to removal for violating applicable laws in the locations where they're published, and may not link to resources (e.g. 
servers, SDKs) that provide policy-violating functions or non-compliant APKs hosted outside the Google Play Store. Google's developer policy update also includes a ban on misrepresentation, both for apps and developer accounts. Apps or accounts that impersonate a person or organization, attempt to conceal the app's purpose or ownership, or engage in coordinated misleading activity are no longer allowed. Online gambling apps have been disallowed too, except in Brazil (with government approval), France, Ireland, and the UK. Source
  10. (Reuters) - Walmart Inc said on Monday it has dropped its plans to use roving robots to scan shelves and keep track of inventory, quashing the world’s largest retailer’s years-long effort to automate the task. A robot rolled through aisles at a Natrona Heights, Pa., Walmart. The company has worked with Bossa Nova Robotics Inc for five years to improve customer experience by using robots to assist shop staff and make their jobs easier. “This was one idea we tried in roughly 500 stores just as we are trying other ideas in additional stores,” Walmart said in a statement. Walmart added that it would continue to test new technologies and invest in its own processes and apps to track inventory and help move products to shelves as quickly as possible. The company now has more workers walking the aisles to collect online orders, extracting new data on inventory problems, people familiar with the situation told the Wall Street Journal, which was first to report this news. The supermarket chain began rolling out shelf-scanning robots in 2017 to replenish inventory faster and save employees time when products run out, as part of its broader effort to digitize its stores to make shopping faster as it competes with Amazon.com Inc. Source
  11. The IRS used smartphone location data from a contractor to try to track Americans without a warrant. The body tasked with oversight of the IRS announced in a letter that it will investigate the agency's use of location data harvested from ordinary apps installed on people's phones, according to a copy of the letter obtained by Motherboard. The move comes after Senators Ron Wyden and Elizabeth Warren demanded a formal investigation into how the IRS used the location data to track Americans without a warrant. "We are going to conduct a review of this matter, and we are in the process of contacting the CI [Criminal Investigation] division about this review," the letter, signed by J. Russell George, the Inspector General, and addressed to the Senators, reads. CI has a broad mandate to investigate abusive tax schemes, bankruptcy fraud, identity theft, and many more similar crimes. Wyden’s office provided Motherboard with a copy of the letter on Tuesday. In June, officials from the IRS Criminal Investigation unit told Wyden's office that it had purchased location data from a contractor called Venntel, and that the IRS had tried to use it to identify individual criminal suspects. Venntel obtains location data from innocuous-looking apps such as games, weather, or e-commerce apps, and then sells access to the data to government clients. A Wyden aide previously told Motherboard that the IRS wanted to find phones, track where they were at night, use that as a proxy for where the individual lived, and then use other data sources to try to identify the person. A person who used to work for Venntel previously told Motherboard that Venntel customers can use the tool to see which devices are in a particular house, for instance. The IRS' attempts were not successful though, as the people the IRS was looking for weren't included in the particular Venntel data set, the aide added. 
But the IRS still obtained this data without a warrant, and the legal justification for doing so remains unclear. The aide said that the IRS received verbal approval to use the data, but stopped responding to their office's inquiries. The Inspector General specifically said they would investigate the legal argument used. "You requested that TIGTA investigate CI's use of commercial databases in the performance of its duties, and that TIGTA examine the legal analysis IRS lawyers performed to authorize this practice. Your concern is that CI's use of the data described above may not be consistent with the holding of the Supreme Court in the case Carpenter v. United States," the Inspector General's letter continues, referring to the Treasury Inspector General for Tax Administration. "Upon compilation, to the extent allowable under the law, we will advise you of the results," the letter adds. The IRS did not immediately respond to a request for comment. On Tuesday, Motherboard reported that U.S. Customs and Border Protection had paid for access to Venntel's "global" dataset, meaning the agency could track phones beyond U.S. borders. Source
  12. How to enable Redirect Tracking Protection in Firefox Mozilla released Firefox 79.0 to the stable channel recently, and one of the main changes of that release improved the browser's tracking protection feature. Enhanced Tracking Protection 2.0 introduced support for preventing an advanced tracking technique called redirect tracking. Redirect tracking is used to bypass a browser's mechanisms for blocking online tracking. While browsers may block third-party cookies, redirect tracking inserts the tracker's site into the navigation itself so that it is loaded in a first-party context. So, instead of visiting Site B from Site A right away, you would be taken to Site T as well (Site A > Site T > Site B), with Site T being the tracker site. Site T would just load briefly and then redirect to the actual target. Mozilla notes on its developer site: Redirect trackers work by forcing you to make an imperceptible and momentary stopover to their website as part of that journey. So instead of navigating directly from the review website to the retailer, you’ll end up navigating to the redirect tracker first rather than to the retailer. This means that the tracker is loaded as a first party. The redirect tracker associates tracking data with the identifiers they have stored in their first-party cookies and then forwards you to the retailer. Firefox's redirect tracking protection clears cookies and site data from trackers regularly, provided that the preference network.cookie.cookieBehavior is set to the value 4 or 5. You can check the value of the preference by loading about:config in the browser's address bar and searching for the preference. Mozilla will introduce support for the values 1 and 3 in Firefox 80. Firefox users may configure the browser's tracking protection feature on about:preferences#privacy. 
Firefox will clear the following data associated with the tracking attempt:

- Network cache and image cache
- Cookies
- AppCache
- DOM Quota Storage (localStorage, IndexedDB, ServiceWorkers, DOM Cache, etc.)
- DOM Push notifications
- Reporting API reports
- Security settings (i.e. HSTS)
- EME Media Plugin Data
- Plugin data (e.g. Flash)
- Media devices
- Storage Access permissions granted to the origin
- HTTP Authentication tokens
- HTTP Authentication cache

Origins will only be cleared if they meet the following conditions:

- The origin stored or accessed site storage within the last 72 hours.
- The origin is classified as a tracker by Mozilla's Tracking Protection list.
- No origin with the same base domain has a user-interaction permission. Permissions are granted for 45 days if a user interacts with the top-level document, e.g. by scrolling.

Data is cleared when the user has been idle for 1 minute (>48 hours after the last purge) or 3 minutes (24-48 hours after the last purge).

Manage Redirect Tracking Protection in Firefox

Redirect tracking protection is being rolled out to all Firefox users over the next two weeks. The feature is controlled by a preference that Firefox users may set right away to enable the protection.

Enable Redirect Tracking Protection in Firefox:

1. Load about:config in the browser's address bar.
2. Search for privacy.purge_trackers.enabled.
3. Set the preference to TRUE to enable it, or FALSE to disable it.
4. Search for network.cookie.cookieBehavior.
5. Make sure it is set to 4 or 5 in Firefox 79 (1, 3, 4 or 5 in Firefox 80).
6. Restart the web browser.

Check out the post on Mozilla's developer site for additional information. How to enable Redirect Tracking Protection in Firefox
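For users who manage their Firefox configuration in a user.js file, the same about:config preferences can be set persistently. This is a minimal sketch using the Firefox 79 values described above; adjust cookieBehavior to taste (4 or 5 in Firefox 79).

```js
// user.js (placed in the Firefox profile folder) — sketch of the two preferences
user_pref("privacy.purge_trackers.enabled", true); // turn on redirect tracking protection
user_pref("network.cookie.cookieBehavior", 5);     // must be 4 or 5 in Firefox 79
```

Firefox reads user.js at startup and copies these values into prefs.js, so a restart is still required, just as with the about:config steps.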
  13. How to prevent being tracked while reading your Gmail Keep from being followed online Illustration by Alex Castro / The Verge If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement. All of those obnoxious marketing emails that crowd your inbox aren’t just pushing a product. They’re also tracking whether you’ve opened the email, when you opened it, and where you were at the time by embedding tracking software into the message. Just type “email tracking” into your search engine and watch all the software apps appear. There are a variety of methods used to track emails. For example, one of the simplest is a redirect link. Let’s say you click a link in a promotional email that leads to the page for a product you want to buy. The link has been coded to be trackable; it will go to another server with a variety of data, like what browser you are using or where you clicked the link from, before it takes you to the product page. But while it’s fairly easy to spot a redirect link (for one thing, you can often spot all the additional code added to the URL), there are other methods that aren’t quite so obvious. The method that we’re looking at here is tracking pixels. How does it work? A single tracking pixel is embedded into the email, usually (but not always) hidden within an image or a link. When the email is opened, code within the pixel sends the info back to the company’s server. There have been some attempts to restrict the amount of information that can be transmitted this way. For example, since 2014, Google has served all images through its own proxy servers, which could hide your location from at least some tracking applications. Extensions such as Ugly Mail and PixelBlock have been developed to block trackers on Chrome and Firefox. And there are alternative browsers that emphasize privacy such as Brave and the Tor Browser. 
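As a sketch of the sender's side of the pixel mechanism described above: each recipient gets a unique pixel URL, so a request for that URL tells the sender exactly who opened the message and when. The tracker domain, function name, and use of a UUID as the token are all illustrative assumptions, not any real tracking product's design.

```python
import uuid

# Hypothetical sketch of how a sender mints a per-recipient tracking pixel URL.
# The domain and URL format are made up for illustration.
def make_pixel_url(recipient_email, base="https://tracker.example.com/pixel.gif"):
    # uuid5 derives a stable, opaque token from the address, so the same
    # recipient always maps to the same pixel URL across mailings.
    token = uuid.uuid5(uuid.NAMESPACE_URL, recipient_email)
    return f"{base}?id={token}"

# The URL is then embedded in the email body as an invisible 1x1 image, e.g.
#   <img src="{pixel_url}" width="1" height="1" alt="">
# When the mail client fetches that image, the sender's server logs the open,
# along with the requester's IP address and user agent.
print(make_pixel_url("alice@example.com"))
```

This is also why blocking automatic image loading (covered next) defeats most of these trackers: if the client never fetches the image, the server never sees the request.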
There is also a simple step you can take to avoid most trackers: stop your email from automatically loading images, since images are where the majority of these pixels hide. You won’t be able to avoid all of the trackers that might be hidden in your email this way, but you will stop many of them.

Disable image autoloading on a computer

1. Click on the gear icon in the upper right corner.
2. Click on “See all settings.”
3. In the “General” tab (the first one), scroll down to “Images.”
4. Select “Ask before displaying external images.”
5. Scroll down to the bottom of the page and click on “Save Changes.”

Note that this will also turn off Gmail’s dynamic email feature, which makes email messages more interactive. Turn image autoloading off in Gmail’s settings.

Disable image autoloading on a mobile device

1. In the Gmail app, select the three-line icon in the upper left corner.
2. Scroll down and select “Settings.”
3. Select the email account you want to fix.
4. Scroll down and select “Images.”
5. Select “Ask before displaying external images (also disables dynamic email).”

In settings for your mobile Gmail account, scroll down to “Images.” You can now disable the autoloading of images. How to prevent being tracked while reading your Gmail
  14. All the Ways Slack Tracks You—and How to Stop It From changing privacy settings to putting limits on those infuriating notifications, here’s how to take control of Slack. Illustration: Elena Lacey The global remote work experiment shows no sign of ending anytime soon. As Europe struggles to contain a deadly second wave of Covid-19, many forward-looking companies have confirmed that their employees will largely be working from home for at least the first quarter of 2021. That means that Slack, Microsoft Teams, and Zoom will continue to dominate the lives of office workers. As you settle down for a long, cold winter of trying to ignore Slack, it’s important to get things in order. From changing privacy settings to putting some limits on those infuriating notifications, here’s how to get some control over Slack. Slack’s Data Collection Slack’s business model is very different from the tracking- and advertising-heavy setups of Google and Facebook. Slack makes money by selling premium-tier subscriptions, though there are also free accounts that have limits placed upon them. But that doesn’t mean Slack doesn’t collect a lot of your data; everything it does collect is listed in its privacy policy. The data Slack gathers will either be information that’s voluntarily given to it (names, emails, messages, and more) or information that’s automatically generated through just using Slack. Slack collects information about when and how you use its platform. This can be the device and operating system you’re logging in from, your IP address, the webpage you visited before opening Slack in your browser, the types of files you share, and more. Slack may also get some information about you based on what third-party services—for example, Google Drive or Calendar—you connect to it. 
However, what information is shared depends on each service's privacy policy. What Your Boss Can and Can’t See Part of Slack’s success comes from it being a useful store of information—quick updates and messages, which can later be searched and revisited, don’t have to clog up an email inbox anymore. Key to this is Slack not deleting anything. Ever. “By default, Slack never deletes your messages or files, so they’re always accessible,” the company says. (This also applies to free accounts, but only a certain number of messages are searchable.) Control of the messages you send is given to the workspace owner—in most cases this will be your employer. Slack says that it has strict measures, tools, and audits in place to stop employees from accessing messages and other company information. Workspace owners can decide how long messages are stored for and set limits on what information can and can’t be accessed by your employer in the future. There’s the option for a workspace admin to select messages and files to be automatically deleted after a set amount of time. This can be customized for each channel or direct message. It’s possible to see your company’s Slack data-retention settings by heading to [Slack channel name].slack.com/account/workspace-settings#retention. There are also controls on who can access messages within a company. Slack says conversation data can be exported in some circumstances—such as conducting investigations into harassment, following court orders for disclosure, requests under GDPR’s subject access rights, or for audits. How data is extracted can depend on the type of Slack subscription a company has (if any). It’s possible to check what your company’s settings are for data exports at the bottom of the page on the URL above. Typically it’s possible for workspace owners to export messages and files that are shared in public channels. These are exported as text logs, not in the typical Slack user interface. 
It gets harder for someone to export your messages and files once they’re in private channels or DMs. In these instances a workspace owner must contact Slack and apply for permission to export the data. Slack can grant an owner permission to use a “self-serve data export tool.” “Each org owner must ensure that (a) appropriate employment agreements and corporate policies have been implemented, and (b) all use of business exports is permitted under applicable law,” the company says in its guidance on exports. It’s possible for anyone on Slack to see how much a person uses it. The analytics page [Slack channel name].slack.com/stats shows how many messages have been sent on a workspace and the most popular public channels. It is also possible to search for individual people and see how many messages they’ve sent in the last 30 days and for how many of those days they’ve been active. Boost Security Slack has a pretty good record when it comes to data breaches. In March 2015, when the company had just half a million daily users, it was hacked, and some user data was compromised. Since then, no other hacks have been reported. The biggest risk for many people using Slack is bad passwords and a lack of two-factor authentication. By using common, repeated passwords, accounts are vulnerable when passwords from previous data breaches are used. Failing to use a password manager could put an entire company at risk. Turning on Slack’s two-factor authentication will also help to protect accounts. While good password management and two-factor can help, it’s also worth considering what you’re sharing on Slack. Operational security is key—you should think about what you’re sharing and with whom. When you add a new person to a Slack channel, they can see past messages and files, including any gossip about them. You should also consider whether Slack is the appropriate platform for the messages you’re sending. 
If you wouldn’t say something out loud at a meeting with a colleague, you probably shouldn’t be sharing it on Slack. Even private channels and DMs could eventually have their messages revealed during a legal case or other similar types of investigation. If you really need to send that bitchy message during these times of remote working, it may be worth considering a different, encrypted platform. Or really, just not saying it at all. Fix Notifications Finally, if you’re spending time going over Slack’s privacy settings, you should really sort out your notifications. You’re probably already getting more Slack notifications than you need—especially if you’ve still, for some reason, got email notifications turned on. In the app’s settings and preferences menu, you can alter what you get notifications about: all new messages, direct messages, mentions, and keywords are the options. These can also be adapted for mobile, so you only get the notifications that matter when you’re on your phone. You can also set periods where notifications aren’t sent. In the settings menu you can set up the option to pause notifications between certain times of the day. This will stop messages coming through when you’re not working—allowing a little separation between sitting in your working chair and sitting in your non-working chair. This story originally appeared on WIRED UK. All the Ways Slack Tracks You—and How to Stop It
  15. Google Chrome is the most popular browser in the world. Chrome routinely leads the pack in features for security and usability, most recently helping to drive the adoption of HTTPS. But when it comes to privacy, specifically protecting users from tracking, most of its rivals leave it in the dust. Users are more aware of, and concerned about, the harms of pervasive tracking than ever before. So why is Chrome so far behind? It’s because Google still makes most of its money from tracker-driven, behaviorally targeted ads. The marginal benefit of each additional bit of information about your activities online is relatively small to an advertiser, especially given how much you directly give Google through your searches and use of tools like Google Home. But Google still builds Chrome as if it needs to vacuum up everything it can about your online activities, whether you want it to or not. In the documents that define how the Web works, a browser is called a user agent. It’s supposed to be the thing that acts on your behalf in cyberspace. If the massive data collection appetite of Google’s advertising- and tracking-based business model is incentivizing Chrome to act in Google’s best interest instead of yours, that’s a big problem—one that consumers and regulators should not ignore. Chrome is More Popular Than Ever. So is Privacy. Since Chrome’s introduction in 2008, its market share has risen inexorably. It now accounts for 60% of the browsers on the web. At the same time, the public has become increasingly concerned about privacy online. In 2013, Edward Snowden’s disclosures highlighted the links between massive, surreptitious corporate surveillance and the NSA’s spy programs. In 2016, the EU ratified the General Data Protection Regulation (GDPR), a sweeping (and complicated) set of guidelines that reflected a new, serious approach to data privacy. 
And in the U.S., this year’s Cambridge Analytica scandal sparked unprecedented backlash against Facebook and other big tech companies, driving states like California to pass real data privacy laws for the first time (although those laws are under threat federally by, you guessed it, Google and Facebook). Around the world, people are waking up to the realities of surveillance capitalism and the surveillance business model: the business of “commodifying reality,” transforming it into behavioral data, and using that data and inferences from it to target us on an ever-more granular level. The more users learn about this business model, the more they want out. That’s why the use of ad and tracker blockers, like EFF’s Privacy Badger, has grown dramatically in recent years. Their popularity is a testament to users’ frustration with the modern web: ads and trackers slow down the browsing experience, burn through data plans, and give people an uneasy feeling of being watched. Companies often justify their digital snooping by arguing that people prefer ads that are “relevant” to them, but studies show that most users don’t want their personal information to be used to target ads. All of this demonstrates a clear, growing demand for consumer privacy, especially as it relates to trackers on the web. As a result, many browser developers are taking action. In the past, tracker blockers have only been available as third-party “extensions” to popular browsers, requiring diligent users to seek them out. But recently, developers of major browsers have started building tracking protections into their own products. Apple’s Safari has been developing Intelligent Tracking Protection, or ITP, a system that uses machine learning to identify and stop third-party trackers; this year, the improved ITP 2.0 became the default for tens of millions of Apple users. Firefox recently rolled out its own tracking protection feature, which is on by default in private browsing windows. 
Opera ships with the option to turn on both ad and tracker blocking. Even the much-maligned Internet Explorer has a built-in “tracking protection” mode. Yet Google Chrome, the largest browser in the world, has no built-in tracker blocker, nor has the company indicated any plans to build one. Sure, it now blocks some intrusive ads, but that feature has nothing to do with privacy. The closest thing it offers to “private” browsing out of the box is “incognito mode,” which only hides what you do from others who use your machine. That might hide embarrassing searches from your family, but it does nothing to protect you from being tracked by Google. Conflicts of Interest Google is the biggest browser company in the world. It’s also the biggest search engine, mobile operating system, video host, and email service. But most importantly, it’s the biggest server of digital ads. Google controls 42% of the digital advertising market, significantly more than Facebook, its largest rival, and vastly more than anyone else. Its tracking codes appear on three quarters of the top million sites on the web. 86% of the revenue of Alphabet, Google’s parent company, comes from advertising. That means all of Alphabet has a vested interest in helping track people and serve them ads, even when that puts the company at odds with its users. Source: The EFF
  16. 2019 may finally be the year for ‘The Search Engine That Doesn’t Track You’ In late November, hotel conglomerate Marriott International disclosed that the personal information of some 500 million customers — including home addresses, phone numbers, and credit card numbers — had been exposed as part of a data breach affecting its Starwood Hotels and Resorts network. One day earlier, the venerable breakfast chain Dunkin’ (née Donuts) announced that its rewards program had been compromised. Only two weeks before that, it was revealed that a major two-factor authentication provider had exposed millions of temporary account passwords and reset links for Google, Amazon, HQ Trivia, Yahoo, and Microsoft users. These were just the icing on the cake for a year of compromised data: Adidas, Orbitz, Macy’s, Under Armour, Sears, Forever 21, Whole Foods, Ticketfly, Delta, Panera Bread, and Best Buy, just to name a few, were all affected by security breaches. Meanwhile, there’s a growing sense that the tech giants have finally turned on us. Amazon dominates so many facets of the online shopping experience that we might have to rewrite antitrust law to rein them in. Google has been playing fast and loose with its “Don’t Be Evil” mantra by almost launching a censored search engine for the Chinese government while simultaneously developing killer A.I. for Pentagon drones. And we now know that Facebook collected people’s personal data without their consent, let companies such as Spotify and Netflix look at our private messages, fueled fake news and Donald Trump, and was used to facilitate a genocide in Myanmar. The backlash against these companies dominated our national discourse in 2018. The European Union is cracking down on anticompetitive practices at Amazon and Google. 
Both Facebook and Twitter have had their turns in the congressional hot seat, facing questions from slightly confused but definitely irate lawmakers about how the two companies choose what information to show us and what they do with our data when we’re not looking. Worries over privacy have led everyone from the New York Times to Brian Acton, the disgruntled co-founder of Facebook-owned WhatsApp, to call for a Facebook exodus. And judging by Facebook’s stagnating rate of user growth, people seem to be listening. For Gabriel Weinberg, the founder and CEO of privacy-focused search engine DuckDuckGo, our growing tech skepticism recalls the early 1900s, when Upton Sinclair’s novel The Jungle revealed the previously unexamined horrors of the meatpacking industry. “Industries have historically gone through periods of almost ignorant bliss, and then people start to expose how the sausage is being made,” he says. Gabriel Weinberg, DuckDuckGo CEO and Founder This, in a nutshell, is DuckDuckGo’s proposition: “The big tech companies are taking advantage of you by selling your data. We won’t.” In effect, it’s an anti-sales sales pitch. DuckDuckGo is perhaps the most prominent in a number of small but rapidly growing firms attempting to make it big — or at least sustainable — by putting their customers’ privacy and security first. And unlike the previous generation of privacy products, such as Tor or SecureDrop, these services are easy to use and intuitive, and their user bases aren’t exclusively composed of political activists, security researchers, and paranoiacs. The same day Weinberg and I spoke, DuckDuckGo’s search engine returned results for 33,626,258 queries — a new daily record for the company. Weinberg estimates that since 2014, DuckDuckGo’s traffic has been increasing at a rate of “about 50 percent a year,” a claim backed up by the company’s publicly available traffic data. 
“You can run a profitable company — which we are — without [using] a surveillance business model,” Weinberg says. If he’s right, DuckDuckGo stands to capitalize handsomely off our collective backlash against the giants of the web economy and establish a prominent brand in the coming era of data privacy. If he’s wrong, his company looks more like a last dying gasp before surveillance capitalism finally takes over the world. DuckDuckGo is based just east of nowhere. Not in the Bay Area, or New York, or Weinberg’s hometown of Atlanta, or in Boston, where he and his wife met while attending MIT. Instead, DuckDuckGo headquarters is set along a side street just off the main drag of Paoli, Pennsylvania, in a building that looks like a cross between a Pennsylvania Dutch house and a modest Catholic church, on the second floor above a laser eye surgery center. Stained-glass windows look out onto the street, and a small statue of an angel hangs precariously off the roof. On the second floor, a door leading out to a balcony is framed by a pair of friendly looking cartoon ducks, one of which wears an eye patch. Just before DuckDuckGo’s entrance sits a welcome mat that reads “COME BACK WITH A WARRANT.” “People don’t generally show up at our doorstep, but I hope that at some point it’ll be useful,” Weinberg tells me, sitting on a couch a few feet from an Aqua Teen Hunger Force mural that takes up a quarter of a wall. At 39, he is energetic, affable, and generally much more at ease with himself than the stereotypical tech CEO. The office around us looks like it was furnished by the set designer of Ready Player One: a Hitchhiker’s Guide to the Galaxy print in the entryway, Japanese-style panels depicting the Teenage Mutant Ninja Turtles in the bathroom, and a vintage-looking RoboCop pinball machine in the break room. There’s even a Lego model of the DeLorean from Back to the Future on his desk. The furniture, Weinberg tells me, is mostly from Ikea. 
The lamp in the communal area is a hand-me-down from his mom. Weinberg learned basic programming on an Atari while he was still in elementary school. Before hitting puberty, he’d built an early internet bulletin board. “It didn’t really have a purpose” in the beginning, Weinberg says. The one feature that made his bulletin board unique, he says, was that he hosted anonymous AMA-style question panels with his father, an infectious disease doctor with substantial experience treating AIDS patients. This was during the early 1990s, when the stigma surrounding HIV and AIDS remained so great that doctors were known to deny treatment to those suffering from it. Weinberg says that the free—and private—medical advice made the board a valuable resource for the small number of people who found it. It was an early instance of Weinberg’s interest in facilitating access to information, as well as a cogent example of the power of online privacy: “The ability to access informational resources anonymously actually opens up that access significantly,” he told me over email. After graduating from MIT in 2001, Weinberg launched a slew of businesses, none of which are particularly memorable. First there was an educational software program called Learnection. (“Terrible name… the idea was good, but 15 years too early,” he says.) Then he co-founded an early social networking company called Opobox, taking on no employees and writing all the code himself. “Facebook just kind of obliterated it,” Weinberg says, though he was able to sell the network to the parent company of Classmates.com for roughly $10 million in cash in 2006. It was around that time when Weinberg began working on what would become DuckDuckGo. Google had yet to achieve total hegemony over the internet search field, and Weinberg felt that he could create a browser plugin that might help eliminate the scourge of spammy search results in other search engines. 
To build an algorithm that weeded out bad search results, he first had to do it by hand. “I took a large sample of different pages and hand-marked them as ‘spam’ or ‘not spam.’” The process of scraping the web, Weinberg says, inadvertently earned him a visit from the FBI. “Once they realized I was just crawling the web, they just went away,” he says. He also experimented with creating a proto-Quora service that allowed anyone to pose a question and have it answered by someone else, as well as a free alternative to Meetup.com. Eventually, he combined facets of all three efforts into a full-on search engine. When Weinberg first launched DuckDuckGo in 2008 — the name is a wink to the children’s game of skipping over the wrong options to get to the right one — he differentiated his search engine by offering instant answers to basic questions (essentially an early open-source version of Google’s Answer Box), spam filtering, and highly customizable search results based on user preferences. “Those [were] things that early adopters kind of appreciated,” he says. At the time, Weinberg says, consumer privacy was not a central concern. In 2009, when he made the decision to stop collecting personal search data, it was more a matter of practicality than a principled decision about civil liberties. Instead of storing troves of data on every user and targeting those users individually, DuckDuckGo would simply sell ads against search keywords. Most of DuckDuckGo’s revenue, he explains, is still generated this way. The system doesn’t capitalize on targeted ads, but, Weinberg says, “I think there’s a choice between squeezing out every ounce of profit and making ethical decisions that aren’t at the expense of society.” Until 2011, Weinberg was DuckDuckGo’s sole full-time employee. That year, he pushed to expand the company. He bought a billboard in Google’s backyard of San Francisco that proudly proclaimed, “Google tracks you. 
We don’t.” (That defiant gesture and others like it were later parodied on HBO’s Silicon Valley.) The stunt paid off in spades, doubling DuckDuckGo’s daily search traffic. Weinberg began courting VC investors, eventually selling a minority stake in the company to Union Square Ventures, the firm that has also backed SoundCloud, Coinbase, Kickstarter, and Stripe. That fall, he hired his first full-time employee, and DuckDuckGo moved out of Weinberg’s house and into the strangest-looking office in all of Paoli, Pennsylvania. Then, in 2013, digital privacy became front-page news. That year, NSA contractor Edward Snowden leaked a series of documents to the Guardian and the Washington Post revealing the existence of the NSA’s PRISM program, which granted the agency unfettered access to the personal data of millions of Americans through a secret back door into the servers of Google, Yahoo, Facebook, Apple, and other major internet firms. Though Google denied any knowledge of the program, the reputational damage had been done. DuckDuckGo rode a wave of press coverage, enjoying placement in stories that offered data privacy solutions to millions of newly freaked-out people worried that the government was spying on them. “All of a sudden we were part of this international story,” Weinberg says. The next year, DuckDuckGo turned a profit. Shortly thereafter, Weinberg finally started paying himself a salary. Today, DuckDuckGo employs 55 people, most of whom work remotely from around the world. (On the day I visited, there were maybe five employees in the Paoli office, plus one dog.) This year, the company went through its second round of VC funding, accepting a $10 million investment from Canadian firm OMERS.
Weinberg insists that both OMERS and Union Square Ventures are “deeply interested in privacy and restoring power to the non-monopoly providers.” Later, via email, Weinberg declined to share DuckDuckGo’s exact revenue, beyond the fact that its 2018 gross revenue exceeded $25 million, a figure the company has chosen to disclose in order to stress that it is subject to the California Consumer Privacy Act. Weinberg feels that the company’s main challenge these days is improving brand recognition. “I don’t think there’s many trustworthy entities on the internet, just straight-up,” he says. “Ads follow people around. Most people have gotten multiple data breaches. Most people know somebody who’s had some kind of identity theft issue. The percentage of people who’ve had those events happen to them has just grown and grown.” The recent investment from OMERS has helped cover the cost of DuckDuckGo’s new app, launched in January 2018. The app, a lightweight mobile web browser for iOS and Android that’s also available as a Chrome plugin, is built around the DuckDuckGo search engine. It gives each site you visit a letter grade based on its privacy practices and has an option to let you know which web trackers — usually ones from Google, Facebook, or Comscore — it blocked from monitoring your browsing activity. After you’ve finished surfing, you can press a little flame icon and an oddly satisfying animated fire engulfs your screen, indicating that you’ve deleted your tabs and cleared your search history. 
The rest of the recent investment, Weinberg says, has been spent on “trying to explain to people in the world that [DuckDuckGo] exists.” He continues, “That’s our main issue — the vast majority of people don’t realize there’s a simple solution to reduce their [online] footprint.” To that end, DuckDuckGo maintains an in-house consumer advocacy blog called Spread Privacy, offering helpful tips on how to protect yourself online as well as commentary and analysis on the state of online surveillance. Its most recent initiative was a study on how filter bubbles — the term for how a site like Google uses our data to show us what it thinks we want — can shape the political news we consume. Brand recognition is a challenge for a lot of startups offering privacy-focused digital services. After all, the competition includes some of the biggest and most prominent companies in the world: Google, Apple, Facebook. And in some ways, this is an entirely new sector of the market. “Privacy has traditionally not been a product; it’s been more like a set of best practices,” says David Temkin, chief product officer for the Brave web browser. “Imagine turning that set of best practices into a product. That’s kind of where we’re going.” Like DuckDuckGo — whose search engine Brave incorporates into its private browsing mode — Brave doesn’t collect user data and blocks ads and web trackers by default. In 2018, Brave’s user base exploded from 1 million to 5.5 million, and the company reached a deal with HTC to be the default browser on the manufacturer’s upcoming Exodus smartphone. Temkin, who first moved out to the Bay Area in the early ’90s to work at Apple, says that the past two decades of consolidation under Google/Facebook/Netflix/Apple/Amazon have radically upended the notion of the internet as a safe haven for the individual. “It’s swung back to a very centralized model,” he says. “The digital advertising landscape has turned into a surveillance ecosystem.
The way to optimize the value of advertising is through better targeting and better data collection. And, well, water goes downhill.” In companies such as Brave and DuckDuckGo, Temkin sees a return to the more conscientious attitude behind early personal computing. “I think to an ordinary user, [privacy] is starting to sound like something they do need to care about,” he says. But to succeed, these companies will have to make privacy as accessible and simple as possible. “Privacy’s not gonna win if it’s a specialist tool that requires an expert to wield,” Temkin says. “What we’re doing is trying to package [those practices] in a way that’s empathetic and respectful to the user but doesn’t impose the requirement for knowledge or the regular ongoing annoyance that might go with maintaining privacy on your own.” In November, I decided to switch my personal search querying to DuckDuckGo in order to see whether it was a feasible solution to my online surveillance woes. Physically making the switch is relatively seamless. The search engine is already an optional default in browsers such as Safari, Microsoft Edge, and Firefox, as well as more niche browsers such as Brave and Tor, the latter of which made DuckDuckGo its default search in 2016. Actually using the service, though, can be slightly disorienting. I use Google on a daily basis for one simple reason: It’s easy. When I need to find something online, it knows what to look for. To boot, it gives me free email, which is connected to the free word processor that my editor and I are using to work on this article together in real time. It knows me. It’s only when I consider the implications of handing over a digital record of my life to a massive company that the sense of free-floating dread about digital surveillance kicks in. Otherwise, it’s great. And that’s the exact hurdle DuckDuckGo is trying to convince people to clear. Using DuckDuckGo can feel like relearning to walk after you’ve spent a decade flying. 
On Google, a search for, say, “vape shop” yields a map of vape shops in my area. On DuckDuckGo, that same search returns a list of online vaporizer retailers. The difference, of course, is the data: Google knows that I’m in Durham, North Carolina. As far as DuckDuckGo is concerned, I may as well be on the moon. That’s not to say using DuckDuckGo is all bad. For one, it can feel mildly revelatory knowing that you’re seeing the same search results that anyone else would. It restores a sense of objectivity to the internet at a time when being online can feel like stepping into The Truman Show — a world created to serve and revolve around you. And I was able to look up stuff I wanted to know about — how to open a vacuum-sealed mattress I’d bought off the internet, the origin of the martingale dog collar, the latest insane thing Donald Trump did — all without the possibility of my search history coming back to haunt me in the form of ads for bedding, dog leashes, or anti-Trump knickknacks. Without personalized results, DuckDuckGo just needs to know what most people are looking for when they type in search terms and serve against that. And most of the time, we fit the profile of most people. When I asked Weinberg if he wanted to displace Google as the top search engine in all the land, he demurred. “I mean, I wouldn’t be opposed to it,” he says, “but it’s really not our intention, and I don’t expect that to happen.” Instead, he’d like to see DuckDuckGo as a “second option” to Google for people who are interested in maintaining their online anonymity. “Even if you don’t have anything to hide, it doesn’t mean you want people to profit off your information or be manipulated or biased against as a result [of that information],” he says. Even though DuckDuckGo may serve a different market and never even challenge Google head-on, the search giant remains its largest hurdle in the long term. For more than a decade, Google has been synonymous with search. 
And that association is hard, if not impossible, to break. In the meantime, the two companies are on frosty terms. In 2010, Google obtained the domain duck.com as part of a larger business deal involving a company formerly known as Duck Co. For years, the domain would redirect to Google’s search page, despite seeming like something you’d type into your browser while trying to get to DuckDuckGo. After DuckDuckGo petitioned for ownership for nearly a decade, Google finally handed over the domain in December. The acquisition was a minor branding coup for DuckDuckGo — and a potential hedge against accusations of antitrust for Google. That doesn’t mean relations between the two companies have improved. As the Goliath in the room, Google could attempt to undercut DuckDuckGo’s entire business proposition. Over the past few years, even mainstream players have attempted to assuage our privacy anxieties by offering VPNs (Verizon), hosting “privacy pop-ups” (Facebook), and using their billions to fight against state surveillance in court (Microsoft). With some tweaks, Google could essentially copy DuckDuckGo wholesale and create its own privacy-focused search engine with many of the same protections DuckDuckGo has built its business on. Whether people would actually believe that Google, a company that muscled its way into becoming an integral part of the online infrastructure by selling people’s data, could suddenly transform into a guardian of that data remains to be seen. When it comes to the internet, trust is something easily lost and difficult to regain. In a sense, every time a giant of the internet surveillance economy is revealed to have sold out its customers in some innovatively horrifying way, the ensuing chaos almost serves as free advertising for DuckDuckGo. “The world keeps going in a bad direction, and it makes people think, ‘Hey, I would like to escape some of the bad stuff on the internet and go to a safer place,’” Weinberg says.
“And that’s where we see ourselves.” Source
  17. Does Google meet its users’ expectations around consumer privacy? This news industry research says no A significant majority of consumers do not expect Google to track their activities across their lives, their locations, on other sites, and on other platforms. Numerous privacy scandals over the past couple of years have fueled the need for increased examination of tech companies’ data tracking practices. While the ethics around data collection and consumer privacy have been questioned for years, it wasn’t until Facebook’s Cambridge Analytica scandal that people began to realize how frequently their personal data is shared, transferred, and monetized without their permission. Cambridge Analytica was by no means an isolated case. Last summer, an AP investigation found that Google’s location tracking remains on even if you turn it off in Google Maps, Search, and other apps. Research from Vanderbilt professor Douglas Schmidt found that Google engages in “passive” data collection, often without the user’s knowledge. His research also showed that Google utilizes data collected from other sources to de-anonymize existing user data. That’s why we at Digital Content Next, the trade association of online publishers I lead, wrote this Washington Post op-ed, “It isn’t just about Facebook, it’s about Google, too” when Facebook first faced Capitol Hill. It’s also why the descriptor surveillance advertising is increasingly being used to describe Google and Facebook’s advertising businesses, which use personal data to tailor and micro-target ads. Consumers are on alert. DCN surveyed a nationally representative sample to find out what people expect from Google — and, as with a similar study we conducted last year about Facebook, the results were unsettling. Our findings show that many of Google’s data practices deviate from consumer expectations.
We find it even more significant that consumers’ expectations are at an all-time low even after 2018, a year in which awareness around consumer privacy reached peak heights. The results of the study are consistent with our Facebook study: People don’t want surveillance advertising. A majority of consumers indicated they don’t expect to be tracked across Google’s services, let alone be tracked across the web in order to make ads more targeted. Nearly two out of three consumers don’t expect Google to track them across non-Google apps, offline activities from data brokers, or via their location history. There was only one question where a small majority of respondents felt that Google was acting according to their expectations: the one about Google merging data from search queries with other data it collects on its own services. A small majority also said they don’t expect Google to connect that data back to the user’s personal account. Google began doing both of these in 2016 after previously promising it wouldn’t. Google’s personal data collection practices affect the more than 2 billion people who use devices running their Android operating software and hundreds of millions more iPhone users who rely on Google for browsing, maps, or search. Most of them expect Google to collect some data about them in exchange for use of services. However, as our research shows, a significant majority of consumers do not expect Google to track their activities across their lives, their locations, on other sites, and on other platforms. And as the AP discovered, Google continues to do some of this even after consumers explicitly turn off tracking. With new laws in Europe and California and with federal discussions about how to bring similar protections to the rest of America, it’s critical to understand what consumers actually demand, align expectations to those demands, and rebuild trust in our industry. Consumers expect nothing less. Source
  18. What's in the latest Firefox update? Firefox 69 thwarts web tracking by default for everyone Firefox 69 switches on the browser's anti-tracking technology by default for all users. Magdalena Petrova/IDG Mozilla on Tuesday released Firefox 69 with the browser's anti-tracking technology switched on by default for all users. The organization's security engineers also patched 20 vulnerabilities, one tagged "Critical" and 11 marked "High," the organization's two top threat ratings. The single critical flaw only affected Windows, Mozilla said in its patching commentary. Firefox 69 can be downloaded from Mozilla's site for Windows, macOS and Linux. Because it updates in the background, most users need only relaunch the browser to get the latest version. To manually update, pull up the menu under the three horizontal bars at the upper right, then click the help icon (the question mark within a circle). Choose "About Firefox." The resulting page shows that the browser is either up to date or explains the refresh process. Mozilla updates Firefox every six to eight weeks; it last upgraded the browser on July 9. You get ETP and you get ETP and ... Mozilla first turned on Enhanced Tracking Protection (ETP) in June, but at the time limited the setting to new-to-Firefox users. However, existing customers could flip the ETP switch themselves using the Preferences screen. With Firefox 69, Mozilla has enabled ETP for all users. By default, "Content Blocking" - the feature's name in Firefox's Preferences - is set to "Strict," the strongest protection available. Users can reset that to "Standard" or "Custom," or even turn off everything by clearing all choices in the latter. Mozilla said that prior to Firefox 69's debut, more than 20% of all Firefox users had ETP engaged, signaling that a significant number of existing users had manually enabled ETP in the past three months. 
"With today's release, we expect to provide protection for 100% of our users by default," wrote Marissa Wood, vice president of product at Mozilla, in a Sept. 3 post to a company blog. ETP has taken a crooked road to release. Tracing its lineage to 2015's "Tracking Protection," Mozilla got serious about the concept two years ago, when it broke the technology out of the private-browsing bubble. In October 2018, it named the feature ETP and set Firefox 65, slated to release in January 2019, as the on-by-default target. Problems persisted, however - in several instances Mozilla said the technology was breaking too many sites - and delays were inserted for more testing. Finally, Mozilla used a "soft opening" for ETP in June, limiting the automatic on-by-default to new users as a final quality control check. Wood spelled out additional information about ETP in her Tuesday post.
All Firefox users now have the browser's anti-tracking feature switched on, set to the strongest protection. Changes can be made in the Preferences pane. Mozilla
Block this, block that
Also in Firefox 69, Mozilla's developers enhanced the choices for autoplay, the habit by sites to immediately start playing video on the computer screen and blasting audio from its speakers. Firefox has automatically blocked autoplay of audio since March and version 66. Video with accompanying audio was also stopped from playing. But if a video provider muted the audio, Firefox let the former play. With Firefox 69, users can select "Block Audio and Video" to stop such video from automatically playing. That setting is at Preferences > Privacy & Security > Permissions > Autoplay > Settings > Default for all websites. This version of Firefox also took the next step in Mozilla's kill-Flash process. The browser lost the "Always Activate" option for Flash, meaning that every request to run the player software must be user approved.
From this point forward, the only settings are "Ask to Activate," the default, and "Never Activate." This move was previously announced by Mozilla (check out the "Plugin Roadmap for Firefox" here) and should be the last step before all Flash support is yanked from non-enterprise copies. (The Extended Support Release, or ESR, will continue to support Flash until the end of 2020.) The next version of the browser, Firefox 70, should release Oct. 22. Source: What's in the latest Firefox update? Firefox 69 thwarts web tracking by default for everyone (Computerworld - Gregg Keizer)
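The tracking-protection and autoplay settings described in this article live behind ordinary Firefox preferences, so they can also be pre-seeded for a profile via a user.js file. Below is a minimal sketch in Python, assuming the Firefox 69-era pref names (`browser.contentblocking.category` for the Content Blocking level, and `media.autoplay.default`, where 5 corresponds to "Block Audio and Video"); verify both names and values against your Firefox version before relying on this.

```python
# Sketch: pre-seed Firefox privacy settings by generating a user.js file.
# Pref names/values are assumptions based on Firefox 69-era documentation.

from pathlib import Path

PREFS = {
    # Enhanced Tracking Protection level: "standard", "strict", or "custom"
    "browser.contentblocking.category": "strict",
    # Autoplay: 0 = allow, 1 = block audio, 5 = block audio and video
    "media.autoplay.default": 5,
}

def render_user_js(prefs: dict) -> str:
    """Render a prefs dict as user_pref() lines Firefox reads at startup."""
    lines = []
    for name, value in prefs.items():
        # Strings are quoted; ints/bools use JS literal syntax
        js_value = f'"{value}"' if isinstance(value, str) else str(value).lower()
        lines.append(f'user_pref("{name}", {js_value});')
    return "\n".join(lines) + "\n"

def write_user_js(profile_dir: str, prefs: dict = PREFS) -> Path:
    """Write user.js into the given Firefox profile directory."""
    path = Path(profile_dir) / "user.js"
    path.write_text(render_user_js(prefs))
    return path
```

Dropping the generated user.js into a profile directory applies the prefs at the next browser launch, which is handy for provisioning several machines consistently.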
  19. The iPhone 11 Pro’s Location Data Puzzler One of the more curious behaviors of Apple’s new iPhone 11 Pro is that it intermittently seeks the user’s location information even when all applications and system services on the phone are individually set to never request this data. Apple says this is by design, but that response seems at odds with the company’s own privacy policy. The privacy policy available from the iPhone’s Location Services screen says, “If Location Services is on, your iPhone will periodically send the geo-tagged locations of nearby Wi-Fi hotspots and cell towers (where supported by a device) in an anonymous and encrypted form to Apple, to be used for augmenting this crowd-sourced database of Wi-Fi hotspot and cell tower locations.” The policy explains users can disable all location services entirely with one swipe (by navigating to Settings > Privacy > Location Services, then switching “Location Services” to “off”). When one does this, the location services indicator — a small diagonal upward arrow to the left of the battery icon — no longer appears unless Location Services is re-enabled. The policy continues: “You can also disable location-based system services by tapping on System Services and turning off each location-based system service.” But apparently there are some system services on this model (and possibly other iPhone 11 models) which request location data and cannot be disabled by users without completely turning off location services, as the arrow icon still appears periodically even after individually disabling all system services that use location. On Nov. 13, KrebsOnSecurity contacted Apple to report this as a possible privacy bug in the new iPhone Pro and/or in iOS 13.x, sharing a video showing how the device still seeks the user’s location when each app and system service is set to “never” request location information (but with the main Location Data service still turned on). 
The video above was recorded on a brand new iPhone 11 Pro. The behavior appears to persist in the latest iPhone operating system (iOS 13.2.3) on iPhone 11 Pro devices. A review of Apple’s support forum indicates other users are experiencing the same issue. I was not able to replicate this behavior on an older model iPhone 8 with the latest iOS. This week Apple responded that the company does not see any concerns here and that the iPhone was performing as designed. “We do not see any actual security implications,” an Apple engineer wrote in a response to KrebsOnSecurity. “It is expected behavior that the Location Services icon appears in the status bar when Location Services is enabled. The icon appears for system services that do not have a switch in Settings” [emphasis added]. Apple has not yet responded to follow-up questions, but it seems they are saying their phones have some system services that query your location regardless of whether one has disabled this setting individually for all apps and iOS system services. Granted, the latest versions of iOS give users far more granular control over the sharing of this data than in the past, especially with respect to third-party apps. And perhaps this oddity is somehow related to adding support for super-fast new WiFi 6 routers, which may have involved the introduction of new hardware. But it would be nice to know what has changed in the iPhone 11 and why, particularly given Apple’s recent commercials on how they respect user privacy choices — including location information. This post will be updated in the event Apple provides a more detailed response. Source: The iPhone 11 Pro’s Location Data Puzzler (KrebsOnSecurity - Brian Krebs)
  20. Microsoft details tracking prevention improvements in the new Edge browser A while back, Microsoft announced that the new Chromium-based Edge browser was getting built-in tracking prevention, a feature that helped users manage how their activity on the web is tracked. It's been some time since then, and the company has detailed some of the improvements made to the experience with Edge version 79. This is the version that's currently in the Beta channel and set to be the first generally available version of the new browser. Microsoft said its goal with the new improvements was to simultaneously block more kinds of trackers while also ensuring as much compatibility as possible with the web. Blocking too many trackers can cause some sites to break, so a certain balance is necessary. Microsoft says it figured out a way to do this by measuring the user's engagement with organizations, so tracking prevention can be less strict when the user has a clear relationship with a website or organization. This helps users achieve better compatibility with websites they care about most, while blocking more trackers on websites they don't visit as often. Microsoft says that, by enabling this capability, called Org Engagement Mitigation, it actually blocked 25% more trackers on average, so the end result should actually be better for privacy-conscious users. However, if you really don't want any trackers, you'll be happy to know that this feature is only enabled for users that use the Balanced setting for tracking prevention in Edge. Users who opt into Strict mode will still block the maximum number of trackers regardless of their site and organization engagement. For InPrivate mode, Edge 78 initially set the tracking prevention mode to Strict at all times, but that's changing now since users were seeing some sites breaking because of it. 
InPrivate windows will now follow the same setting as regular browsing, but Microsoft is testing a new toggle in the Canary and Dev channels to allow users to continue using Strict mode for InPrivate browsing. For users who want to know more about which trackers Edge has blocked, there's also now a new page to view a list of blocked trackers and how many times each tracker was blocked. The list can be found in edge://settings/privacy/blockedTrackers. Source: Microsoft details tracking prevention improvements in the new Edge browser (Neowin)
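The engagement-weighted approach this article describes can be illustrated with a toy decision function. To be clear, this is a hypothetical sketch, not Microsoft's actual Org Engagement Mitigation implementation; the mode names mirror Edge's settings, but the threshold and engagement scores are invented for illustration.

```python
# Toy illustration of engagement-weighted tracker blocking (NOT Microsoft's
# actual algorithm): a "balanced"-style policy relaxes blocking for
# organizations the user clearly engages with, while "strict" blocks
# regardless of engagement.

ENGAGEMENT_THRESHOLD = 5.0  # hypothetical score marking a clear relationship

def should_block(tracker_org: str, mode: str, engagement: dict) -> bool:
    """Decide whether to block a tracker under the given prevention mode."""
    if mode == "off":
        return False
    if mode == "strict":
        return True  # block the maximum number of trackers
    # "balanced": relax blocking only for orgs the user engages with
    return engagement.get(tracker_org, 0.0) < ENGAGEMENT_THRESHOLD
```

The design trade-off the article describes falls out directly: under "balanced", a tracker tied to a site the user visits often is allowed (for compatibility), while unfamiliar trackers are still blocked; "strict" ignores engagement entirely.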
  21. Apple patched a bug in May, but academics say the rest of the flaws require a redesign of some Apple services. Apple Wireless Direct Link (AWDL), a protocol installed on over 1.2 billion Apple devices, contains vulnerabilities that enable attackers to track users, crash devices, or intercept files transferred between devices via man-in-the-middle (MitM) attacks. These are the findings of a research project that started last year at the Technical University of Darmstadt, in Germany, and has recently concluded; the researchers will present their findings later this month at a security conference in the US. The project sought to analyze the Apple Wireless Direct Link (AWDL), a protocol that Apple rolled out in 2014 and which also plays a key role in enabling device-to-device communications in the Apple ecosystem. While most Apple end users might not be aware of the protocol's existence, AWDL is at the core of Apple services like AirPlay and AirDrop, and Apple has been including AWDL by default on all devices the company has been selling, such as Macs, iPhones, iPads, Apple Watches, Apple TVs, and HomePods.
German and US researchers reverse-engineered AWDL
But in the past five years, Apple has never published any in-depth technical details about how AWDL works. This, in turn, has resulted in very few security researchers looking at AWDL for bugs or implementation errors. However, due to the protocol's growing ubiquity in the daily lives of all Apple users, in 2018, a team of TU Darmstadt academics -- later joined by academics from Boston's Northeastern University -- decided to take a look at AWDL, and how the protocol works. "Considering the well-known rocky history of wireless protocols' security, with various flaws being repeatedly discovered in Bluetooth, WEP, WPA2, GSM, UMTS, and LTE, the lack of information regarding AWDL security is a significant concern given the increasing number of services that rely on it," the research team said.
To study it, the researchers reverse-engineered the AWDL protocol and then re-wrote it as a C implementation named OWL (Open Wireless Link), which they later used to test the real AWDL protocol for various attacks.

AWDL vulnerabilities

"Our analysis reveals several security and privacy vulnerabilities ranging from design flaws to implementation bugs enabling different kinds of attacks," the research team said. As a result of their work, the researchers discovered:

- A MitM attack which intercepts and modifies files transmitted via AirDrop, effectively allowing for the planting of malicious files.
- A long-term device tracking attack which works in spite of MAC randomization, and may reveal personal information such as the name of the device owner (in over 75% of experiment cases).
- A DoS attack aimed at the election mechanism of AWDL that deliberately desynchronizes the targets' channel sequences, effectively preventing communication with other AWDL devices.
- Two additional DoS attacks on Apple's AWDL implementations in the Wi-Fi driver. The attacks allow crashing Apple devices in proximity by injecting specially crafted frames, and can be targeted at a single victim or affect all neighboring devices at the same time.

While AWDL contains various security features to prevent attackers from establishing rogue MitM connections to legitimate devices without authorization, the research team was able to bypass these systems. They did this with the help of a TCP reset attack that blocked the AWDL connection and allowed the researchers to interpose their $20 hardware rig between the two devices and establish legitimate connections with both the sender and the receiver.

AWDL is ideal for pervasive user tracking

But while MitM attacks are hard to pull off and DoS attacks that crash devices are rarely useful, the AWDL vulnerabilities that allow user tracking are the ones that are truly concerning.
For this attack, the research team said they were able to obtain information from an AWDL connection such as the device hostname, the real MAC address (even if the device has MAC address randomization enabled), the AP the device is connected to, the device class (iOS, watchOS, macOS, tvOS, etc.), and the AWDL protocol version. This information, the researchers argued, is more than enough to create profiles and track users. Combined with data from online advertisers and analytics providers, it could be used to link devices to their real owners. The research team worried that AWDL-based tracking technology could be deployed in retail stores or public spaces to track users' movement through an area.

Some flaws require a protocol/service redesign

As for patches against these attacks, the research team said they notified Apple of all the vulnerabilities they found between August and December 2018. "While Apple was able to issue a fix for a DoS attack vulnerability after our responsible disclosure, the other security and privacy vulnerabilities require the redesign of some of their services," the researchers said. The fix for the AWDL DoS bug (CVE-2019-8612) rolled out in mid-May, with the release of iOS 12.3, tvOS 12.3, watchOS 5.2.1, and macOS 10.14.5. The rest of the AWDL vulnerabilities will likely remain exploitable for the foreseeable future.

Some bugs might affect Android devices

Furthermore, the same bugs may also affect Android and other types of devices, researchers warned. "The impact of these findings goes beyond Apple's ecosystem as the Wi-Fi Alliance adopted AWDL as the basis for Neighbor Awareness Networking (NAN) which, therefore, might be susceptible to similar attacks," the research team said. "NAN, commonly known as Wi-Fi Aware, is a new standard supported by Android which draws on AWDL's design and, thus, might be vulnerable to similar attacks as presented in [our] work."
However, this has not been confirmed, and additional research is needed on the impact of these AWDL bugs on real-world Android NAN (Wi-Fi Aware) implementations.

More details about the vulnerabilities described in this article are available in a pre-print white paper named "A Billion Open Interfaces for Eve and Mallory: MitM, DoS, and Tracking Attacks on iOS and macOS Through Apple Wireless Direct Link" that the research team will present at the USENIX Security conference in mid-August.

Source
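The long-term tracking attack described above works because several of the fields an AWDL connection exposes (hostname, real MAC address, device class) stay stable across sessions, even when MAC randomization is on. A minimal sketch of how such broadcast metadata could be combined into a persistent identifier — the field names and values here are illustrative assumptions, not the researchers' actual code or the real frame layout:

```python
import hashlib

def device_fingerprint(hostname: str, real_mac: str,
                       device_class: str, awdl_version: str) -> str:
    """Derive a stable identifier from metadata an AWDL connection exposes.

    Because the real MAC address leaks despite MAC randomization, the
    resulting hash stays constant across sessions, which is what makes
    long-term tracking possible. Field names are hypothetical.
    """
    material = "|".join([hostname, real_mac, device_class, awdl_version])
    return hashlib.sha256(material.encode()).hexdigest()[:16]

# The same broadcast fields always yield the same identifier.
fp1 = device_fingerprint("Alices-iPhone", "aa:bb:cc:dd:ee:ff", "iOS", "v3")
fp2 = device_fingerprint("Alices-iPhone", "aa:bb:cc:dd:ee:ff", "iOS", "v3")
print(fp1 == fp2)  # → True
```

A passive observer logging these identifiers over time, together with timestamps and locations, would get exactly the kind of movement profile the researchers warn about.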
  22. California’s new privacy law has spurred a torrent of online notices. But the law is also forcing changes offline, in traditional stores.

To anyone with eyes in their kneecaps, the notice outside gadget retailer B8ta’s glossy store next to San Francisco’s new NBA arena is obvious. “We care about your privacy,” the small plaque proclaims, offering a web address and QR code. Anyone curious and limber enough to bend down and follow these pointers is taken to the retailer’s online privacy policy, which discloses that stepping inside the store puts you in range of technology that automatically collects personal information. That includes “smartphone detectors” and Wi-Fi routers that note the location and unique identifiers of your phone, and cameras equipped with software that estimates your age and gender.

B8ta added the signage to its six California stores and expanded its online privacy policy late last year as it prepared to comply with a new state law that took effect this month called the California Consumer Privacy Act. The law requires businesses to disclose what personal information they collect from consumers at or before the time it is collected. It gives state residents the right to request that data collected about them be deleted and to forbid a business from selling it.

CCPA’s most visible effect has been a plague of website popups on California residents. But the law also applies to offline data collection. B8ta’s new signs and disclosures show how the CCPA might shed more light on the way brick-and-mortar businesses use Wi-Fi routers and other in-store sensors to try to match the customer analytics and tracking of online retailers and ad networks.

California legislators rushed to pass CCPA in 2018 to head off a stricter ballot initiative on privacy whose sponsors had collected more than 600,000 signatures. In the process, a provision allowing citizens to sue for violations was removed, leaving the state attorney general as the sole enforcer.
But CCPA is in some ways broader than GDPR, the influential European Union privacy law that came into force in 2018. California’s law defines personal information more liberally, to include data about a household, for example, which GDPR does not. CCPA also requires companies to disclose details of how they sell personal data and allow consumers to opt out of any sales, using a broad definition of “sell” that includes trading data for anything of value.

Mary Stone Ross, a lawyer and former CIA analyst who coauthored the initiative that led to CCPA, says it was partly inspired by research on the use of in-store tracking by retailers. “It was very clear that in order for the CCPA to be effective, it had to cover all collection of all information, not just online collection,” she says.

The law that took effect January 1 says businesses must “inform” consumers that they are collecting personal information “at or before the point of collection.” The attorney general’s draft regulations, due to be finalized in time for enforcement to begin in July, suggest that physical premises distribute paper notices or display “prominent signage” with a web link.

B8ta declined to explain how it reasoned that knee-high notices might inform customers or count as “prominent.” The company’s stores, which resemble Apple stores, feature quirky consumer gadgets such as an e-ink typewriter alongside products from names like Asus and Google. The retailer’s pitch to lure new partners cites its stores’ ability to provide live data on how customers engage with or linger near products on display.

Other companies collecting data from customers in stores have taken different approaches to disclosure. One patron of Brazilian steakhouse Fogo De Chão received a printed CCPA notice when he visited the chain’s San Francisco restaurant in early January. It informed him that the company collects personal information during purchases and reservations, uses security cameras, and mentions the restaurant’s guest Wi-Fi.
That, too, according to the company’s updated online policy, collects personal information.

When department store Macy’s updated its privacy policy to comply with CCPA, it added a surprising disclosure: facial recognition may be used on customers for “security and fraud detection purposes.” The company also said that it uses Wi-Fi routers to track where shoppers linger and beacons that “map nearby Bluetooth-enabled devices, much in the same way radar works,” and sells consumer data, including device and network information.

Inside the Macy’s store in San Francisco’s Union Square this week, the cameras—potentially using facial recognition—were obvious, but no privacy notices were visible, even at knee level. The company did not respond to multiple requests for comment before publication. After this article was published, Macy’s said in a statement, “Macy’s is committed to our customers’ privacy. We are taking the steps necessary to meet the new CCPA privacy law.”

California’s new privacy regime could help reveal how the use of facial recognition is spreading in stores and other semipublic places as the technology becomes more accessible. Lowe’s says it previously tested the technology in three stores but ultimately decided not to use it. Peter Trepp, CEO of facial recognition provider FaceFirst, declined to say whether he is telling retail customers to post notices in California informing shoppers their faces might be analyzed. The company claims to work with airports, sports teams, and Fortune 500 retailers, who use the software to alert staff when shoplifters known to a store return.

Source
  23. The Department of Homeland Security has been purchasing cellphone location data and using it to track activity near the US-Mexico border, according to The Wall Street Journal. The data has reportedly led to arrests after law enforcement saw where people were crossing the border and traced the data back to specific people.

The location data comes from a commercial database composed of information compiled on users by marketing companies. Advertisers and app developers are often able to gather far more information than a user might realize once the user has granted an app permission to use their location for a seemingly legitimate purpose, like checking the weather. All of this data collection can produce an incredibly revealing portrait of an individual’s behavior, even though they might have little idea that they, theoretically, consented to the information being shared.

It also means that the government can obtain very revealing data on a broad swath of people without going through the courts or relying on questionable legal precedents. It can just buy the information outright, like anyone else can.

The Department of Homeland Security confirmed to the Journal that it had purchased the data. Immigration and Customs Enforcement and Customs and Border Protection are reported to have used the data but did not specifically acknowledge doing so. The Journal says the data came from Venntel, a company that calls itself a “pioneer in mobile location information” that “supports our national interests through technological innovation.” Venntel indicated that the Department of Homeland Security was a customer but otherwise declined to comment to the Journal.

The New York Times recently highlighted just how revealing this information can be. With access to a database of cellphone location data, reporters were able to track even high-profile individuals in great detail. “We followed military officials with security clearances as they drove home at night.
We tracked law enforcement officers as they took their kids to school,” wrote Stuart A. Thompson and Charlie Warzel. Source
  24. Verizon’s new tracking tool tells advertisers when you’re looking at your email inbox. The company calls it “View Time Optimization.”

Verizon quietly introduced a new email marketing feature yesterday that it ominously calls “View Time Optimization,” which is a fancy euphemism for a tracking tool that alerts advertisers to the moment you’re looking at your email inbox. Why? So they can send you an ad, of course.

The service is part of Verizon’s suite of email and web advertising properties, which includes AOL and Yahoo, and well-known programmer David Heinemeier Hansson (the inventor of the Ruby on Rails web application framework) called out Verizon on Twitter today for what Hansson calls an “Orwellian” ad placement tool. (It should be noted Hansson is helping develop a privacy-focused email client called Hey through his company Basecamp, of which he is the co-founder and chief technology officer alongside chief exec Jason Fried.)

View Time Optimization is a play on the popular email marketing tool Send Time Optimization. That tool is not exclusive to Verizon; it is used by Mailchimp and countless other email marketing firms as well. It uses existing data gathered about an email user through their interactions with tracking pixels and other invasive yet near-universally used ad tech to know the best time to target you with an ad, which comes in the form of a new email that shows up right at the top of your inbox. Send Time Optimization basically knows when you’re most likely to check your email, and it helps marketers time their ads appropriately.

Verizon’s version of this, however, goes one step further and tracks people in their AOL or Yahoo email client to send the ad out “when users are actively engaging with their inbox.” If it shows up right then, apparently, the data shows someone is more likely to open the message.
“It ensures emails appear close to the top of the inbox and thus it’s improving the sender’s open rates, click-through rates, and overall ROI of their email marketing campaign,” writes Verizon product director Marcel Becker. “Email senders who have used VTO with their email campaigns saw increases in opens by 4x and clicks by 2x.”

Of course, Becker wouldn’t be a good marketer, or a model Verizon employee, if he didn’t spin this product as a benefit both to advertisers and consumers. This is where Hansson’s Orwellian descriptor is most apt. “We genuinely believe that our mutual customers deserve a unique experience which connects them to their passions,” Becker writes in the announcement. “We want to enable them to discover the things which matter to them. We want to enable them to get the most out of their inbox.”

He goes on to say that “we believe that tracking our customers is wrong,” and then follows that sentence up with, “But we also believe in the idea that they should be able to discover what is the most relevant to them.” It’s shocking because Becker is acknowledging that tracking is wrong while at the same time admitting Verizon simply does not care, because the value it provides to the advertisers that pay it to use these tools is greater than the potential privacy implications.

Of course, virtually every email client on the market, including Gmail and other popular services, aggressively tracks its users, collects and stores their data, and then sells access to the inbox and the contents of people’s messages to advertisers. That’s because these products are, by and large, free, and the companies that make them earn money by amassing large user bases of largely apathetic consumers and then monetizing that user base via ads. There are paid services out there for people who want more privacy, including ProtonMail and the recently released OnMail.
But companies like Verizon accurately assume most people don’t care enough and will put up with invasive advertising in exchange for a free product.

Still, View Time Optimization seems like an all-new level of tracking, and there are a lot of critical, unanswered questions about how this ad tech works — most importantly, whether Verizon email users can opt out. Other pertinent issues include whether this tool provides marketers with instantaneous knowledge of when someone is sitting at their computer and whether location data is included; how much of the tool is automated and how much manual intervention by a human being is involved; and what kind of database of information is collected and kept on email users’ habits. Verizon did not immediately respond to a request for comment.

Source: Verizon’s new tracking tool tells advertisers when you’re looking at your email inbox (The Verge)
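The “tracking pixels” the article mentions are simple at their core: the sender embeds a unique, invisible one-pixel image in the HTML email, and the server logs each fetch of that image as an “open,” along with when it happened. A minimal sketch of the sender side — the domain and query-parameter names are hypothetical, not Verizon’s or any vendor’s actual scheme:

```python
import base64
import urllib.parse

# A 1x1 transparent GIF -- the classic "tracking pixel" payload a
# server returns when the mail client fetches the image URL.
TRANSPARENT_GIF = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)

def pixel_tag(base_url: str, recipient_id: str, campaign: str) -> str:
    """Build the <img> tag a sender would embed in an HTML email.

    Each recipient gets a unique URL, so every fetch tells the server
    who opened which campaign's message, and when.
    """
    query = urllib.parse.urlencode({"r": recipient_id, "c": campaign})
    return f'<img src="{base_url}/open.gif?{query}" width="1" height="1" alt="">'

tag = pixel_tag("https://track.example.com", "user-42", "spring-sale")
print(tag)
```

This is why many privacy-focused clients block remote images by default: if the pixel is never fetched, the open is never logged.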
  25. Facebook is unwittingly auto-generating content for terror-linked groups that its artificial intelligence systems do not recognize as extremist, according to a complaint made public on Thursday.

The National Whistleblowers Center in Washington carried out a five-month study of the pages of 3,000 members who liked or connected to organizations proscribed as terrorist by the US government. Researchers found that the Islamic State group and al-Qaeda were "openly" active on the social network. More worryingly, Facebook's own software was automatically creating "celebration" and "memories" videos for extremist pages that had amassed sufficient views or "likes."

The National Whistleblowers Center said it filed a complaint with the US Securities and Exchange Commission on behalf of a source that preferred to remain anonymous. "Facebook's efforts to stamp out terror content have been weak and ineffectual," read an executive summary of the 48-page document shared by the center. "Of even greater concern, Facebook itself has been creating and promoting terror content with its auto-generate technology."

Survey results shared in the complaint indicated that Facebook was not delivering on its claims about eliminating extremist posts or accounts. The company told AFP it had been removing terror-linked content "at a far higher success rate than even two years ago" since making heavy investments in technology. "We don't claim to find everything and we remain vigilant in our efforts against terrorist groups around the world," the company said.

Facebook and other social media platforms have been under fire for not doing enough to curb messages of hate and violence, while at the same time being criticized for failing to offer equal time for all viewpoints, no matter how unpleasant. Facebook in March announced bans on the social network and Instagram on praise or support for white nationalism and white separatism.

Source