Showing results for tags 'facebook'.

  1. Several adware apps aggressively promoted on Facebook as system cleaners and optimizers for Android devices have racked up millions of installations on the Google Play store. The apps lack all of the promised functionality, push advertisements, and try to remain on the device for as long as possible. To evade deletion, the apps hide on the victim's device by constantly changing icons and names, masquerading as Settings or the Play Store itself.

Installed app changing icon and name (McAfee)

The adware apps abuse the Contact Provider Android component, which enables them to transfer data between the device and online services. The subsystem is called every time a new app is installed, so the adware might be using it to initiate the ad-serving process. To the user, it may look like the ads are pushed by the legitimate app they installed.

Researchers at McAfee discovered the adware apps. They note that users don't have to launch them after installation to see the ads, because the adware initiates itself automatically without any interaction. The first action of these annoying apps is to create a permanent service for displaying the advertisements. If the process is "killed" (terminated), it re-launches immediately.

Malicious service re-launched almost immediately (McAfee)

The following video shows how the name and icon of the adware change automatically and how the ad-serving occurs without any interaction from the user.

Millions of downloads on Google Play

As McAfee notes in its report, users are convinced to trust the adware apps because they see a Play Store link on Facebook, leaving little margin for doubt.
Facebook promotion for a cleaner app (McAfee)

This has resulted in unusually high download numbers for this type of application, as shown in the list below:

  • Junk Cleaner, cn.junk.clean.plp, 1M+ downloads
  • EasyCleaner, com.easy.clean.ipz, 100K+ downloads
  • Power Doctor, com.power.doctor.mnb, 500K+ downloads
  • Super Clean, com.super.clean.zaz, 500K+ downloads
  • Full Clean -Clean Cache, org.stemp.fll.clean, 1M+ downloads
  • Fingertip Cleaner, com.fingertip.clean.cvb, 500K+ downloads
  • Quick Cleaner, org.qck.cle.oyo, 1M+ downloads
  • Keep Clean, org.clean.sys.lunch, 1M+ downloads
  • Windy Clean, in.phone.clean.www, 500K+ downloads
  • Carpet Clean, og.crp.cln.zda, 100K+ downloads
  • Cool Clean, syn.clean.cool.zbc, 500K+ downloads
  • Strong Clean, in.memory.sys.clean, 500K+ downloads
  • Meteor Clean, org.ssl.wind.clean, 100K+ downloads

Most affected users are based in South Korea, Japan, and Brazil, but the adware has unfortunately reached users worldwide.

Heatmap of infected Android users (McAfee)

The adware apps are no longer available on the Play Store. However, users who installed them have to remove them manually from the device. System cleaners and optimizers are popular software categories despite the low benefits they provide. Cybercriminals know that a large number of users will try such solutions to prolong the life of their devices, and often disguise malicious apps as such.

Facebook ads push Android adware with 7 million installs on Google Play
  2. Ad-tracking by some hospitals may violate federal law protecting health data.

A tracking tool installed on many hospitals’ websites has been collecting patients’ sensitive health information—including details about their medical conditions, prescriptions, and doctor’s appointments—and sending it to Facebook.

The Markup tested the websites of Newsweek’s top 100 hospitals in America. On 33 of them we found the tracker, called the Meta Pixel, sending Facebook a packet of data whenever a person clicked a button to schedule a doctor’s appointment. The data is connected to an IP address—an identifier that’s like a computer’s mailing address and can generally be linked to a specific individual or household—creating an intimate receipt of the appointment request for Facebook.

On the website of University Hospitals Cleveland Medical Center, for example, clicking the “Schedule Online” button on a doctor’s page prompted the Meta Pixel to send Facebook the text of the button, the doctor’s name, and the search term we used to find her: “pregnancy termination.” Clicking the “Schedule Online Now” button for a doctor on the website of Froedtert Hospital, in Wisconsin, prompted the Meta Pixel to send Facebook the text of the button, the doctor’s name, and the condition we selected from a dropdown menu: “Alzheimer’s.”

The Markup also found the Meta Pixel installed inside the password-protected patient portals of seven health systems. On five of those systems’ pages, we documented the pixel sending Facebook data about real patients who volunteered to participate in the Pixel Hunt project, a collaboration between The Markup and Mozilla Rally. The project is a crowd-sourced undertaking in which anyone can install Mozilla’s Rally browser add-on in order to send The Markup data on the Meta Pixel as it appears on sites that they visit.
The data sent to Facebook included the names of patients’ medications, descriptions of their allergic reactions, and details about their upcoming doctor’s appointments.

Former regulators, health data security experts, and privacy advocates who reviewed The Markup’s findings said the hospitals in question may have violated the federal Health Insurance Portability and Accountability Act (HIPAA). The law prohibits covered entities like hospitals from sharing personally identifiable health information with third parties like Facebook, except when an individual has expressly consented in advance or under certain contracts. Neither the hospitals nor Meta said they had such contracts in place, and The Markup found no evidence that the hospitals or Meta were otherwise obtaining patients’ express consent.

“I am deeply troubled by what [the hospitals] are doing with the capture of their data and the sharing of it,” said David Holtzman, a health privacy consultant who previously served as a senior privacy adviser in the U.S. Department of Health and Human Services’ Office for Civil Rights, which enforces HIPAA. “I cannot say [sharing this data] is for certain a HIPAA violation. It is quite likely a HIPAA violation.”

University Hospitals Cleveland Medical Center spokesperson George Stamatis did not respond to The Markup’s questions but said in a brief statement that the hospital “comport(s) with all applicable federal and state laws and regulatory requirements.”

After reviewing The Markup’s findings, Froedtert Hospital removed the Meta Pixel from its website “out of an abundance of caution,” Steve Schooff, a spokesperson for the hospital, wrote in a statement. As of June 15, six other hospitals had also removed pixels from their appointment booking pages, and at least five of the seven health systems that had Meta Pixels installed in their patient portals had removed those pixels.
The 33 hospitals The Markup found sending patient appointment details to Facebook collectively reported more than 26 million patient admissions and outpatient visits in 2020, according to the most recent data available from the American Hospital Association. Our investigation was limited to just over 100 hospitals; the data sharing likely affects many more patients and institutions than we identified.

Facebook itself is not subject to HIPAA, but the experts interviewed for this story expressed concerns about how the advertising giant might use the personal health data it’s collecting for its own profit.

“This is an extreme example of exactly how far the tentacles of Big Tech reach into what we think of as a protected data space,” said Nicholson Price, a University of Michigan law professor who studies big data and health care. “I think this is creepy, problematic, and potentially illegal” from the hospitals’ point of view.

The Markup was unable to determine whether Facebook used the data to target advertisements, train its recommendation algorithms, or profit in other ways. Facebook’s parent company, Meta, did not respond to questions. Instead, spokesperson Dale Hogan sent a brief email paraphrasing the company’s sensitive health data policy.

“If Meta’s signals filtering systems detect that a business is sending potentially sensitive health data from their app or website through their use of Meta Business Tools, which in some cases can happen in error, that potentially sensitive data will be removed before it can be stored in our ads systems,” Hogan wrote.

Meta did not respond to follow-up questions, but Hogan appears to be referencing a sensitive health information filtering system that the company launched in July 2020 in response to a Wall Street Journal article and New York Department of Financial Services investigation.
Meta told the investigators that the filtering system was “not yet operating with complete accuracy,” according to the department’s February 2021 final report. The Markup was unable to confirm whether any of the data referenced in this story was in fact removed before being stored by Meta. However, a recent joint investigation with Reveal found that Meta’s sensitive health information filtering system didn’t block information about appointments a reporter requested with crisis pregnancy centers.

Internally, Facebook employees have been blunt about how well—or not so well—the company generally protects sensitive data. “We do not have an adequate level of control and explainability over how our systems use data, and thus we can’t confidently make controlled policy changes or external commitments such as ‘we will not use X data for Y purpose,’” Facebook engineers on the ad and business product team wrote in a 2021 privacy overview that was leaked to Vice.

“Almost any patient would be shocked”

The Meta Pixel is a snippet of code that tracks users as they navigate through a website, logging which pages they visit, which buttons they click, and certain information they enter into forms. It’s one of the most prolific tracking tools on the Internet—present on more than 30 percent of the most popular sites on the web, according to The Markup’s analysis.

In exchange for installing its pixel, Meta provides website owners analytics about the ads they’ve placed on Facebook and Instagram and tools to target people who’ve visited their website.

The Meta Pixel sends information to Facebook via scripts running in a person’s Internet browser, so each data packet comes labeled with an IP address that can be used in combination with other data to identify an individual or household. HIPAA lists IP addresses as one of the 18 identifiers that, when linked to information about a person’s health conditions, care, or payment, can qualify the data as protected health information.
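The kind of data packet described above can be sketched in miniature. The following Python snippet is a hypothetical illustration, not Meta's actual payload format or field names: it only shows how a click event bundles the button text, the page URL, and any extra form data into one packet, while the visitor's IP address arrives implicitly with the HTTP request that delivers it.

```python
import json

# Hypothetical illustration (not Meta's actual payload format or field
# names): the kind of event record a tracking pixel can assemble when a
# visitor clicks a button on a page.
def build_click_event(page_url, button_text, extra_fields):
    """Bundle a click event into one JSON packet. The visitor's IP address
    is not a field here: it arrives implicitly with the HTTP request that
    delivers the packet to the tracking endpoint."""
    event = {
        "event": "button_click",
        "page": page_url,
        "button_text": button_text,
    }
    event.update(extra_fields)  # e.g. dropdown selections, search terms
    return json.dumps(event)

payload = build_click_event(
    "https://hospital.example/doctors/dr-smith",
    "Schedule Online",
    {"search_term": "pregnancy termination"},
)
print(payload)
```

Even in this toy form, the point The Markup documents is visible: the button text and the search term travel together in a single packet tied to the request's source address.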
Unlike anonymized or aggregate health data, protected health information can’t be shared by hospitals with third parties except under the strict terms of business associate agreements that restrict how the data can be used.

In addition, if a patient is logged in to Facebook when they visit a hospital’s website where a Meta Pixel is installed, some browsers will attach third-party cookies—another tracking mechanism—that allow Meta to link pixel data to specific Facebook accounts. And in several cases we found—using both dummy accounts created by our reporters and data from Mozilla Rally volunteers—that the Meta Pixel made it even easier to identify patients.

When The Markup clicked the “Finish Booking” button on a Scripps Memorial Hospital doctor’s page, the pixel sent Facebook not just the name of the doctor and her field of medicine but also the first name, last name, email address, phone number, zip code, and city of residence we entered into the booking form. The Meta Pixel “hashed” those personal details—obscuring them through a form of cryptography—before sending them to Facebook. But that hashing doesn’t prevent Facebook from using the data. In fact, Meta explicitly uses the hashed information to link pixel data to Facebook profiles. Using a free online tool, The Markup was also able to reverse most of our hashed test information that the pixel on Scripps Memorial Hospital’s website sent to Facebook.

Scripps Memorial didn’t respond to The Markup’s questions, but it did remove the Meta Pixel from the final webpages in the appointment booking process after we shared our findings with the hospital. On other hospitals’ websites, we documented the Meta Pixel collecting similarly intimate information about real patients.
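Why hashing the form fields doesn't anonymize them can be shown in a few lines. Below is a minimal Python sketch, assuming SHA-256 and the common normalization of trimming and lowercasing the value before hashing; the email address is invented:

```python
import hashlib

# Why hashing identifiers is not anonymization: the hash is deterministic,
# so anyone holding the same candidate values can recompute and match it.
def sha256_hex(value: str) -> str:
    # Trimming and lowercasing first is a common normalization step,
    # so differently-cased copies of an address hash identically.
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# A pixel-style sender hashes the email before transmitting it...
sent_hash = sha256_hex("Jane.Doe@example.com")

# ...but a receiver with a list of candidate emails can "reverse" the hash
# by hashing each candidate and comparing (a simple lookup-table attack).
candidates = ["john@example.com", "jane.doe@example.com", "pat@example.com"]
lookup = {sha256_hex(c): c for c in candidates}
recovered = lookup.get(sent_hash)
print(recovered)  # prints jane.doe@example.com despite the hashing
```

This lookup-table approach is essentially what the "free online tool" mentioned above automates at scale, and it is also how hashed details can be matched to existing account records.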
When one real patient who participated in the Pixel Hunt study logged in to the MyChart portal for Piedmont Healthcare, a Georgia health system, the Meta Pixel installed in the portal told Facebook the patient’s name, the name of their doctor, and the time of their upcoming appointment, according to data collected by the participant’s Mozilla Rally browser extension. When another Pixel Hunt participant used the MyChart portal for Novant Health, a North Carolina–based health system, the pixel told Facebook the type of allergic reaction the patient had to a specific medication.

The Markup created our own MyChart account through Novant Health to further investigate and found the Meta Pixel collecting a variety of other sensitive information. Clicking on one button prompted the pixel to tell Facebook the name and dosage of a medication in our health record, as well as any notes we had entered about the prescription. The pixel also told Facebook which button we clicked in response to a question about sexual orientation.

“Our Meta pixel placement is guided by a third party vendor and it has been removed while we continue to look into this matter,” Novant spokesperson Megan Rivers wrote in an email.

Epic Systems, the software company behind MyChart, has “specifically recommended heightened caution around the use of custom analytics scripts,” Stirling Martin, a senior vice president for the company, wrote in an email.

Facebook is able to infer intimate details about people’s health conditions using other means—for example, the fact that a person “liked” a Facebook group associated with a particular disease—but the data collected by pixels on hospitals’ websites is more direct. And in sharing it with Facebook, experts said, health care providers risk damaging patients’ trust in an increasingly digitized health system.
“Almost any patient would be shocked to find out that Facebook is being provided an easy way to associate their prescriptions with their name,” said Glenn Cohen, faculty director of Harvard Law School’s Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics. “Even if perhaps there’s something in the legal architecture that permits this to be lawful, it’s totally outside the expectations of what patients think the health privacy laws are doing for them.”

Legal implications

Facebook’s data collection on hospital websites has been the subject of class-action lawsuits in several states, with mixed results. Those cases involve types of data that health law experts said are sensitive but less regulated than the health information The Markup documented the Meta Pixel collecting.

In 2016, a group of plaintiffs sued Facebook and a handful of health systems and organizations, alleging that the organizations had breached their own privacy policies and several state and federal laws—including wiretapping and intrusion on seclusion statutes—by collecting data via tracking technology on the health care providers’ websites. The US District Court for the Northern District of California dismissed that case in 2017 for a variety of reasons, including that the plaintiffs failed to prove that Facebook had collected “protected health information,” as defined by HIPAA. Rather, the court found, Facebook had tracked plaintiffs on public-facing pages of the websites—such as the homepage or informational pages about diseases—where there was no evidence that the plaintiffs had established a patient relationship with the provider.
In 2019, plaintiffs brought a similar class-action lawsuit in Suffolk County Superior Court against Massachusetts-based Partners Healthcare System, which has since changed its name to Mass General Brigham, alleging that the system had violated patients’ privacy and its own policies by installing the Meta Pixel and other tracking tools on its websites. The parties settled the case in January, with Mass General Brigham denying the allegations and admitting no wrongdoing or liability but paying $18.4 million to the plaintiffs and their attorneys.

After the settlement, Mass General Brigham appears to have removed the Meta Pixel and other tracking tools from many of its hospitals’ websites—but not all of them. When The Markup tested the website of Brigham and Women’s Faulkner Hospital, clicking the “Request Appointment” button on a doctor’s page caused the Meta Pixel to send Facebook the text of the button, the doctor’s name, and the doctor’s field of medicine. Mass General did not respond to The Markup’s request for comment. As with all such data we found the Meta Pixel collecting, it was sent to Facebook along with our computer’s public IP address.

“When an individual has sought out a provider and indicated that they want to make an appointment, at that point, any individually identifiable health information that they’ve provided in this session, in the past, or certainly in the future, is protected under HIPAA and could not be shared with a third party like Facebook,” Holtzman said.

The US Department of Health and Human Services’ Office for Civil Rights “cannot comment on open or potential investigations,” spokesperson Rachel Seeger wrote in an emailed statement.
“Generally, HIPAA covered entities and business associates should not be sharing identifiable information with social media companies unless they have HIPAA authorization [from the individual] and consent under state law,” said Iliana Peters, a privacy lawyer with the firm Polsinelli, who previously headed HIPAA enforcement for the Office for Civil Rights.

Patients have the right to file HIPAA complaints with their medical providers, who are required to investigate the complaints, Peters said, adding, “I would hope that institutions would respond quickly to those types of complaints so that they aren’t escalated to a state or federal regulator.”

“Plausible deniability”

Most of the hospitals The Markup contacted for this story did not respond to our questions or explain why they chose to install the Meta Pixel on their websites. But some did defend their use of the tracker.

“The use of this type of code was vetted,” wrote Chris King, a spokesperson for Northwestern Memorial Hospital, in Chicago. King did not respond to follow-up questions about the vetting process.

King said that no protected health information is hosted on or accessible through Northwestern Memorial’s website and that “Facebook automatically recognizes anything that might be close to personal information and does not store this data.” In fact, Meta explicitly states in its business tools terms of service that the pixel and other trackers do collect personally identifiable information for a variety of purposes.

Houston Methodist Hospital, in Texas, was the only institution to provide detailed responses to The Markup’s questions. The hospital began using the pixel in 2017, spokesperson Stefanie Asin wrote, and is “confident” in Facebook’s safeguards and that the data being shared isn’t protected health information.
When The Markup tested Houston Methodist’s website, clicking the “Schedule Appointment” button on a doctor’s page prompted the Meta Pixel to send Facebook the text of the button, the name of the doctor, and the search term we used to find the doctor: “Home abortion.”

Houston Methodist doesn’t categorize that data as protected health information, Asin wrote, because a person who clicks the “Schedule Appointment” button may not follow through and confirm the appointment, or they may be booking the appointment for a family member rather than for themself. “The click doesn’t mean they scheduled,” she wrote. “It’s also worth noting that people often are exploring for a spouse, friend, [or] elderly parent.”

Asin added that Houston Methodist believes Facebook “uses tools to detect and reject any health information, providing a barrier that prevents passage of [protected health information].”

Despite defending its use of the Meta Pixel, Houston Methodist Hospital removed the pixel from its website several days after responding to The Markup’s questions. “Since our further examination of the topic is ongoing, we elected to remove the pixel for now to be sure we are doing everything we can to protect our patients’ privacy while we are evaluating,” Asin wrote in a follow-up email.

Facebook did not launch its sensitive health data filtering system until July 2020, three years after Houston Methodist began using the pixel, according to the New York Department of Financial Services’ investigation. And as recently as February of last year, the department reported that the system’s accuracy was poor. That type of Band-Aid fix is a prime example, privacy advocates say, of the online advertising industry’s inability to police itself.
“The evil genius of Facebook’s system is they create this little piece of code that does the snooping for them and then they just put it out into the universe and Facebook can try to claim plausible deniability,” said Alan Butler, executive director of the Electronic Privacy Information Center. “The fact that this is out there in the wild on the websites of hospitals is evidence of how broken the rules are.”

Facebook is receiving sensitive medical information from hospital websites
  3. There has been no shortage of anti-piracy initiatives in Denmark in recent years. Shutdowns, raids, arrests, and site-blocking orders have repeatedly made headlines. Against this backdrop, it's surprising to see that piracy has become more popular, with YouTube and Facebook now the top sources for accessing content illegally. Meanwhile, password sharing is a growing concern too.

Over the past two years, Danish law enforcement authorities have effectively dismantled the once-thriving local torrent tracker scene. These enforcement actions were combined with various court orders that compel Internet providers to block pirate sites. In theory, these efforts should have created the perfect environment for piracy to decrease. On the ground, the opposite is happening.

Piracy Rises in Denmark

Danish anti-piracy group Rights Alliance reports that illegal consumption of movie and TV content has reached its highest level in nine years. This number comes from an annual survey of more than a thousand respondents, carried out by Mediavision.

“As many as 580,000 Danes watch films and series illegally, which is an increase of 80,000 Danish pirates since the spring of 2021,” Rights Alliance notes.

According to the latest data, 13 percent of the Danish population between the ages of 15 and 74 regularly pirate film and TV content. Compared to previous years, people are increasingly downloading and streaming locally produced content.

YouTube and Facebook Pirates

Interestingly, when respondents were asked about the sites they use to access content illegally, YouTube and Facebook were mentioned most often. Nearly half of all Danish pirates (48%) use YouTube to access content illegally, with 42% using Facebook. These two “big tech” sites are far more popular than traditional pirate sources such as Popcorn Time and Solarmovie. The survey further reveals that almost 30% of pirates use YouTube and Facebook exclusively.
The remainder are scattered across more than a dozen alternative pirate sources, of which the streaming sites FirstRow Sports and Mr. Swe are growing most rapidly.

Deeply Disturbing

Rights Alliance director Maria Fredenslund is concerned about this “deeply disturbing” trend that’s clearly going in the wrong direction.

“We are seeing an alarming increase in the volume of illegal consumption, which is mainly due to more and more people streaming content illegally via Facebook and YouTube,” Fredenslund says. “Although these platforms now are required to provide effective tools to ensure that illegal content cannot be uploaded, Mediavision’s investigation shows that we have not yet seen the effect of this.”

Fredenslund hopes that both YouTube and Facebook, partly motivated by new legislation such as the EU Copyright Directive, will do more to prevent copyright-infringing content from appearing on their platforms.

Blockades & Workarounds

In Denmark, Rights Alliance has been the driving force behind many enforcement actions. The group helped to obtain several site-blocking orders and, following negotiations, all major ISPs adhere to these. These blockades obviously don’t apply to YouTube and Facebook, and for traditional pirate sites, people can find workarounds. Mediavision’s survey found that more than a third of all Danish pirates use VPNs (38%) or alternate DNS servers (36%) to get access to blocked sites.

Password Sharing

Aside from piracy, rightsholders also face another challenge. The number of people who share passwords to subscription services such as Netflix and Disney+ is growing as well. Last year, 31% of all Danes shared passwords to streaming services with people outside of their household. In 2022, this percentage has grown to 37%, even though most people know that it’s against streaming platforms’ terms of service. While password sharing is not counted as piracy, it likely results in a loss of revenue.
Nearly half of all respondents indicated that they would sign up for a subscription if sharing were no longer possible.

“Sharing passwords may seem harmless to the individual user, but it is not allowed, and it affects the entire food chain when content is not paid for,” Fredenslund says, calling for stricter enforcement.

All in all, these data reveal that unauthorized media consumption is hard to curb. While extra enforcement efforts may help, rightsholders may also want to look into affordable legal options, as opposed to requiring people to take out a handful of subscriptions.

YouTube and Facebook Are the Top ‘Pirate Sites’ in Denmark
  4. Regulators across the globe are pressing social media companies for stricter privacy measures, which will change how these companies handle their users' data in the first place. As it turns out, the fundamental problem with Facebook is that it has no idea where all of its user data goes or what the company is doing with it, according to a leaked document.

The document was written by Facebook privacy engineers on the Ad and Business Product team, whose mission is “to make meaningful connections between people and businesses.” The team is responsible for building and maintaining Facebook's massive ads system - the core of its business.

The document reveals a rather alarming tone from the engineers, who are calling for change in how Facebook deals with user data. It is very common for Facebook to run into regulatory and compliance issues regarding the privacy of its users. The US, India, Europe, and other jurisdictions are becoming more stringent about regulations, which has recently put many social media companies, including Facebook, in trouble.

“We can’t confidently make controlled policy changes or external commitments such as ‘we will not use X data for Y purpose.’ And yet, this is exactly what regulators expect us to do”

Last year, previously leaked documents had revealed Facebook's failures in handling misinformation, suggesting that the leadership made decisions to avoid angering the Indian government. A researcher who set up an account as a user in India in 2019 found that by following Facebook's algorithm recommendations, they saw “more images of dead people in the past three weeks than I’ve seen in my entire life total,” according to The New York Times.

Facebook's own engineers admit that they are struggling to keep track of where user data goes once it's inside their systems. However, regulations like the EU's GDPR limit platforms like Facebook in how they can use their users' data.
In Article 5, the GDPR mandates that personal data must be “collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes.” This essentially means that every bit of data Facebook collects can only be collected and used for a specific purpose and cannot be reused for another purpose. Facebook had been under fire for using its users' phone numbers for its "people you may know" feature; after getting caught, the company eventually had to stop the practice.

The engineers tried to explain what's wrong with Facebook using an analogy inside the document:

“Imagine you hold a bottle of ink in your hand. This bottle of ink is a mixture of all kinds of user data (3PD, 1PD, SCD, Europe, etc.) You pour that ink into a lake of water (our open data systems; our open culture) … and it flows … everywhere,” the document read. “How do you put that ink back in the bottle? How do you organize it again, such that it only flows to the allowed places in the lake?”

For reference, 3PD means third-party data; 1PD means first-party data; SCD means sensitive categories data.

The leaked document highlights how data can become a mess in the absence of data-governance efforts from the beginning. Facebook says that even though it does not have technical control over every piece of data, it is investing in tools to build the infrastructure needed to meet the requirements it may face.

Source: Vice

Leaked document reveals Facebook has no control over user data and where it's going
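The purpose-limitation rule in Article 5 can be restated as a programming discipline: tag each datum with the purposes it was collected for, and refuse any other use instead of silently allowing it. Below is a minimal Python sketch of that idea, with hypothetical class and purpose names (this is not Facebook's infrastructure):

```python
# A minimal sketch of GDPR Article 5's purpose-limitation idea, with
# hypothetical names: each datum carries the purposes it was collected
# for, and any other use is rejected rather than silently allowed.
class PurposeViolation(Exception):
    pass

class TaggedData:
    def __init__(self, value, allowed_purposes):
        self.value = value
        self.allowed_purposes = set(allowed_purposes)

    def use(self, purpose):
        # Every access must declare its purpose up front.
        if purpose not in self.allowed_purposes:
            raise PurposeViolation(f"{purpose!r} is not a permitted purpose")
        return self.value

# A phone number collected for two-factor authentication...
phone = TaggedData("+45 1234 5678", {"two_factor_auth"})
print(phone.use("two_factor_auth"))   # the declared purpose: allowed

# ...cannot be reused to power a "people you may know"-style feature.
try:
    phone.use("friend_suggestions")
except PurposeViolation as e:
    print("blocked:", e)
```

The ink-and-lake analogy describes the opposite architecture: once data flows into open systems without such tags, there is no record of the purpose it was collected for, and retrofitting these checks becomes the hard part.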
  5. Facebook and the Folly of Self-Regulation

The company's new review board is designed to move slowly and keep things intact.

Photo-Illustration: Sam Whitney; Getty Images

My late colleague, Neil Postman, used to ask about any new proposal or technology, “What problem does it propose to solve?” When it comes to Facebook, that problem was maintaining relationships over vast time and space. And the company has solved it, spectacularly. Along the way, as Postman would have predicted, it created many more problems.

Last week, Facebook revealed the leaders and first 20 members of its new review board. They are an august collection of some of the sharpest minds who have considered questions of free expression, human rights, and legal processes. They represent a stratum of cosmopolitan intelligentsia quite well, while appearing to generate some semblance of global diversity. These distinguished scholars, lawyers, and activists are charged with generating high-minded deliberation about what is fit and proper for Facebook to host. It’s a good look for Facebook—as long as no one looks too closely.

What problems does the new Facebook review board propose to solve? In an op-ed in The New York Times, the board’s new leadership declared: “The oversight board will focus on the most challenging content issues for Facebook, including in areas such as hate speech, harassment, and protecting people’s safety and privacy. It will make final and binding decisions on whether specific content should be allowed or removed from Facebook and Instagram (which Facebook owns).”

Only in the narrowest and most trivial of ways does this board have any such power. The new Facebook review board will have no influence over anything that really matters in the world. It will hear only individual appeals about specific content that the company has removed from the service—and only a fraction of those appeals.
The board can’t say anything about the toxic content that Facebook allows and promotes on the site. It will have no authority over advertising or the massive surveillance that makes Facebook ads so valuable. It won’t curb disinformation campaigns or dangerous conspiracies. It has no influence on the sorts of harassment that regularly occur on Facebook or (Facebook-owned) WhatsApp. It won’t dictate policy for Facebook Groups, where much of the most dangerous content thrives. And most importantly, the board will have no say over how the algorithms work and thus what gets amplified or muffled by the real power of Facebook. This board has been hailed as a grand experiment in creative corporate governance. St. John’s University Law Professor Kate Klonick, the scholar most familiar with the process that generated this board, said, “This is the first time a private transnational company has voluntarily assigned a part of its policies to an external body like this.” That’s not exactly the case. Industry groups have long practiced such self-regulation through outside bodies, with infamously mixed results. But there is no industry group to set standards and rules for Facebook. One-third of humanity uses the platform regularly. No other company has ever come close to having that level of power and influence. Facebook is an industry—and thus an industry group—unto itself. This is unprecedented, though, because Facebook ultimately controls the board, not the other way around. We have seen this movie before. In the 1930s the Motion Picture Association of America, under the leadership of former U.S. Postmaster General Will Hays, instituted a strict code that prohibited major Hollywood studios from showing, among other things, “dances which emphasize indecent movements.” The code also ensured that “the use of the [U.S.] flag shall be consistently respected.” By the 1960s, American cultural mores had broadened and directors demanded more freedom to display sex and violence. 
So the MPAA abandoned the Hays code and adopted the ratings system familiar to American moviegoers (G, PG, PG-13, R, NC-17). One reason the MPAA moved from strict prohibitions to consumer warnings was that American courts had expanded First Amendment protection for films, limiting how local governments could censor them. But all along, the MPAA practiced an explicit form of self-regulation, using a cartel that represented the interests of the most powerful studios to police behavior and represent the industry as a whole to regulators and the public. No one can look at the history of American film and seriously argue that either method of self-regulation really served the public. Standards have been sloppily and inconsistently enforced. Through both the Hays code and the rating system, the MPAA limited artistic expression and the representation of lesbian, gay, and transgender issues and stories. But it sure helped Hollywood by keeping regulators at bay. Relevant to the Facebook comparison, the MPAA applies American standards of decency to set its ratings, while the motion picture industry is a transnational power. Studios are much more sensitive to the demands of the government of the People’s Republic of China than they are to the U.S. Senate. The same can be said of Facebook: Using American diction about “free expression” and American ways of thinking to influence a global company is folly. It’s one of the core errors that Facebook made internally years ago. Many industries and professional associations have used cartel power to self-regulate, or at least create the appearance of doing so. The American Bar Association grants accreditation to law schools and thus dictates the content and quality of legal education. It also establishes an ethical code for practicing lawyers. This is substantial power beyond the reach of the state. 
But, as we have seen in the global mining and textile industries, declarations of labor safety and wage standards don’t mean much in practice. Self-regulation is an excellent way to appear to promote particular values and keep scrutiny and regulation to a minimum. When self-regulation succeeds at improving conditions for consumers, citizens or workers, it does so by establishing deliberative bodies that can act swiftly and firmly, and generate clear, enforceable codes of conduct. If one movie studio starts dodging the ratings process, the MPAA and its other members can pressure theatres and other delivery channels to stop showing that studio’s films. The MPAA can also expel a studio, depriving it of the political capital generated by the association’s decades of campaign contributions and lobbying. The Facebook board has no such power. It can’t generate a general code of conduct on its own, or consider worst-case scenarios to advise the company how to minimize the risk of harm. That would mean acting like a real advisory board. This one is neutered from the start because someone had the stupid idea that it should perform a quasi-judiciary role, examining cases one-by-one. We know the process will be slow and plodding. Faux-judicial processes might seem deliberative, but they are narrow by design. The core attribute of the common law is conservatism. Nothing can change quickly. Law is set by courts through the act of cohering to previous decisions. Tradition and predictability are paramount values. So is stability for stability’s sake. But on Facebook, as in global and ethnic conflict, the environment is tumultuous and changing all the time. Calls for mass violence spring up, seemingly out of nowhere. They take new forms as cultures and conditions shift. Facebook moves fast and breaks things like democracy. This review board is designed to move slowly and preserve things like Facebook. 
This review board will provide a creaking, idealistic, simplistic solution to a trivial problem. The stuff that Facebook deletes creates an inconvenience to some people. Facebook makes a lot of mistakes, and dealing with the Facebook bureaucracy is next to impossible. But Facebook is not the whole Internet, let alone the whole information ecosystem. And Facebook is not the only way people communicate and learn things (yet). The most notable anecdote that inspired the idea for this board involved the 1972 photograph of nine-year-old Kim Phúc running away from a U.S. napalm attack in Vietnam. When, in 2016, the Norwegian newspaper Aftenposten included the image in a story, Facebook asked the newspaper to remove or pixelize the image because it violated the general rule against nudity on the site. After much uproar, Facebook restored the image. So, ultimately, the controversy did not matter. Problem solved. And even without Facebook, there are hundreds of sources of the same image and deep accounts of its historical significance. Since then, Facebook has tried to be both more aggressive in its content removal practices and more thoughtful about the standards it uses. The review board is a high-profile extension of that effort. The Boolean question of whether, say, a photograph that someone posted remains “on Facebook” is trivial. That question is a vestige of an 18th-century model of “free speech,” and ignores differences of power and how speech works in the real world. It was a bad model for assessing the health of communication more than 200 years ago. It’s absurd now, in the age of opaque algorithms. The initial review board includes no one with expertise in confronting the algorithmic amplification of propaganda, disinformation, or misinformation. It has no anthropologists or linguists. Of the 20 members, only one, Nicolas Suzor of Queensland University of Technology in Australia, is an internationally regarded scholarly expert on social media. 
In other words, it was established and appointed to favor one and only one value: Free expression. As important as this value is, the duty of protecting both Facebook users and the company itself demands attention to competing values such as safety and dignity. This board is also stacked with a disproportionate number of Americans who tend to view these issues through American legal history and conflicts. The original 20 includes five Americans, none of whom have any deep knowledge of how social media operate around the world. In contrast, the board has only one member from India—the country with more Facebook users than any other. India is home to more than 22 major languages and 700 dialects. The majority-Hindu nation has more Muslim citizens than any other country except Indonesia, along with millions of Buddhists, Christians, Jews, and Baha'is. Facebook and WhatsApp have been deployed by violent Hindu nationalists (aligned closely with the ruling BJP of Prime Minister Narendra Modi, the most popular politician on Facebook) to terrorize Muslims, Christians, journalists, scholars, and anyone who criticizes the central government’s efforts to make India a brutal, nationalistic theocracy. Is this board prepared to consider the breadth and depth of the problems that Facebook amplifies in India, let alone in Pakistan, Sri Lanka, Bangladesh or Myanmar? The lone board member from India, Sudhir Krishnaswamy, is an esteemed legal scholar and civil rights advocate. But how many of those 22 languages does he know? Would he be able to parse the linguistic and cultural nuance of an ethnic slur expressed in Marathi, the language of 83 million people in the state of Maharashtra; or Sinhalese, the major language of 17 million in the Republic of Sri Lanka? 
Given there are almost 300 million regular Facebook users in a country with 1.2 billion people, how would Krishnaswamy guide the process of choosing among the thousands of complaints that are sure to come from this growing and agitated population? The very idea that the board could make the slightest bit of difference to any of the life-or-death conflicts that play out on Facebook every day is absurd. Just ask yourself, “What about this board’s authority could save lives in Myanmar?” The answer is, nothing. “What about this board’s authority could minimize coordinated attacks on the workings of democracies around the world?” The answer is, nothing. “What about this board’s authority could limit coordinated harassment of activists, journalists, and scholars by major political parties?” The answer is, nothing. “What about this board’s authority could circumscribe Facebook’s ability to record and use every aspect of your movements and interests?” The answer is, of course, nothing. Ultimately, this board will influence none of the things that make Facebook Facebook: Global scale (2.5 billion users in more than 100 languages); targeted ads (enabled by surveillance); and algorithmic amplification of some content rather than other content. The problem with Facebook is not that a photograph came down that one time. The problem with Facebook is Facebook. Source: Facebook and the Folly of Self-Regulation (Wired)
  6. Facebook’s redesigned desktop site with dark mode is now available everywhere Toggle on the new Facebook now if you haven’t already Facebook’s redesigned desktop site is now available globally, the company announced on Friday. Prior to this change, the new version of Facebook was available only if you opted into it starting in March and only in some markets, although the company said at the time that a majority of people would get access that month. In a blog post published today, Facebook says the new site will “now be the web experience for Facebook globally,” adding that “it’s faster, easier to use and gives your eyes a break with dark mode.” Dark mode is indeed the standout of the new version, but the update also delivers another pretty dramatic overhaul. The home layout features a new skinnier News Feed, ample empty space on the left and right rails, and larger icons and a menu bar that lets you easily jump to various parts of the app. For those who want to turn on the new design and enable dark mode immediately, here’s how:

  1. Click on the down arrow at the end of the upper menu bar to pull up old Facebook’s settings menu.
  2. Click “Switch to new Facebook.”
  3. Click the same down arrow and toggle dark mode from off to on.

The goal when Facebook first unveiled the desktop redesign at its F8 developer conference in 2019 was to refocus the core web experience around the areas most people still enjoy using: events, groups, and messaging. It wasn’t quite the end of the News Feed per se, but Facebook leadership admitted that it’s now an antiquated concept to have an algorithmic feed filled mostly with junk from pages and public posts. People are retreating more and more into private groups and messaging, and Facebook’s only core utility right now is that it’s still the best way to organize a large event with friends or reach out to someone you know whose contact info you don’t have. 
CEO Mark Zuckerberg said as much when, a few months prior to the redesign reveal, he announced a shift for his entire company toward privacy-focused products and features. And unless you’re in Facebook’s older demographic or you’re particularly into the site’s unique brand of hyper-partisan political content, most people have moved on to spending more of their time on Instagram, TikTok, Twitch, YouTube, and other platforms. The new Facebook reflects the company’s priorities, with a focus on videos for the News Feed (because video ads still earn the company a lot of money) and easy access to events and groups alongside a redesigned Messenger panel. It can be a jarring visual change, but it is a much more pleasant Facebook to use once you know your way around the new interface. Source: Facebook’s redesigned desktop site with dark mode is now available everywhere (The Verge)
  7. Why a small Facebook bug wreaked havoc on some of the most popular iOS apps Facebook’s near-ubiquitous SDK broke yesterday, taking major mobile apps with it Sometime around 6:30PM ET on May 6th, popular iOS apps from major companies like DoorDash, Spotify, TikTok, and Venmo suddenly started crashing. The culprit didn’t remain a mystery for long. Developers on Twitter and GitHub quickly discovered the cause to be an issue with the software development kit (SDK) from Facebook, which is interwoven into the operation of countless mobile apps from companies large and small. The problem, while resolved rather quickly by Facebook, illustrates the scope of the social network’s platform and how even minor issues can have major ripple effects throughout the mobile software industry. “Earlier today, a new release of Facebook included a change that triggered crashes for some users in some apps using the Facebook iOS SDK,” a Facebook spokesperson told The Verge yesterday in a statement. “We identified the issue quickly and resolved it. We apologize for any inconvenience.” The Facebook SDK is a bundle of software tools for developers that helps power features like signing in with a Facebook account and providing share to Facebook buttons. The issue was not unique to iOS; it could have happened to the Android SDK and, in this case, simply affected Apple’s platform. Yet Facebook didn’t exactly say what the issue was or how the new release of the SDK could have triggered the crashes. It also wasn’t clear why so many apps were so detrimentally affected, even when the user experiencing the crash didn’t log in with Facebook or even when the app itself didn’t make ample use of the SDK or rely on Facebook features. According to app developer Guilherme Rambo, the issue lies with the way Facebook markets its developer toolset. 
“Facebook really pushes developers into installing their SDK, likely because they want the very rich data they can collect on those app’s users. The SDK is offered as a convenience for both developers and marketing teams, since it can also be used to track the conversions of ads run through Facebook,” he explained to The Verge over email. (Rambo also has an analysis of his own posted to his website here.) For instance, he says, if you want to run an ad campaign for your mobile app through Facebook, the only way to get valuable insight into the campaign’s performance is to install the company’s SDK. “Another major reason is the infamous ‘sign in with Facebook’ we see in many apps, which can be implemented without using their SDK at all, but since using the SDK is more convenient, many companies end up going through that route instead,” he says. But if there’s an issue with the SDK, as was the case yesterday, then it has the potential to take everything down with it. Facebook pushed a server-side change to its SDK, which meant no developer had any say in whether their app would be communicating with the older, stable version or the newer broken one. And because an app communicates with the SDK every time it is opened by a user, the result was a cascading series of errors that led to full-blown app crashes. “The issue was that the SDK was expecting a server reply in a certain format, which on Wednesday, the Facebook servers were not providing,” wrote ZDNet’s Catalin Cimpanu, who cited technical analyses of the situation on GitHub and HackerNews. “Without the proper response, the Facebook SDK crashed, also bringing down all the apps that used it.” It also appears that, once affected, there was little any developer could do to restore service until Facebook fixed the issue on its end. Rambo says there should be ways to prevent this from happening, including developers deciding to implement sign-in with Facebook without using the company’s SDK. 
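The failure mode Cimpanu describes, an SDK that assumes a server-controlled reply always has a fixed shape, can be sketched in a few lines. This is not Facebook's actual SDK code; the function and field names (`parse_config_reply`, `"features"`) are invented for illustration. The point is that validating the reply's type and falling back to defaults fails soft, whereas indexing into it blindly crashes the whole host app:

```python
# Illustrative sketch (not Facebook's real SDK): parse a remote-config reply
# defensively so an unexpected server-side format change cannot crash the app.
import json


def parse_config_reply(raw: str) -> dict:
    """Return feature flags from a server reply, or safe defaults on any mismatch."""
    defaults = {"features": {}}
    try:
        reply = json.loads(raw)
    except json.JSONDecodeError:
        return defaults
    # The May 2020 crash reportedly came down to code assuming the reply was a
    # dictionary when the server had started sending something else; checking
    # the type before use turns a crash into a no-op.
    if not isinstance(reply, dict) or not isinstance(reply.get("features"), dict):
        return defaults
    return reply


# The shape the client expects parses normally...
good = parse_config_reply('{"features": {"dark_mode": true}}')
# ...while a surprise format change degrades gracefully instead of crashing.
bad = parse_config_reply('["unexpected", "format"]')
```

Because the broken reply came from Facebook's servers rather than from a new SDK binary, no app update could have shipped this check after the fact, which is why developers could only wait for the server-side fix.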
But other system-level protections are decisions Apple would have to make regarding the permissions it grants third-party SDKs. “The way it works today is if you install an app and that app includes third-party code (such as the Facebook SDK), that third-party code has the same level of permissions and access as the app itself does,” he says. “If you grant the app permission to access your location, contacts or calendar, the third-party code it embeds can also get that information. The only way to fix that would be to implement some form of sandboxing model that separates third-party SDKs from an app’s own code,” he adds. “It’s a big challenge, but I hope Apple’s engineers are working on something like that.” Apple did not respond to a request for comment. That said, developers did not seem especially pleased about the situation. “From what I’ve seen, developers are really frustrated about this, especially because the engineers who have to deal with these types of problems are usually not the ones who have decided to add such an SDK to the app they work on,” Rambo says. He adds that the decision to integrate with Facebook’s developer tools is usually a top-down decision, “many times from the marketing or product teams who only see the benefit of using those types of SDKs (more data, more analytics).” But those types of employees at tech companies “don’t see the enormous amount of engineering hours spent dealing with the problems they can cause in an app,” he says. “Crashes caused by SDKs in major apps are not that uncommon, but I’ve never seen something of this magnitude where an SDK affected so many apps at the same time. I’d say this was an unprecedented event and it shows that something must be changed in the way apps integrate third-party code.” Source: Why a small Facebook bug wreaked havoc on some of the most popular iOS apps (The Verge)
  8. Facebook to let employees work remotely through the end of 2020 The social media giant doesn’t expect to have offices reopened before July Facebook will allow most of its employees to continue working from home through the end of 2020, and the company doesn’t expect to reopen most offices before July 6th, a Facebook spokesperson confirmed to The Verge. CNBC first reported the news and said CEO Mark Zuckerberg would be making a formal announcement today. The company said there were a variety of factors involved in the decision, including information from public health agencies, like the Centers for Disease Control and Johns Hopkins, as well as government guidance. California is beginning to ease some social distancing rules in a multistage process outlined by Gov. Gavin Newsom. Facebook employees have been working from home since March, and the company said it would continue to pay hourly employees who may not be able to work because of reduced staffing, office closures, or if they’re sick. Last month, Zuckerberg announced that Facebook was canceling physical events with more than 50 people through June 2021, including the Oculus Connect VR conference, which was to take place this fall and will now be an online-only event. While some employees’ jobs can’t be done remotely, Zuckerberg said in April, “Overall, we don’t expect to have everyone back in our offices for some time.” Facebook’s decision to allow remote work for most employees through the end of 2020 may be a bellwether for other tech companies, as the social media giant was one of the first tech firms to begin asking employees to work remotely to help prevent the spread of the novel coronavirus. Source: Facebook to let employees work remotely through the end of 2020 (The Verge)
  9. Overnight, it seems that Facebook has culled its platform of the skinhead subculture. In the last 24 hours, people’s personal accounts, groups and business pages have been deactivated, and they all have something in common: a link to the skinhead subculture. Everything from pages liking ska music, to record-selling groups, to 60s-inspired clothing pages was deactivated for nearly 24 hours. In the wake of the Black Lives Matter movement and Facebook’s pledge in March 2019 to ban support of white nationalism from its site, you can’t help but have suspicions as to why people linked to the subculture were deleted. This isn’t to say that the skinhead subculture is one of white nationalism; quite the opposite in some cases. The skinhead subculture was born in the 1960s, and like many identities of the era it was identifiable by a set sound, look and ethos. The post-war sounds of ska and soul music, imported from Jamaica and the US, were at the basis of the late-60s scene. Skinheads were often identified by their namesake shaven head, Dr. Martens boots, braces, straight jeans and a button-down shirt. Women would often wear a Chelsea cut, where the head would be shaved bar the fringe, side-burns and a portion of hair at the nape of the neck. The ethos was one of solidarity in the working classes, a merging of customs creating a newly founded youth culture. As documented by Don Letts, DJ and film director, in his documentary The Story of Skinheads, he talks about the multicultural unity of the subculture. The show depicts the roots of the skinhead and how there is a ‘brilliant cultural collision between the young white working-class kids and their Jamaican counterparts in British inner cities, a moment of racial harmony’. 
In the documentary, Don said: “When I tell people my first point of entry into youth subculture was via skinheads they look somewhat confused not understanding I’m talking about the fashion version, not the fascist version.” He carries on, stating: “I hope my film goes some way to clarifying what was the UK’s first real multi-cultural movement.” The scene was shaped by working-class youths who wanted to reject the middle-class hippie movement of the same era, often finding kinship with the mod and rude boy subcultures, which had a crossover of interests in terms of style and music. However, the second wave of the skinhead came in the mid-1970s to early 80s and was shaped in light of a politically testing time. Identities formed against the background of the emerging Thatcherite era, which saw the name of the skinhead hijacked by white supremacists, or at the time people affiliated with neo-Nazi ideals. In an environment of unemployment and working-class struggles, the movement became politically charged and often characterised by violence. The formation of groups like the National Front and ‘white power skinheads’ soon tarred the name with a racist brush, and the actions of a few soon made the name of the skinhead and racism synonymous with one another. This brand of the culture soon informed how international skinheads often identified, twinning the appearance with a white nationalist mentality, particularly in the United States. ‘Skinheads’ soon became identifiable within the ranks of the US prison system and a label for those involved with the neo-Nazi movement, and this most recent chapter in its history has obscured and somewhat erased the subculture’s multicultural roots. There is no denying that racism within the UK scene was prevalent; however, it would seem that in the more modern climate, the music subculture is one predominately of solidarity. 
‘SHARPs’, or Skinheads Against Racial Prejudice, was formed in the wake of this to remind people who enjoy the music and the clothes of how the subculture came about. In the wake of this blanket ban on skinheads, it seemed for a moment as though a whole community had been wiped out, including pages of avid anti-racists. Neville Staple of The Specials, who is black and one of the pioneers of the 2-tone movement of the 80s, whose purpose was to reunite black and white communities, had his Facebook deactivated. The Ska legend urged Facebook to rethink its moderation policies. He said via Twitter: “Please look into things before doing a general cull. Unity runs through the veins of me and @SugaryStaple [Christine Staple] plus all our 2Tone Ska community’s veins.” Rasha Swais, 30, a fashion designer based in North London who creates skinhead-inspired garments, had both her personal page and her clothing page taken down. She said: “Now I feel really uneasy because as a small business I could have lost a lot of business because of this.” As a small business owner who relies on Facebook for a lot of sales, she has now been asked by Facebook to send in a scan of her passport and has been sent an email on community guidelines. Mrs Swais, who is of Jordanian heritage, said she had to give her ID to the site in order for them to monitor her now that her page is back up. Musician and journalist Garry Bushell blasted Facebook over the move, saying it clearly doesn’t know anything about the music genre. It would seem that in Facebook’s bid to rid the site of white nationalism, it has ignorantly jumped the gun with a blanket ban on a subculture that often openly promotes anti-racism. Facebook has now given a large chunk of the profiles back to those who are part of the scene after much backlash on other platforms throughout the day, but the question still stands as to why it happened in the first place. Source
  10. The tech giants have funded a bevy of political groups, including those producing positive polling, and engaged in other fingerprint-free tactics designed to deter regulators. David Espinoza appeared unhappy when Arizona joined scores of states investigating Google last year. The Phoenix-based owner of a shoe-and-leather store wrote in a local newspaper he was “amazed and a little dumbfounded” by regulators’ campaign to “change how digital platforms operate.” “The current system is working for small businesses, and as the old saying goes: if it ain’t broke, don’t fix it,” he wrote. But Espinoza’s words, published in September by the Arizona Capitol Times, weren’t entirely his own. They were written on his behalf by an advocacy group that’s backed by Google and other tech behemoths, reflecting Silicon Valley’s stealthy new attempts to shape and weaponize public perception in response to heightened antitrust scrutiny. Under the withering microscope of government watchdogs, tech giants including Amazon, Facebook and Google have funded a bevy of political groups that have helped push positive polling and engaged in other fingerprint-free tactics designed to deter regulators who are seeking to break up or penalize the industry. The approach reflects the growing threats they now face from the Justice Department and the country’s top attorneys general, who have been investigating them on antitrust grounds. The Connected Commerce Council, for example, is a Washington-based nonprofit that bills itself as a voice for small businesses. But it counts Amazon, Facebook and Google as “partners,” and in recent months the group known as 3C has put its muscle to work arguing that Silicon Valley giants do not threaten competition, stifle smaller rivals and harm consumers in the process. Espinoza, a bootmaker by profession, said he was approached by 3C last year after he participated in a Google seminar meant to help small businesses better use digital tools. 
The advocacy group then wrote the opinion piece largely on his behalf, which appeared online just days after state attorneys general announced their antitrust probe of the company. The opinion piece did not indicate that 3C largely penned it. Espinoza said he still supported Google, whose technology, including its ad tools now under government investigation, has helped his company reach new customers across the country. But he also said he didn’t know about Google’s relationship with 3C, a group of which he is a member, before being contacted by The Post this week. “I’m not surprised,” Espinoza said. Google is “a big company … and they have the finances to extend themselves as much as they can.” Jake Ward, the president of 3C, said his organization represents thousands of small businesses, not Silicon Valley’s largest players. The organization often seeks to encourage corporate founders to share their views publicly, he added. “It is our responsibility, on behalf of our small business members, to protect the existing model and promote the market, which is working exceedingly well,” Ward said, later adding: “We are not, and will not work for, Big Tech.” Amazon and Facebook declined to comment. (Amazon chief executive Jeff Bezos owns The Washington Post.) Julie Tarallo McAlister, a spokeswoman for Google, said in a statement that the company supports “a range of organizations like the Connected Commerce Council that are working to help small businesses grow and prosper online.” Silicon Valley tech giants — and companies across a range of industries — often back a wide array of advocacy groups to boost their political fortunes. They aren’t required to disclose how much they spend on these organizations and exactly how involved they are in their day-to-day decisions, but ethics watchdogs say their participation alone is important. 
“It is an example of industry spending money and exerting influence, but doing it in a way that is meant to give the impression that it is not coming from industry,” said Noah Bookbinder, the executive director of Citizens for Responsibility and Ethics in Washington, a watchdog group. Bookbinder added: “They wouldn’t be members if they didn’t agree with the thrust of what these organizations are pushing for.” The tech industry’s attempts to shore up its public image in recent months reflect the seriousness of the U.S. government’s new antitrust scrutiny. After years of threats, state and federal leaders have embarked on the kind of inquiries that could result in dramatic changes to the way Amazon, Facebook and Google operate, including punishments that could break apart those companies. With Amazon, regulators are concerned that the e-commerce giant improperly gleans data from third-party sellers in an attempt to give its own products and services an advantage. In looking at Facebook, government officials have probed complaints it has gobbled up its digital rivals, leaving few viable competitors in social networking. And watchdogs have probed Google’s search, advertising and smartphone businesses to determine whether they’ve stifled competition, following in the footsteps of European regulators who have already penalized the company. All three tech giants deny they have violated state and federal antitrust rules. Still, Justice Department officials are expected to file a lawsuit against Google alleging it violated federal competition laws as soon as this month. Nearly every state’s attorney general, meanwhile, could follow with their own complaint in July, The Post previously reported. The antitrust lawsuits come roughly seven years after U.S. officials first probed Google for violating competition law but ultimately decided against bringing a case in court. 
With legal action imminent — and President Trump recently taking fresh, aggressive aim at Silicon Valley — the industry’s largest companies have shelled out sizable sums to lobby in Washington. Amazon, Facebook and Google have spent more than $11 million combined over the first three months of 2020 to influence federal action on a range of issues, including antitrust, according to ethics disclosures filed with Congress. The amount is slightly higher than the same period in 2019. But those figures do not reflect the hard-to-track sums spent by the industry to shape public opinion beyond the Beltway. Many in the tech industry privately say they’ve adopted such tactics because they face an onslaught of criticism from a wide array of new opposition groups, such as the Campaign for Accountability, a nonprofit that has produced research critical of tech companies including Amazon and Google. The group does not list its current backers, and it declined to detail them fully Tuesday. In the past it has courted Google’s critics, including Oracle, though the campaign maintains it is not taking corporate contributions. Facebook, for example, already has invested in a forthcoming advocacy group known as American Edge. The organization shares a similar structure to organizations such as the National Rifle Association, which blitzes airwaves with ads and doesn’t have to disclose its donors. The new tech group has sought to enlist support from other tech companies including Amazon and Google, according to two people familiar with the effort who spoke on the condition of anonymity to discuss private deliberations. It is not clear whether either company intends to join. The Post first reported last month on the group’s imminent plans for launch. The tech industry also has sought to funnel dollars to a wide array of conservative groups in recent years, hoping to earn more favor among Republicans in power at the White House and in Congress. 
That includes the National Taxpayers Union, a right-leaning outfit that typically targets government spending it sees as wasteful. Last month, the NTU tapped a firm that previously polled for Trump’s 2016 presidential campaign to gauge voter sentiment about big tech. They focused their efforts on Arizona, Texas and other states that are actively investigating Google and other companies, and their findings concluded that voters would rather see their attorneys general focusing on other issues, including the opioid epidemic. The survey explicitly asked if states should punish companies including Amazon and Google, which publicly have acknowledged their past financial support of the NTU. Pete Sepp, the president of the organization, declined to discuss the NTU’s donors or the exact reasons it commissioned the poll, though he stressed that he and his organization have worked on competition-related issues for decades. “We have huge historical footprint in the antitrust issue space that transcends any tech firm and goes well before their founding,” he said. Such research — seeking to channel public sentiment — is a battle-tested tactic in antitrust probes, former regulators say. It’s meant to “press upon public officials, and indirectly upon agencies, that [companies] enjoy broad public support for what they’re doing,” said William Kovacic, a top professor at George Washington University’s law school who previously served on the Federal Trade Commission. “To tamper with them in a significant way is to anger the broader public.” Two years ago, the Connected Commerce Council launched as a voice for small businesses, and Ward, its leader, has grown the organization into an operation that represents more than 10,000 entrepreneurs. The group provides technical support, helping owners and employees use tech tools to place ads, manage their checkbooks and reach new customers online, he said. 
The goal was to connect these smaller operators to larger companies, according to Ward, who found that “policy and politics collided pretty quickly on what I was trying to do.” In the end, he has found himself defending Amazon, Facebook and Google because it’s better for the start-ups 3C represents, he said. By September 2018, 3C members had sounded off in support of major tech companies during a regulatory proceeding at the Federal Trade Commission. 3C also helped produce opinion pieces, including the one published by Espinoza in 2019. Ward said the work is critical because regulators and readers otherwise never would hear from small businesses. In more recent months, 3C has amped up its letter-writing campaigns, dispatching missives targeting Texas and other states now investigating large technology companies. Its letter to Texas Gov. Greg Abbott in May, signed by 200 members, even said the state should not penalize Big Tech amid the coronavirus crisis. “During a pandemic, when many storefronts are shuttered and businesses that are still running are operating entirely online, it is the wrong time to demand changes in digital technology operations and business models,” they said. In doing so, however, Ward has stressed his organization’s independence. “We don’t lobby on their behalf,” he said. “And we’re not advocates for their larger positions.” Paywall Source Free Mirror Source
  11. TikTok, meanwhile, is joining the EU's code of conduct. The European Union wants tech giants to do more than they have to counter fake news for users on the continent. EU foreign policy lead Josep Borrell and European Commission values and transparency VP Vera Jourova have said Facebook, Google and Twitter should produce monthly reports on their efforts to stamp out disinformation campaigns. The officials are not only concerned about attempts by Russia and China to influence European politics, but the direct damage to people from COVID-19 misinformation and anti-vaccination myths. “Disinformation does not only harm the health of our democracies, it also harms the health of our citizens,” Jourova said. The hoped-for reports would detail both efforts to limit COVID-19 falsehoods, including ads, as well as steps taken to promote trustworthy material. Internet companies might not need that much prodding, mind you. Jourova added that TikTok was joining the EU’s voluntary Code of Practice on Disinformation (where Facebook, Google, Mozilla and Twitter are already members) to fight fake news. TikTok is promising to foster truth and transparency in ads, enforce policies against false identities and bots, prioritize “authoritative” info when relevant and help researchers looking into disinformation campaigns. This won’t necessarily lead to a significant shift in TikTok’s existing approach, but it reflects the social video service’s attempts to reassure the world that its international content policies aren’t subject to Chinese government influence. Source
  12. old man yells at cloud dot jpeg — Facebook’s new design turns your PC into an enormous phone It's hard to like most of the new layout—but having a dark mode is nice. The new design does at least include a dark mode. I generally prefer bright layouts, but if you have a bad habit of Facebooking in bed late at night, this is less likely to prevent sleepiness. Jim Salter Sometime last night, Facebook's new design layout rolled out to my personal account. It assured me that I could switch back if I didn't like it, so I immediately tried it out. I just as immediately switched it off and never looked back. At least, I never looked back until this afternoon, when the powers that be at Ars said—and I quote—"feel free to hate review it, if you want." I am a professional if nothing else, so this is not a hate review. But I must admit it's a "visceral dislike" review, and perhaps some readers will appreciate—or at least not mind—the things that turn me off so strongly about the new layout. If you like landscape browsing on a smartphone, you’ll like the new Facebook I do the majority of my Facebook browsing on a desktop PC—a serious desktop PC, for serious people, with dual 24-inch monitors at 1080p. I strongly dislike layouts that present me with less information and waste a ton of real estate, and Facebook's new layout does exactly that, in spades. The old Facebook layout generally fits at least two or three posts per page, at least once you scroll past the giant Stories banner. It also allows you to keep multiple Facebook Messenger chats open, in separate floating divs at the bottom of the screen. Once you switch to the new Facebook layout, the first overwhelming impression is one of supreme embiggenment. The text sizes are larger, the elements are more widely spaced, and very little fits on the screen at once. The Stories banner is, thankfully, less obnoxiously huge than it was on the old layout.
That's a matter of sheer necessity, unfortunately; it has to be smaller in order to fit even half of a post underneath it on the new scale of things. I was unable to find a single pair of posts on my timeline that would fit on one page under the new layout—and at least half of them were at least two pages tall, all by themselves. The Messenger functionality is also diminished, with only a single chat visible at any one time. If you click another contact or group on your contacts list, your current chat disappears, to be replaced with the new one. Notifications are less clunky There isn't much that I like about the new layout, and there's quite a bit that I detest. I will say that the new notification list is nice, though. The old notifications list just sort of sprawled over whatever parts of the page it happened to land on, basically making the whole thing useless while notifications are visible. In the new layout, the notification area is pinned all the way to the right, leaving posts uncovered. In the new design, you can also continue to scroll, browse, and comment on posts in your feed while Notifications are up and pinned to the far right. Messages, unfortunately, are covered—and you only get a single message visible in the new layout, whether the notification list is covering it or not. So you'll need to close the notifications if you want to chat with your friends—and it's considerably more difficult to chat with multiple friends simultaneously, since only one message can be visible at a time. Conclusions I suspect that this new layout—much like the auto-moderator bug that began pseudo-randomly eating valid news posts this week—is another instance of founder and CEO Mark Zuckerberg's famous motto, "Move fast and break things." The design feels like what happens when you try to unify a layout codebase between phone-based apps and the desktop website but don't bother with much in the way of validation or solicitation of user feedback. 
The new design is probably here to stay and will eventually supplant the current version of "classic" entirely—but I really hope it sees some more refinement first. Until then, I'll be sticking with the old layout.

The Good
- Now with dark mode!
- The notifications list is much better pinned to the far right instead of occupying roughly the third fifth of the screen from the left
- Switching to the new layout and back again is instantaneous and easy
- Within the new layout, switching dark mode on and off again is equally quick and easy

The Bad
- Only one post on the screen at a time. Or less—usually, less
- Only one message on the screen at a time—no matter how high your resolution

The Ugly
- Huge text is huge
- Wasted space is wasted
- The layout itself, basically

Source: Facebook’s new design turns your PC into an enormous phone (Ars Technica)
  13. SAN FRANCISCO (Reuters) - Facebook Inc fired an employee who had criticized Chief Executive Mark Zuckerberg’s decision not to take action against inflammatory posts by U.S. President Donald Trump this month, citing his tweet challenging a colleague’s silence on the issue. Brandon Dail, a user interface engineer in Seattle, wrote on Twitter that he was dismissed for publicly scolding a colleague who had refused to include a statement of support for the Black Lives Matter movement on developer documents he was publishing. Dail sent the tweet a day after joining dozens of employees, including the six other engineers on his team, in abandoning their desks and tweeting objections to Zuckerberg’s handling of Trump’s posts in a rare protest at the social media company. “Intentionally not making a statement is already political,” Dail wrote in the tweet, sent on June 2. He said on Friday that he stood by what he wrote. Facebook confirmed Dail’s characterization of his dismissal, but declined to provide additional information. The company said during the walkout that participating employees would not face retaliation. Dail did not respond to a request for comment. Trump’s posts which prompted the staff outcry included the racially charged phrase “when the looting starts, the shooting starts” in reference to demonstrations against racism and police brutality held after the May 25 killing of George Floyd, a black man who died in police custody in Minneapolis. Twitter affixed a warning label to the same post, saying it glorified violence. Facebook opted to leave the post untouched. Zuckerberg defended his decision at a tense all-hands meeting with employees that week.
During the meeting, Dail tweeted that it was “crystal clear today that leadership refuses to stand with us.” Dail again voiced objections this week after both Facebook and Twitter declined to take action against a Trump post that contained an unsubstantiated conspiracy theory about Martin Gugino, a 75-year-old protester who was critically injured by police in Buffalo, New York. “Trump’s attack on Martin Gugino is despicable and a clear violation [of] Facebook’s anti-harassment rules. It’s again extremely disappointing that we (and Twitter) haven’t removed it,” he said. Internal dissent is often encouraged at Silicon Valley tech giants, but the companies have been accused of penalizing workers who organise and air complaints publicly. Alphabet’s Google fired at least five workplace activists late last year, while Amazon dismissed critics of its warehouse conditions during the coronavirus pandemic. Both companies denied firing employees for speaking out. Source
  14. Facebook's New Privacy Controls Are Long Overdue The new Manage Activity feature will let you archive and bulk delete posts for the first time. Throughout its 16 years of existence, Facebook has struggled to provide the privacy controls users really want and need to safeguard the data they post to the platform. It's been a challenging project, with plenty of major detours along the way. But today Facebook is announcing a new tool for managing your posts. It may be the most intuitive version of the controls yet, because it's basically what Gmail has offered for email since the year Facebook launched. The new feature, known as Manage Activity, will create the concept of an "archive" on Facebook, allowing you to move any or all of your past posts to a secret new home that only you can see. Manage Activity will also introduce a Facebook Trash folder so you can delete posts more easily. The new feature is rolling out in Facebook's mobile apps first. "Whether you're entering the job market after college or moving on from an old relationship, we know things change in people’s lives, and we want to make it easy for you to curate your presence on Facebook," the company says in a blog post. Archive and Trash are two concepts most people are familiar with from email. As with Gmail, posts you move to the trash will stay there for 30 days and then be deleted unless you manually eliminate them sooner or reinstate them. And Manage Activity is introducing batch actions for multiple posts at once so you can view and organize in bulk, rather than going post by post. You can filter by date ranges, types of posts (Photos and Videos, Posts From Other Apps, etc.), specific people, and other categories, and then select individual posts, batch-archive, or batch-delete as needed. If you archive or delete a post that people are tagged in, they’ll lose access to the post.
Facebook says they won't be notified of the change, though, so you can archive stealthily. You can't archive other people's posts that you are tagged in—this is only for content you've shared. Facebook already offers a Select Privacy menu for each of your posts where you can choose who can view the content; your options include Public, Friends, Specific friends, and so on. In some ways, archiving a post will be similar to choosing Only Me, in the sense that only you will be able to view the post and no one else. But if you've been using Only Me as a makeshift archiving feature, Manage Activity isn't going to help put everything in one place. All archived posts will show up in one place, but Facebook says there isn’t a way to view the collection of posts you've marked Only Me. After dedicating 2018 to figuring out how to "fix" Facebook, founder and CEO Mark Zuckerberg published "A Privacy-Focused Vision for Social Networking" in March 2019. In it he discussed a problem that every Facebook user has likely been aware of for more than a decade. "One challenge in building social tools is the 'permanence problem,'" Zuckerberg wrote. "As we build up large collections of messages and photos over time, they can become a liability as well as an asset. For example, many people who have been on Facebook for a long time have photos from when they were younger that could be embarrassing. But people also really love keeping a record of their lives." Midway through 2018, Zuckerberg promised a feature that would let users clear their browsing history from Facebook. It took more than a year for the tool, Off-Facebook Activity, to finally roll out in August 2019. The feature provides an accounting of the third-party websites and apps that share your visit history with Facebook and then gives you the option to clear that out. It also lets you block Facebook from using your browsing history for targeted ads. 
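The archive/trash lifecycle the article describes (hide a post from everyone but yourself, restore it, or let it auto-delete after 30 days in the trash) can be sketched as a tiny state model. This is a minimal illustration of the behavior as reported, not Facebook's implementation; all class and method names are hypothetical:

```python
from datetime import datetime, timedelta

# Per the article, trashed posts are kept for 30 days before deletion.
TRASH_RETENTION = timedelta(days=30)

class Post:
    """Hypothetical model of a post's archive/trash lifecycle."""
    def __init__(self, content):
        self.content = content
        self.state = "active"      # visible to its normal audience
        self.trashed_at = None

    def archive(self):
        # Archived posts are visible only to the owner.
        self.state = "archived"
        self.trashed_at = None

    def trash(self, now):
        # Trashed posts start the 30-day deletion countdown.
        self.state = "trashed"
        self.trashed_at = now

    def restore(self):
        self.state = "active"
        self.trashed_at = None

    def purge_if_expired(self, now):
        # After 30 days in the trash, the post is deleted automatically.
        if self.state == "trashed" and now - self.trashed_at >= TRASH_RETENTION:
            self.state = "deleted"

# Usage: trash a post, then simulate the 30-day purge.
p = Post("old vacation photo")
p.trash(datetime(2020, 6, 1))
p.purge_if_expired(datetime(2020, 6, 15))   # still within retention
assert p.state == "trashed"
p.purge_if_expired(datetime(2020, 7, 2))    # past 30 days, auto-deleted
assert p.state == "deleted"
```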
All the way back in November 2011, though, Zuckerberg was grappling with similar issues. "Facebook has always been committed to being transparent about the information you have stored with us—and we have led the internet in building tools to give people the ability to see and control what they share," he wrote. "But we can also always do better. I’m committed to making Facebook the leader in transparency and control around privacy." Nine years after those remarks, and 16 years after its founding, Facebook is somehow still in the process of going back to basics with its privacy features. Facebook's New Privacy Controls Are Long Overdue
  15. It plans to label ads from those outlets, too. Roughly eight months after Facebook said it would begin labeling state-backed media, it’s finally applying those labels. Beginning today, Facebook will label media outlets that are wholly or partially under the editorial control of their government. The labels will appear in the Ad Library Page, on Pages and in the Page Transparency section. Users in the US will begin to see them on posts in News Feed, too. Facebook also plans to label ads from state-controlled media outlets. In the US, Facebook will block such ads altogether out of an “abundance of caution,” even though it admits they rarely appear. That’s part of an attempt to protect elections in 2020 from foreign influence, Facebook says. Facebook claims it worked with more than 65 experts on media, governance and human rights to determine what makes a media outlet “state-controlled.” It looks at each outlet’s mission statement, ownership structure, editorial guidelines, funding and more. It also considers whether organizations have measures in place to protect editorial independence. Facebook did not say why it took so much longer to apply the labels than expected, but it may have had something to do with the fact that different state-backed media outlets have different levels of control. For instance, Russia’s RT and China’s Xinhua have been used to spread propaganda, but the UK’s BBC and Canada’s CBC maintain more distance from their respective governments. While Facebook did not say which media outlets it will label, Qatar’s Al Jazeera previously spoke out against the labels. The outlet is privately owned but has a member of the Qatari royal family as a board chairman. Al Jazeera said Facebook would cause “irreparable harm” if it applied the state-backed label. Facebook says organizations can submit an appeal if they believe the label is applied in error and that it will continue to refine its approach moving forward. Source
  16. Facebook is introducing a dark mode for mobile Users reported seeing the dark mode interface on iOS After launching a dark mode for its desktop interface, Facebook confirms it is testing a dark mode for its mobile apps as well. As first noted by Social Media Today, Facebook has made the dark mode available to a very small percentage of people globally, a spokesperson told The Verge in an email Sunday. The mobile version of the dark mode Facebook introduced last month for desktop is “meant to cut down on glare,” particularly in low-light environments, the spokesperson said. There’s no timeline yet for when the dark mode will be available to all mobile users, however. Users who already have the new dark mode on mobile tweeted screenshots of what it looks like. It’s a bit surprising it’s taken this long for Facebook to roll out a mobile dark mode; its Instagram and WhatsApp apps already have dark mode, as does Facebook Messenger. Twitter has had a version of night mode for its Android and iOS apps for some time, and even Google rolled out a dark mode for its app earlier this year. Low-light and dark mode varieties — which allow users to change the background color of an app window to black — are popular not only because they make apps easier to view for some users and are a bit more aesthetically pleasing, but because most dark mode versions can help preserve a device’s battery life as well. UPDATE June 28th 2:24PM ET: Added confirmation from Facebook spokesperson Facebook is introducing a dark mode for mobile
  17. Google, Facebook, and Twitter halt government data requests after new Hong Kong security law The companies are reviewing a new security law that gives China power to stifle dissent Google, Facebook, and Twitter are pausing the processing of data requests from the Hong Kong government as they review a new security law that went into effect on July 1st. Google put its pause into place as soon as the law took effect last Wednesday. “[W]hen the law took effect, we paused production on any new data requests from Hong Kong authorities,” a Google spokesperson told The Verge in an email, “and we’ll continue to review the details of the new law,” the spokesperson said. Twitter also halted its handling of government requests as of July 1st, with Facebook announcing its pause on Monday, The New York Times reported. Social media platforms typically produce private user information in response to valid court orders, depending on the legal process in various countries. But under this new position, all the companies will, at least temporarily, ignore the requests coming from the government of Hong Kong. The new policies are in response to China’s new national security law in Hong Kong, which was first proposed in May. Hong Kong has traditionally enjoyed significant independence from mainland China, but the central Chinese government has tightened restrictions on speech in Hong Kong in recent months, bringing a gradual end to the “one country, two systems” principle. China’s push toward more control has led to widespread protests across Hong Kong, which began last year.
In particular, the new security law gives China the power to limit political dissent against the Communist Party, making it unlawful to engage in “secession, subversion, organization and perpetration of terrorist activities, and collusion with a foreign country or with external elements to endanger national security.” Those powers are particularly relevant for social platforms, which may be hosting the now-criminalized subversive activities. Google, Facebook, and Twitter have all been banned in China for several years, part of the so-called “Great Firewall,” under which government censors and monitors track online activity. The new security law has already compelled several political opposition parties in Hong Kong to disband, NPR reported, and is expected to further chill political dissent against Beijing in Hong Kong. “We believe freedom of expression is a fundamental human right and support the right of people to express themselves without fear for their safety or other repercussions,” a Facebook spokesperson said in an email to The Verge. Twitter says it is reviewing the new law to assess the implications, adding many terms of the new law are “vague and without clear definition,” a spokesperson wrote in an email to The Verge. “Like many public interest organizations, civil society leaders and entities, and industry peers, we have grave concerns regarding both the developing process and the full intention of this law.” Facebook has a process for reviewing government requests, which takes into account its own policies and local laws as well as international human rights standards, the spokesperson added.
“We are pausing the review of government requests for user data from Hong Kong pending further assessment of the National Security Law, including formal human rights due diligence and consultations with international human rights experts.” Facebook has offices in China and uses Chinese suppliers to manufacture some of its hardware, including its Oculus VR headsets and its Portal video chat devices. Facebook CEO Mark Zuckerberg has attempted to mend relations with China in the past, meeting with Communist Party leaders while in Beijing for an economic forum in 2016. More recently, he’s pushed concerns about China setting the terms for online engagement. “If another nation’s platform sets the rules,” Zuckerberg said last year, “our nation’s discourse could be defined by a completely different set of values.” Google, Facebook, and Twitter halt government data requests after new Hong Kong security law
  18. When we think about companies “harvesting our data,” chances are we’re thinking of The Big Tech Names doing that harvesting. But sometimes, even these companies get taken for a bit of a ride. Case in point: earlier today, the adtech company OneAudience got slammed with a hefty Facebook lawsuit claiming that it peddled “malicious” software to app developers that would pull sensitive intel from an app downloader’s Facebook, Twitter and Google accounts—behind all of these companies’ backs. “These apps were distributed online to app users on various app stores, including the Google Play Store, and included shopping, gaming, and utility-type apps,” the lawsuit states. In Facebook’s case, users that logged in using their Facebook account handed over their “name, email address, locale (i.e. the country that the user logged in from), time zone, Facebook ID, and, in limited instances, gender.” This is the latest leg of the Facebook versus OneAudience saga, after the company was first found to be harvesting this data late last year. At the time, Facebook requested “an audit” of the company’s data-sucking behavior, which the lawsuit states OneAudience didn’t comply with. In lieu of this “compliance”, OneAudience shut down this software, stressing that the data of the hundreds of users affected was “never intended to be collected” and was never used. “We believe that consumers should have the opportunity to choose who they share their data with and in what context,” OneAudience said at the time. Naturally, this data was collected for the purposes of targeted advertising. While the software in question—OneAudience’s mobile-specific SDK, or software development kit—might be no more, the company is still touting its ability to target “real, verified users” to be pelted with a given ad campaign.
Older pitch decks from the company suggest that aside from the mobile-specific intel, it also profiled users based on where they lived and the language they spoke. Ultimately, as pointed out by Recode, this lawsuit opens a can of worms about the complexity of the data-sharing chain of command. While companies like Facebook can control the ways their own ecosystem operates, its sheer reach means that it can’t keep an eye on every partner at all times—and just short of filing lawsuits or plugging up obvious loopholes as they arise, it’s unclear whether their commitment lies more with advertisers or the consumers that they’re targeting. Source
  19. Facebook’s redesign for its Messenger app proves two things: 1) Facebook’s leaning hard into its Stories format after the feature’s success on Instagram; and 2) Yes, those chatbots really were the worst. Facebook’s ditching the Discover tab, a hub for various business tools and games, among other updates to help streamline the private messaging app’s design. The company first announced plans for the change in August, and now TechCrunch reports that Facebook has commenced the switchover this week, so you might have already noticed the update. It’s a complete 180 from Facebook’s previous push to make the app a one-stop-shop for “connecting with all the people and businesses you care about,” with added bloat like games, shopping, and confusing chatbots to serve as automated middlemen between users and businesses. Now Facebook’s boiled it down to just two tabs, Chats and People, with the latter defaulting to a page for Stories, the 24-hour photo and video highlight reel Facebook more or less ripped off Snapchat (albeit to greater success). A sub-tab adjacent to Stories shows which friends are currently online. Given the success Facebook’s had so far with Instagram’s Stories feature, which averages 500 million daily users as of January, it’s no wonder they’re bumping up this feature’s real estate. And, of course, since Stories also display ads, more eyes on users’ Stories means more revenue for Facebook. Though demoted, any content previously on the Discover tab isn’t gone for good. Now you’ll just have to access it via the Messenger’s search bar. Though, with users now having to purposefully seek them out, businesses may be turned off of any ideas about expanding on the platform. Facebook did not immediately respond to Gizmodo’s request for comment. That said, a company spokesperson told TechCrunch that Facebook remains committed to promoting businesses on its platforms and that they still play an important part in Messenger. Source
  20. A good rule of thumb is to be skeptical of the privacy-forward changes Facebook touts to the public, and to deeply interrogate any of the quieter changes it rolls out behind the scenes, since those—surprisingly—often mark the real efforts that the company’s taking to be a little bit less of an invasive shitshow. In the latest change, Facebook is tightening its rules around the use of raw, device-level data used for measuring ad campaigns that Facebook shares with an elite group of advertising technology partners. As first spotted by AdAge, the company recently tweaked the terms of service that apply to its “advanced mobile measurement partner” program, which advertisers tap into to track the performance of their ads on Facebook. Those mobile measurement partners (MMPs) were, until now, free to share the raw data they accessed from Facebook with advertisers. These metrics drilled down to the individual device level, which advertisers could then reportedly connect to any device IDs they might already have on tap. Facebook reportedly began notifying affected partners on February 5 and all advertising partners must agree to the updated terms of the program before April 22, according to AdAge. While Facebook didn’t deliver the device IDs themselves, passing granular insights like the way a given consumer shops or browses the web—and then giving an advertiser free rein to link that data to, well, just about anyone—smacks hard of something that could easily turn Cambridge Analytica-y if the wrong actors got their hands on the data. As AdAge put it: The program had safeguards that bound advertisers to act responsibly, but there were always concerns that advertisers could misuse the data, according to people familiar with the program. Facebook says that it did not uncover any wrongdoing on the part of advertisers when it decided to update the measurement program.
However, the program under its older configuration came with clear risks, according to marketing partners. Gizmodo reached out to Facebook for comment about the changes—we’ll update this story if they respond. A bit of background here: When you see ads on Facebook for—I don’t know, a giant furry suit—there’s a chance that the person advertising that furry suit didn’t do it alone. The company works with literally hundreds of marketing partners that can help that fur-vertiser every step of the way. A chunk of these partners specializes in “measurement” and “attribution”—in making sure that the right ad for the right fursuit gets seen by the right Instagram user at the right time. Folks in the attribution space are plugged into every major platform and a ton of major ad networks, aside from Facebook. An advertiser could go to one of these measurement partners and, to stick with our example, figure out which fursuit is driving the most e-commerce sales, or whether the way a retailer worded its ad might be scaring potential customers off. Device-level data can be a huge part of the appeal of working with MMPs. In the case of mobile measurement, an advertiser could use that data to figure out which members of its target market respond best to which kind of fursuit, how long it took those members to buy one of these things after seeing the ad, and where they made the eventual purchase. That same device-level data could also give an advertiser a heads up if a person, say, isn’t really feeling furry ads in their feed all the time, or if they’re really feeling these ads and are in danger of bankruptcy from buying out a warehouse of merch. Until now, this raw data could be passed freely from Facebook to its trusted ad tech partners, which could then share it with advertisers. Now, its partners can only use that data “on an aggregate and anonymous basis,” according to Facebook’s new terms of service for MMPs. 
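The practical difference between device-level and aggregate reporting can be sketched in a few lines. This is an illustration only: the field names and values here are invented, since Facebook's actual MMP data format is not public.

```python
from collections import Counter

# Hypothetical device-level rows of the kind MMPs could previously pass
# along: each row ties an ad interaction to a single device.
raw_rows = [
    {"device_id": "device-a", "campaign": "spring_sale", "converted": True},
    {"device_id": "device-b", "campaign": "spring_sale", "converted": False},
    {"device_id": "device-c", "campaign": "spring_sale", "converted": True},
]

# Under the new terms, partners may only report "on an aggregate and
# anonymous basis": counts per campaign, with device IDs dropped entirely.
aggregate = Counter()
for r in raw_rows:
    aggregate[(r["campaign"], r["converted"])] += 1

report = {
    "campaign": "spring_sale",
    "impressions": sum(aggregate.values()),
    "conversions": aggregate[("spring_sale", True)],
}
print(report)  # {'campaign': 'spring_sale', 'impressions': 3, 'conversions': 2}
```

The aggregate report still answers the advertiser's question (is this campaign converting?) but no longer carries anything an advertiser could join against its own device-ID lists.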
While the data here wasn’t as personal as names or addresses, it provided insights into the way an individual Facebook user responded to a piece of content, which could be just as useful for fursuit enthusiasts and political pundits alike—especially when they could potentially connect that to a given mobile device ID, which is unique to each phone. As one marketing exec told AdAge, “Facebook saw this as potentially a really big data leakage problem. Nothing was stopping the advertiser from syndicating this data; Facebook couldn’t control whether or not the advertiser leaked it.” With the ToS update, Facebook’s quashing that data-sharing chain and keeping advertisers from getting their mitts on potentially sensitive user data. The changes also prohibit those advertisers—or the marketing partners, ostensibly—from using these raw data points to create entirely new profiles of people off of the data that Facebook provided. It’s worth noting that this isn’t the first time that Facebook’s floated this idea. Way back in 2015, mobile marketers revolted when the company approached them with the idea of throttling the amount of device-level data they had access to, prompting Facebook to drop the proposal. Likewise, the new update is leaving a lot of these same parties less than chipper about their on-Facebook targeting prospects, but it looks like Facebook’s been beaten down by enough congressional hearings to hold strong this time around. Source
  21. Update: It turns out that Facebook has been well aware of this private WhatsApp chat flaw for months. Thanks to Twitter user @hackrzvijay, we know that Facebook was notified back in November 2019 about this security flaw. However, Facebook didn’t do anything about it. The Twitter user in question reported the problem to Facebook with the intention of receiving a cash bounty. In this tweet, the hacker posts a message from Facebook declining to give a bounty because the ability for anyone to find invite codes online for private WhatsApp chat groups is “an intentional product decision.” Facebook then says that it cannot control what Google and other search engines index, so its hands are tied. As far as we can tell, both Facebook and Google are still not talking publicly about this problem, but this Facebook message makes it seem as though Facebook doesn’t think there’s anything wrong with your private WhatsApp chat groups being easily accessible by anyone. Original article: According to a new report from Vice, private WhatsApp group invites might not actually be so private. Through some pretty basic Google searching, it’s relatively easy to gain access to private chat groups. Normally, private WhatsApp group chats are only accessible via an invite code that gets handed out by the moderators of the chat. These invite codes, though, are simply URLs with specific strings of text. It appears that Google is indexing at least some of these invites, which enables pretty much anyone with Google access to find them. Now, before you get out the pitchforks and start storming Google’s gate, from the outset this appears to be a WhatsApp problem (or, more specifically, a Facebook problem, as it owns WhatsApp). Google uses crawlers to index URLs across the internet, and it is very easy for websites and apps to place a line of code on pages that tells these crawlers not to index the information there. The likely reason behind this problem is that WhatsApp failed to do this. 
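For context, the "line of code" in question is a standard robots directive. A minimal illustration (the markup of WhatsApp's actual invite pages isn't public, so this is only what such an opt-out generally looks like):

```html
<!-- Placed in the <head> of each invite page: asks search engine
     crawlers not to list this page in search results. -->
<meta name="robots" content="noindex">
```

A robots.txt `Disallow` rule can additionally keep crawlers from fetching the pages at all, though Google documents that a URL blocked only by robots.txt can still appear in results if other sites link to it; the `noindex` directive is what actually keeps a discovered page out of search listings.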
Vice reached out to both Google and Facebook about this matter but didn’t receive a response. If you want to comb through Google Search to find out if your private WhatsApp group is indexed, just start with a “chat.whatsapp.com” string and then enter in some information specific to your chat. Vice did this and was able to find several chat groups related to sharing porn as well as a chat that describes itself as being for NGOs accredited by the United Nations. These chat groups listed out members’ names as well as contact information, in some cases phone numbers. This story will no doubt make the rounds today and WhatsApp and Facebook will need to respond soon. There’s about to be a lot of angry users. Source
  22. In 2018, following the Cambridge Analytica scandal, Facebook announced the “Download Your Information” feature, allowing users to download all the information that the company has on them since the creation of their account. All of it? It doesn’t seem so. When Facebook released the feature, concerns were quickly raised that the information was inaccurate and incomplete. Privacy International recently tested the feature to download all ‘Ads and Business’ related information (you can access it by clicking Settings > Your Facebook Information > Download Your Information). This is meant to tell users which advertisers have been targeting them with ads and under which circumstances. We found that the information provided is less than accurate. To put it simply, this tool is not what Facebook claims. The list of advertisers is incomplete and changes over time. Ads section on Facebook “Download Your Information” page. “Advertisers Who Uploaded a Contact List With Your Information” Among the data downloaded, users can see which advertisers have uploaded a list that contains their personal data. On the advertiser side, Facebook calls this the “Custom Audience Tool”; it allows a company to upload a list of “customers” in order to target them (or avoid them). To be effective, this contact list needs to contain a unique identifier for the person (see Facebook’s documentation). This is usually an email address, a phone number, an Advertising ID or a user’s Facebook UID, but it can go way beyond that. In the template provided by Facebook, advertisers can also include all sorts of information they might have, such as name, gender, country, street, zip code, age or even date of birth. Contact list template provided by Facebook This advertiser-facing feature assumes that companies are in possession of this information because of an existing relationship with you. Under GDPR, this means that the company uploading this information must have a legal basis for processing this data. 
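Facebook's advertiser documentation describes matching these identifiers after they are normalized and SHA-256 hashed. A minimal sketch of that preparation step (the sample row is invented and the helper name is ours, not part of Facebook's API):

```python
import hashlib

def normalize_and_hash(value: str) -> str:
    # Trim whitespace, lowercase, then SHA-256 hash: the preparation
    # Facebook's Custom Audience docs describe for identifiers like emails.
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

# One invented row of an advertiser's contact list.
row = {"email": " Jane.Doe@Example.com ", "phone": "14155550123"}
hashed_row = {field: normalize_and_hash(v) for field, v in row.items()}

# Only the hashes are uploaded; Facebook matches them against hashes of
# its own users' contact details to build the audience.
print(hashed_row)
```

Hashing means Facebook can confirm a match against data it already holds without the advertiser transmitting plaintext contact details, but, as the article notes, it tells the user nothing about why the advertiser had their details in the first place.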
From a user’s perspective, it means we should be able to properly check which companies think we are their “customers”. This is clearly part of our rights under data protection law to be informed about what data a company has on us. Convenience for advertisers, opacity for users Unfortunately for users, exactly how this tool is used to exploit their data is obscure and opaque. Facebook’s “Download Your Information” tool is supposed to let you see the companies that uploaded a list containing your data from the moment you created your account. Yet, after testing this feature at different points in time, we found that this is not the case. Download your information page with dates going from account creation to date of download Indeed, we’ve found that the list provided by Facebook varies from one month to the next, removing some companies that appeared previously and adding new ones. This goes hand in hand with a recent move by Facebook to limit the number of advertisers displayed on the site (without downloading anything, by visiting your Ads preferences when you are signed in) to the last 7 days. Altogether, this means that there is no way for users to know all the companies who uploaded a list with their data to Facebook. This prevents people from exercising their rights under GDPR - despite there being a clear obligation to facilitate the exercise of these rights. This is an issue we see both at the back end (in the Download Your Information tool) and at the front end in the ads that we see, where there is a lack of transparency from Facebook on how advertisers target you. The lack of information, and the difficulty of exercising rights, reinforces an opaque environment where people are unaware of how their data is gathered, shared and used to profile and target them. Off-Facebook activity, a new window into the opacity On 28 January 2020, Facebook announced the roll-out of another new feature called Off-Facebook activity. 
This tool is meant to let you “see and control the data that other apps and websites share with Facebook”. Unfortunately, this tool is once again a tiny sticking plaster on a much wider problem. While we welcome the effort to offer more transparency to users by showing the companies from which Facebook is receiving personal data, the tool offers little way for users to take any action. Do we Facebook? Do we? First, the information provided about the companies is again limited to a simple name, preventing the user from exercising their right to seek more information about how this data was collected. As users we are entitled to know the name and contact details of companies that claim to have interacted with us. If the only thing we see, for example, is the random name of an artist we’ve never heard of before (true story), how are we supposed to know whether it is their record label, agent, marketing company or even them personally targeting us with ads? Second, the details about the data transfer are extremely limited. Using Facebook’s Download Your Information once again only provides limited information about what Facebook receives (some events are marked with a cryptic CUSTOM label). There is also no information regarding how the data was collected by the advertiser (Facebook SDK, tracking pixel, like button…) or on what device, leaving users in the dark regarding the circumstances under which this data collection took place. Example of an off-Facebook activity detail in a downloaded archive Finally, this tool illustrates just how impossible it is for users to prevent external data from being shared with Facebook. Without meaningful information about what data is collected and shared, and what ways users have to opt out of such collection, Off-Facebook activity is just another incomplete glimpse into Facebook’s opaque practices when it comes to tracking users and consolidating their profiles. So what? 
Social media, and Facebook in particular, are going through a trust crisis. Years of failures and abuses have raised concerns about their ability to harness the power they give to advertisers (be they political or private) and to be transparent with their users in any meaningful way. In this context, failure to provide meaningful information, coupled with inaccurate and thus misleading claims about the “Download Your Information” tool (the information you download is supposed to be exhaustive), is yet another breach of trust. Online advertising is already a hot mess (see our work on AdTech), and while regulators are expected to take action, we need the companies dominating this business to lead by example. Facebook should ensure the tools it offers to its users are accurate and effective. The current state is unacceptable. Source
  23. “Zuck off”: Doctors, nurses, scientists rail against Zuckerberg Backlash against Facebook's policies continues. Facebook CEO Mark Zuckerberg testifying before Congress in April 2018. It wasn't his only appearance in DC this decade. Bloomberg | Getty Images San Francisco city officials are considering condemning the decision to name a local public hospital after Mark Zuckerberg—a move backed by nurses and doctors at the hospital, who have been railing against the Facebook co-founder and CEO since the hospital changed its name in 2015. San Francisco Supervisor Gordon Mar on Tuesday introduced a resolution to the board of supervisors that would condemn the Zuckerberg name. The resolution also urges the city to establish clear rules on naming rights that reflect the city’s “values and a commitment to affirming and upholding human rights, dignity, and social and racial justice.” Doctors and nurses at the hospital have been campaigning for the hospital to drop the name since it was first introduced in 2015, following a $75 million donation from Zuckerberg and his wife, Priscilla Chan, a pediatrician who used to work at the hospital. Over the years, hospital staff have expressed concern that the hospital is associated with Facebook and all of its problems and controversies—including, but not limited to, those related to privacy, unethical research, hate speech, and the dissemination of misinformation and disinformation. They have written op-eds in local papers, signed petitions, passed out buttons reading “Zuck off,” and circulated letters. In one 2018 demonstration, nurses took a roll of blue masking tape to the large metal sign at the hospital’s main entrance, completely covering up “Zuckerberg” in the name. Sasha Cuttler, Instagram Though protest has died down as health care workers struggle to address the raging COVID-19 pandemic, opposition to the name is still smoldering. 
Robert Brody, an internal medicine physician at the hospital, circulated an email last month asking hospital staff to consider removing “Zuckerberg” from email signatures, presentations, and research papers, according to a report by Stat. He promotes the idea with the tagline “X to the Z.” “Whether we like it or not, Zuckerberg’s name is attached to our institution,” Dr. Brody wrote in the email. “Looking into a future desperate for institutional funding, our leaders are unlikely to support any effort to change the official name back to San Francisco General Hospital. But that doesn’t mean we, who work here, have to use the Z name or letter.” Naming Rights The Zuckerberg San Francisco General Hospital and Trauma Center sign that was partly covered during a 2018 demonstration. Getty | Justin Sullivan In an email to Ars, the hospital’s chief communications officer, Brent Andrew, said that the hospital is not considering changing its name. The hospital released a statement saying that “the couple’s $75 million gift in 2015 allowed the hospital to acquire state-of-the-art technology we use every single day to save patient lives,” and cited the donors’ continuing support of renovations, improvements in patient care, and education. However, Andrew noted that the city’s board of supervisors is actually the body that has naming authority over the hospital. In fact, it was the board itself that approved the Zuckerberg name back in 2015. The supervisors at the time—who are almost entirely different from the supervisors that comprise the current board—noted in their approval of the resolution that it is “customary and the philanthropic standard for hospitals, whether private or public in nature, to recognize major philanthropic gifts from private individuals by providing naming in honor of such gifts, whether the naming is associated with an entire hospital, hospital and health system or a building or designated space within the institution.” 
With the resolution, the board agreed to change the name from the “San Francisco General Hospital and Trauma Center” to the “Priscilla and Mark Zuckerberg San Francisco General Hospital and Trauma Center.” However, in practice, the hospital almost exclusively goes by “Zuckerberg San Francisco General and Trauma Center” or just ZSFG. The resolution also stated that the new name “shall remain in place for 50 years.” It’s unclear if the board can break that agreement. Amid Facebook’s Cambridge Analytica scandal in 2018, another board member—Aaron Peskin—asked the city attorney to outline a procedure for removing Zuckerberg’s name from the hospital, according to the San Francisco Chronicle. The outcome of that inquiry was not released publicly. However, Andrew indicated to Ars that the board does have the power to change the name. Ongoing fight Meanwhile, opposition to the name continues at the hospital. Mike Dingle, a retired nursing aide who came up with the “Zuck off” slogan and buttons, suggested to Stat that he was working on “Zuck off” face masks. And the protest from hospital staff is just one front of the opposition that Zuckerberg and Facebook are currently facing. More than 1,000 companies have stopped buying ads on the social media platform as part of a Stop Hate for Profit campaign. The campaign accuses Facebook of a "long history of allowing racist, violent, and verifiably false content to run rampant on its platform." Separately, 260 scientists funded through the Chan Zuckerberg Initiative (CZI) or the CZ Biohub wrote an open letter to Zuckerberg last month urging him to improve Facebook’s policies on hate speech and misinformation. 
“The spread of deliberate misinformation and divisive language is directly antithetical” to the mission of CZI, they wrote, which is stated as using technology to “‘help solve some of our toughest challenges—from preventing and eradicating disease, to improving learning experiences for kids, to reforming the criminal justice system’ and ‘to build a more inclusive, just, and healthy future for everyone.’” Source
  24. Facebook is adding a Quiet Mode that silences push notifications on mobile A new digital well-being feature for Facebook’s main mobile app Photo by Amelia Holowaty Krales / The Verge Facebook announced an all-new “Quiet Mode” for its main mobile app on Thursday, which will pause “most” push notifications and remind you that it’s turned on if you try to open the app on your phone while the mode is still active. It’s not clear exactly which notifications will be exempted from the new mode; the company says it is legally required to send some, like privacy updates. This new mode is also not to be confused with the existing “mute push notifications” setting, which stops only push notifications, but not those within the app, for a designated amount of time. Instead, the new Quiet Mode will be found under Facebook’s “Your Time on Facebook” dashboard, which it added back in November 2018 amid a push by major platforms and device makers like Apple and Google to promote digital wellness features. Image: Facebook It’s part of a larger update to the dashboard that Facebook says will add week-over-week trends, usage tracking for daytime versus night, and a counter for the total number of visits. It’s rolling out now to iOS users and will arrive for Android users in May, the company says. The new Quiet Mode will work both manually and on a set schedule if you so choose. It will pause notifications from within the app, like those obnoxious Facebook Watch badges, and on a system level, so you won’t see numbered badges on iOS either. “As we all adjust to new routines and staying home, setting boundaries for how you spend your time online can be helpful. Whether it’s to help you focus on your family and friends, sleep without distraction or manage how you spend your time at home, we have tools that can help you find the right balance for how you use Facebook,” reads a new update to the company’s ongoing COVID-19 information blog post. 
In addition to Quiet Mode, Facebook says it’s also added new shortcuts to the notification settings and News Feed preferences panel, so “you can make the most of your time on Facebook by controlling the type of posts you see in your News Feed as well as the updates you receive.” Source: Facebook is adding a Quiet Mode that silences push notifications on mobile (The Verge)
  25. Messenger Rooms are Facebook’s answer to Zoom and Houseparty for the pandemic Facebook is greatly expanding its video chat offerings to keep up with rising demand Facebook is rolling out a suite of new products to expand its capabilities in video chat. The company today announced Messenger Rooms, a tool for starting virtual hangouts with up to 50 people and allowing friends to drop in on you whenever they like. It’s also doubling the capacity of video calls on WhatsApp from four people to eight, adding video calls to Facebook Dating, and adding new live-streaming features to both Facebook and Instagram. CEO Mark Zuckerberg announced the features in a live stream today. In an interview with The Verge, Zuckerberg said the new video features were built in line with the company’s shift toward creating more private messaging tools. “Video presence isn’t a new area for us,” he said. “But it’s an area that we want to go deeper in, and it fits the overall theme, which is that we’re shifting more resources in the company to focus on private communication and private social platforms, rather than just the traditional broader ones. So this is a good mix: we’re building tools into Facebook and Instagram that are helping people find smaller groups of people to then go have more intimate connections with, and be able to have private sessions with.” The moves come as the global pandemic has forced hundreds of millions of people to stay indoors and rely on digital tools for nearly all of their work, school, and play. More than 700 million people are now making calls on Facebook Messenger and WhatsApp every day. But competitors are also surging. Zoom, which began life as a simple tool for business videoconferencing, rocketed from 10 million users in December to more than 300 million today. 
Houseparty, an app for virtual hangouts with friends that Facebook had previously cloned before abandoning the project last year, now routinely hovers at the top of app store download charts. It gained 50 million users over the past month. The rapid growth of alternative social products has always been cause for concern at the famously paranoid Facebook, which devotes significant resources to monitoring emerging social products and then acquiring the companies behind them or copying their features. While we are still in the first few months of the COVID-19 pandemic, it’s already clear that consumer behavior is changing to cope with it — and that Facebook’s existing product lineup has not met demand. Of everything announced today, Messenger Rooms promises to be the most significant. The feature, which Facebook says will be available in the company’s products globally sometime in the next few weeks, will allow up to 50 people to join a call. The room’s creator can decide whether it’s open to all or lock it to prevent uninvited guests from joining. You’ll be able to start a room from Messenger and Facebook to start. Later, rooms will come to Instagram Direct, WhatsApp, and Portal. Guests can join a room regardless of whether they have a Facebook account. While in a room, you can play with Facebook’s augmented reality filters or swap out your real-life background for a virtual one. Some backgrounds offer 360-degree views of exotic locales, the company said. And a new slate of AR filters will help brighten up dark rooms or touch up users’ appearances. Room calls are not end-to-end encrypted, but Facebook says it does not view or listen to calls. The creator of a room can remove participants at any time, and rooms where illicit behavior is taking place can be reported to Facebook. (WhatsApp video calls are end-to-end encrypted, offering an extra layer of protection to users.) 
Zoom saw a surge in malicious behavior as it became the world’s default meeting app, with racist, bigoted, and pornographic “Zoombombings” roiling meetings all over the world. Zuckerberg said Messenger Rooms were designed with strong privacy controls and that the feature’s reliance on connections with your real-life friends and family makes it less likely to be used to harass people. For groups where people don’t know each other as well, moderators will be able to kick people out of rooms. “A lot of the time that I’ve spent on this over the last few weeks as we’ve been building this out and getting ready to ship has been on privacy, security, integrity reviews, and how do we make sure that a lot of the use cases that have been problematic around Zoom are not going to be things that are replicated here,” he said. Facebook Live will add back a feature called Live With that enables users to invite another person to stream with them. The donate button will become available on live streams, allowing users to raise money directly from their broadcasts in the countries where fundraisers are available. Instagram will begin allowing users to post live streams to IGTV as well as to Instagram stories after they finish a stream, and Instagram Live broadcasts will become available on the desktop for the first time. Users with Facebook’s Portal display will also get the ability to go live to pages and groups, the company said. Portal users can already go live from their own profiles. But live-streaming also has a dark side, and Facebook faced criticism after introducing live-streaming when it was used to broadcast acts of violence. The company removed 1.5 million copies of the Christchurch terror attack video, which was broadcast live on Facebook, in the days after the attack. Rooms will be available in Messenger today in nearly all countries where Facebook is available, the company said. 
It will become available inside the Facebook app in a handful of unspecified countries today and roll out globally within coming weeks. Source: Messenger Rooms are Facebook’s answer to Zoom and Houseparty for the pandemic (The Verge)