Showing results for tags 'social media'.

Found 21 results

  1. Google is bringing back human moderators to oversee YouTube content, taking over from automated systems that were given more responsibility at the onset of the COVID-19 pandemic. YouTube revealed in late August that in the three months prior, 11.4 million videos had been removed from the platform for violating its Community Guidelines. This is the highest number of videos taken down from YouTube over a three-month period since the service launched in 2005, and it was attributed to the heavier reliance on A.I. as the pandemic prevented human reviewers from going to work. YouTube admitted, however, that some of the videos may have been erroneously removed. “One of the decisions we made [at the beginning of the pandemic] when it came to machines who couldn’t be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in a slightly higher number of videos coming down,” YouTube’s chief product officer, Neal Mohan, told the Financial Times. The Google-owned company revealed that it has reversed the decision to take down 160,000 videos, the Financial Times reported. Normally, less than 25% of appeals are successful, but under A.I. moderation, the percentage of successful appeals has increased to 50%. However, while Mohan says that more human moderators will start overseeing YouTube content, it remains unclear how that will happen amid the ongoing pandemic. Digital Trends has reached out to Google for additional details on their working arrangements, and we will update this article as soon as we hear back. Source
  2. Snapchat on Monday launched Spotlight, a video tab within its app that, like TikTok, distributes videos based more on how popular they are than on who created them. Facebook in August launched its TikTok competitor, called Reels. Driving the news: Snapchat's news comes days after Twitter said it would be adding "Fleets," which are basically Snapchat stories for people who tweet. (Nearly every social media app has launched some version of Stories in the past few years.) The big picture: Tech platforms used to focus on ways to create wildly different products to attract audiences. Today, they all have similar features, and instead differentiate themselves with their philosophies, values and use cases. Instagram launched 10 years ago as a photo-sharing app for artists and design buffs, but now includes everything from live video to shopping to help those creators market and sell their ideas and goods. Snapchat was created as a private messaging app between close friends, but today includes professionally-curated content, games and maps to help close friends develop deeper relationships. Twitter was created as a public ideas platform, but over the years it's made it easier to share images, videos and audio to help users discuss current events. What to watch: As social media companies adopt similar features, expect them to emphasize how their core values shape their versions of those features. For example, Snapchat was deliberate about making sure Spotlight would be set to private mode by default and wouldn't include public counts of comments, likes or shares. Source
  3. More Americans get their news from social media than from newspapers, a Pew Research study has found, tipping the balance in that direction for the first time. As recently as last year, the sides were roughly equal – news via social media was about the same portion as news via print newspapers. According to the report posted on the Pew Research Center website, 20% of U.S. adults say they “often get news via social media,” a slightly higher figure than the 16% who favor newspapers. Pew notes that social media’s recent gain over print follows years of “steady declines” in newspaper circulation, combined with “modest increases” in the portion of Americans using social media. The survey was conducted earlier this year. TV remains the single most popular news consumption platform, though, despite a steady decline in recent years: 49% of adults get their news from TV. Coming in at #2 is news websites (33%), with radio at #3 (26%). Social media and print round out the top five. When combined, news websites and social media are closing in on TV: 43% to TV’s 49%. Breaking down the types of TV news, Pew found that local TV is the most popular, with 37% of adults going that route, while 30% use cable TV most often and 25% turn to national evening network news programs. Pew asked about streaming devices for the first time in its annual study, finding that 9% of U.S. adults often get news from a streaming device on their TV. The majority of those using streaming devices (73%) don’t do so exclusively: they also use broadcast or cable TV for news. As would be expected, age plays a significant role in news consumption. Americans 65 and older are five times as likely as 18- to 29-year-olds to get their news from TV. Only 16% of that younger demo say they often get their news from TV; about 36% of 30- to 49-year-olds do so. On the flip side: that youngest demo is four times as likely to get news from social media as the oldest demo. 
Elders are the only age group in which print has held its popularity, with four in 10 often getting news from dead trees. The middle group showed a preference for websites, with 42% of the 30- to 49-year-olds going online or using apps. Among the younger demo, that percentage is 27%, trailing the most popular platform – social media (36%). Only 2% of the youngest adults turn to print. Another Pew finding: younger and middle-age Americans are far less likely than their elders to rely on only one platform. No more than half of those under 49 rely on a single platform. Source
  4. Conservative apps deliver curated partisan news feeds on what are effectively private social media platforms, the New York Times reports. They're "creating a safe space for people who share a viewpoint, who feel like the open social networks are not fun," says the developer of such apps. Republicans who feel Silicon Valley harbors a bias against conservative views are developing their own online networks, according to a Saturday report in The New York Times. The Times highlighted a new generation of mobile applications made for the National Rifle Association and the Great America pro-Trump political action committee (PAC), which deliver curated partisan news feeds on what are effectively private social media platforms. These apps are gaining attention at a time when Silicon Valley has been repeatedly accused of using its power to stifle right-leaning voices. "People with center-right views feel like the big social platforms, Facebook and Twitter, are not sympathetic to their views," said Thomas Peters, CEO of uCampaign, the Washington startup behind the NRA and Great America apps. Peters added that the apps are "creating a safe space for people who share a viewpoint, who feel like the open social networks are not fun places for them." While looking to propagate their message outside the offerings of Big Tech ahead of next month's midterm election, these mini-platforms nonetheless aim to harness the enormous reach of those networks. The Times noted that the right-leaning platforms offer options to post messages on Facebook and Twitter that are scripted by the campaigns. Democratic candidates, including former President Barack Obama, have used consumer-facing apps to promote their political campaigns and advocacy. This year, Democratic campaigns are also embracing peer-to-peer text messaging, believed to engage younger voters more than stand-alone candidate apps, the Times said. 
Meanwhile, uCampaign recently started its own peer-to-peer texting platform, RumbleUp, for conservative campaigns. The full report can be found on the New York Times' website. Source
  5. The year 2018 will go down in history as the one where social networking platforms made country-specific changes and agreed to store user data belonging to Indians within the country. With great power comes great responsibility. The quote made popular by the iconic comic series 'Spider-Man' sums up the challenges that social media platforms like WhatsApp and Facebook are facing in India. They have been accused of being carriers of hate messages and fake news that incited mob violence. And now they stare at the prospect of stricter government rules, greater accountability, and regulatory scrutiny. These platforms, for some of which India is the biggest consumer base outside of their home country, can see the writing on the wall very clearly -- follow the rules of engagement if you want to be in the world's fastest growing economy. The year 2018 will go down in history as the one where social networking platforms not only made country-specific changes -- labeling forwarded messages, limiting the number of people a user can send a message to at one go, and launching public awareness campaigns against fake news -- but also agreed to store user data belonging to Indians within the country. Globally, the tech and social giants scrambled to mollify users with better control of their digital profiles and data trails, as they faced backlash over data breaches. The Indian market was no different. Earlier this year, Facebook came under the regulatory glare after a global data leak scandal hit about 87 million users. British data analytics and political consulting firm Cambridge Analytica was accused of illegally harvesting the personal information of millions of Facebook users to help political campaigns and influence polls in several countries. Law and IT Minister Ravi Shankar Prasad warned the US social media giant of "stringent" action for any attempt to influence polls through data theft, even threatening to summon its CEO Mark Zuckerberg, if needed. 
The IT ministry slapped two notices on Cambridge Analytica and Facebook over the data breaches. Facebook admitted that nearly 5.62 lakh (562,000) people in India were "potentially affected" by the incident and rushed to tighten processes to prevent a repeat. But Cambridge Analytica continued to be evasive, and in the middle of the year the Centre asked the CBI to probe the alleged misuse of Indian Facebook users' data by the British political consultancy. Facebook, meanwhile, to bring transparency to political advertisements in the run-up to the 2019 general elections, is making it compulsory for advertisers to disclose their identity and location before any such ad material can run on the popular social media platform or Instagram. Twitter, too, intensified its crackdown on fake and automated accounts and began removing suspicious accounts from users' followers to give a "meaningful and accurate" view of follower counts. But it was Facebook-owned WhatsApp that faced the maximum heat, after rumours circulating on the messaging platform incited mob fury and claimed over a dozen lives in various parts of the country. The toxic messages that spread on WhatsApp instigated riots in certain cases, as people forwarded and misinterpreted videos on the messaging platform. Following the government's warnings, WhatsApp recently named a grievance officer for India and announced the appointment of an India head -- a first for the country, which accounts for the most WhatsApp users in the world. It has launched a label that identifies forwarded messages and barred forwarding of messages to more than five people at one go. As the Supreme Court voiced concerns over irresponsible content on social media, the government rushed to propose changes to the IT Act's rules and released draft amendments which would require "intermediaries" to enable tracing of the originators of information when required by government agencies. 
In the political slugfest that ensued, the Congress alleged that if the amendments were cleared, there would be a tremendous expansion in the power of the "big brother" government over ordinary citizens, "reminiscent of eerie dictatorships". Some cyberlaw experts have equated the changes in rules to India's own anti-encryption law. The proposals require social media firms to deploy technology-based automated tools for proactively identifying, removing or disabling public access to "unlawful information or content". If approved, these changes will place social media platforms -- even those like WhatsApp which promise users privacy and encryption -- firmly under the government lens, forcing them to adopt stricter due-diligence practices. The amendments -- which come ahead of the general polls in 2019 -- propose that platforms would have to inform users to refrain from hosting, uploading or sharing any content that is blasphemous, obscene, defamatory, hateful or racially or ethnically objectionable, or threatens national security. When backed by a lawful order, these platforms will have to provide assistance as asked for by any government agency within 72 hours. The IT ministry has met Facebook, WhatsApp, Twitter, Google, and others to discuss the proposed changes, and public feedback has been sought by January 15. The seemingly infallible tech behemoths are already being equated with big oil and big tobacco in Western markets. The larger question is whether the shifting public perception and recent moves by the government to regulate these habit-forming, new-age platforms will change the very essence of social media, once considered a harbinger of free speech and individual rights. Source
  6. Republicans, Democrats, and Independents may not be able to agree on taxes, foreign policy, or immigration. But they increasingly agree that social media does more to hurt free speech and democracy than to help, according to a new poll from Axios. The survey of 3,622 adults was conducted by SurveyMonkey earlier this month. It showed that over the last year, the share of adults who thought that social media helped went from 53% to 40%. The ranks of those who said the platforms hurt jumped from 43% to 57%. Although people with different political party allegiances differed in their overall assessment of social media outlets, they all showed significant and similar shifts in their outlooks. Democrats who thought the platforms were good went from 61% to 50%; the number who thought they were bad jumped from 37% to 48%. At 52%, a majority of Republicans had already thought them a problem last year. That number now stands at 69%, while those with a positive take dropped from 45% to 30%. Independents were in between the other two groups. Negative takes jumped from 42% to 58%; the number who thought the platforms were good now stands at 39%, versus 55% last year. The changes in attitude come as one scandal after another has rocked the industry and Congress brought Twitter CEO Jack Dorsey and Facebook COO Sheryl Sandberg in for multi-hour hearings. A poll conducted for Fortune earlier this month found that Facebook is the least trustworthy of all big tech companies regarding the safety of user data. A recent article from the New York Times that drilled into the company’s responses to multiple crises, and the actions reportedly taken by Sandberg and CEO Mark Zuckerberg, led to renewed criticism. Even Apple CEO Tim Cook has now called regulation “inevitable” because free market responses failed. And there probably isn’t an app for that. Source
  7. NEW DELHI (Reuters) - Global social media and technology giants are gearing up to fight sweeping new rules proposed by the Indian government that would require them to actively regulate content in one of the world’s biggest Internet markets, sources close to the matter told Reuters. The rules, proposed by the Information Technology ministry on Christmas Eve, would compel platforms such as Facebook, its messaging service WhatsApp and Twitter to remove unlawful content, such as anything that affected the “sovereignty and integrity of India”. This had to be done within 24 hours, the rules propose. The proposal, which caught many holidaying industry executives off guard, is open for public comment until Jan. 31. It will then be adopted as law, with or without changes. The move comes ahead of India’s national election due by May and amid rising worries that activists could misuse social media, especially the WhatsApp messaging service, to spread fake news and sway voters. Industry executives and civil rights activists say the rules smack of censorship and could be used by the government of Prime Minister Narendra Modi to increase surveillance and crack down on dissent. Social media firms have long battled efforts by governments around the world to hold them responsible for what users post on their platforms. U.S. and India lobby groups, representing Facebook and other companies, have sought legal opinions from law firms on the impact of the federal proposal, and have started working on drafting objections to be filed with the IT ministry, four sources in the sector said. “The companies can’t take this lying down. We are all concerned, it’s fundamental to how these platforms are governed,” said an executive at a global social media company. An estimated half a billion people in India have access to the Internet. Facebook has about 300 million users in the country and WhatsApp has more than 200 million. Tens of millions of Indians use Twitter. 
The new rules, the sources said, would put privacy of users at risk and raise costs by requiring onerous round-the-clock monitoring of online content. Internet firm Mozilla Corp said last week the proposal was a “blunt and disproportionate” solution to the problem of harmful content online, and one which could lead to over-censorship and “chill free expression”. The IT ministry has said the proposal was aimed at only making social media safer. “This is not an effort to curb freedom of speech, or (impose) censorship,” Gopalakrishnan S., a joint secretary at India’s IT ministry said on Saturday when the ministry ran a #SaferSocialMedia campaign on Twitter. Facebook and WhatsApp declined to comment. A Twitter spokesperson said the company continues to engage with the IT Ministry and civil society on the proposed rules. “This will be like a sword hanging on technology companies,” said Nikhil Narendran, a partner specializing in technology law at Indian law firm Trilegal. TIGHT REGULATIONS Such regulations are not unique to India. Vietnam has asked tech companies to open local offices and store data domestically, while Australia’s parliament has passed a bill to force companies to give police access to encrypted data. Germany requires social media companies to remove illegal hate speech within 24 hours or face fines. Nevertheless, the proposal would further strain relations between India and global technology firms. They have been at odds since last year due to federal proposals requiring them to store more user data locally to better assist legal investigations. The new rules, called “intermediary guidelines”, also propose requiring companies with more than 5 million users in India to have a local office and a nodal officer for “24x7 coordination with law enforcement”. When asked by a government agency or through a court order, companies should within 24 hours “remove or disable access” to “unlawful” content, they stipulate. 
The rules also mandate companies to reveal the origin of a message when asked, which if enforced would deal a blow to WhatsApp which boasts of end-to-end encryption to protect user privacy. WhatsApp has battled criticism after fake messages about child kidnap gangs on its platform sparked mob lynchings in India last year. “You have created a monster, you should have the ability to control the monster,” a senior government official said, referring to WhatsApp. “We remain flexible in principle (to suggestions), but we definitely want them to be more accountable, especially the big companies,” the official said. Source
  8. Taking part in risky stunts — whether or not in front of a camera — is a practice that can occasionally end in disaster for those involved. YouTube certainly wants no part of such shenanigans and updated its guidelines to ram home the point. In a Q&A section posted on Tuesday, January 15, introducing its revamped guidelines, YouTube acknowledged that the video-streaming site is “home to many beloved viral challenges and pranks, like Jimmy Kimmel’s Terrible Christmas Presents prank or the water bottle flip challenge,” but said it had to make sure that “what’s funny doesn’t cross the line into also being harmful or dangerous.” Harmful or dangerous? Ah, that would be stunts such as the so-called “Bird Box challenge,” where some folks, inspired by the recent Netflix Original starring Sandra Bullock, have been attempting a range of activities while wearing a blindfold — like driving a car or walking along a train track. We can throw 2018’s Tide pod challenge into the same category too — a rather risky endeavor that involved eating the contents of laundry detergent packs. “Anything for a thumbs up,” appeared to be the mantra behind the madness, though creators are after views to boost their revenue off ads, too. YouTube already bans content showing dangerous activities, but the new rules go into more detail regarding “dangerous challenges and pranks.” It tells creators that it doesn’t even permit videos where someone believes they’re in some kind of physical danger, even if the situation is actually safe. The company offers examples such as home-invasion setups or drive-by shooting pranks. Stunts like fake bomb threats fit neatly into that category, too, though something as daft as that can also get you jailed. 
YouTube was also keen to make it clear that it doesn’t allow pranks “that cause children to experience severe emotional distress, meaning something so bad that it could leave the child traumatized for life,” adding that it’s been working with psychologists to develop guidelines around the kinds of setups that go too far. The company said it’s giving creators two months to review and clean up their content. During this time, challenges and pranks that violate its guidelines will be removed if its team gets to the banned content first, but the channel will not receive a strike during this period. If a creator disagrees with a strike, they can appeal against it. A strike disappears after 90 days, but creators are warned that if their account receives three strikes for violations within a 90-day period, it will be terminated. YouTube has received a barrage of criticism in recent years for hosting offensive content on its site, an issue which it says it’s tackling with machine-learning algorithms and the addition of more human reviewers. Source
  9. The company, Devumi, had already filed for bankruptcy in mid-2018. Today, a company that sold social media 'likes' and 'followers' settled its legal case with the New York Attorney General's Office in the first-ever criminal investigation of its kind. The company, Devumi LLC, operated the now-defunct devumi.com website, where it sold likes and views on YouTube, Vimeo, and SoundCloud, endorsements on LinkedIn, pins on Pinterest, and likes, retweets, and followers on Twitter. While social media influencers and celebrities knew about the site for many years, and some of them even used it to boost their online presence, the site was brought to the public's attention by a New York Times exposé in January 2018. The NYT article sparked an immediate investigation into Devumi's fraudulent business by New York Attorney General Letitia James. Her office found that Devumi sold both bot (automated) and sock-puppet (human-operated) accounts. Furthermore, investigators also found that Devumi sold accounts and online activity that it had illegally copied from real accounts, including their real avatars and content. Both were seen as criminal behavior because the bot accounts helped generate interest or tricked companies into buying advertising or product placements with social media celebrities that faked their follower numbers. Seeing the writing on the wall that would have led to a long legal case, Devumi representatives signed a settlement with the New York Attorney General's Office to make the case go away, on the promise that it and all related companies and associates would stop engaging in any similar practices. Devumi, in the meantime, had ceased operations months before, claiming that the NYT article generated a drop in sales, with social media users going to rival companies for fear they might be exposed for buying likes and followers like the celebrities named in the Times piece. 
These included John Leguizamo (actor), Michael Dell (Dell owner), Ray Lewis (retired NFL player), Kathy Ireland (model and actress), Akbar Gbaja-Biamila (host of the "American Ninja Warrior" show), and, ironically, even Martha Lane Fox (Twitter board member). Source
  10. Peach isn’t dead yet: it’s a 1.0 app in a web 2.0 world. Social media is increasingly the internet: Facebook was founded in 2004, and it ate the web as we knew it then — a collection of microsites and curiosities run by individual proprietors. It used to be that personalization was what you did to your site; now it’s found in the ads you’re served. Peach — the microblogging platform — was seemingly designed against those circumscribed possibilities, as an antidote to the weird world-eating dominion of the Twitters and Facebooks and Instagrams of the universe. Its whole purpose was to bring people back to the early days of online, when the only limits were in what you could code. To describe it in a line: Peach is an online diary that you can share with your friends, like LiveJournal and Tumblr before it. Peach went down last week. It took a few days before the developers addressed the situation online, and in that time its users were distraught because it wasn’t clear if the app was ever coming back. (As of this writing, it’s still not back up.) It is a special place: warm, inviting, and private, a port hidden from the chaotic storm of posts that make up the contemporary internet. I got in touch with some of those people to see what Peach meant to them, and what it felt like to face down the possibility that this safe, beautiful place might disappear. “To say I’m bereft would be an understatement,” wrote my friend Alison, who owns an aerial gym and who was a prolific, early blogger. Many people felt the same way. “I’m holding out hope that I’ll wake up tomorrow and check Peach and everything will load like nothing ever happened, but the app has been on borrowed time for so long that anything besides accepting its death feels foolish,” said Peter McCracken. 
“When I first joined, Peach felt like a refreshing breeze after being jammed into a hot and crowded subway car for two hours. It was antithetical to the numbers game of accumulating followers and posting nonstop that plagued my Tumblr and Twitter circles at the time. Here was a charmingly simple app that let you curate your own space and peek in on friends.” McCracken went on, saying Peach is more of a web of interconnected diaries than it is a social network. There, he posted about the major shifts that occurred in his life — career changes, relationships, and the like. It was, he wrote, decidedly uncool. “I probably won’t be able to unlearn the muscle memory that made me open it in vain to check notifications on posts that are no longer there,” he wrote. “It happened five times while I was writing this.” A person who went by Crow sent me an email in which they said Peach was a place where they went to interact with other people. “I’m autistic and really have no idea to interact with people in real life & it’s a lot easier online especially on Peach,” they said. “Without Peach I wouldn’t have met my ex who is still one of my closest friends today, my art wouldn’t have improved as much, and I think I would be a lot less happy if I’m being honest.” A person named Michael wrote me, saying that they used Peach because it was isolated from the wider internet, which meant it was easier to post without consequence — unlike somewhere larger, like Facebook. “I go on there and clear the drafts for my brain, and I keep up with a few internet friends, and it’s just nice,” they said. “And like it feels trite to call something a safe space, but Peach is a motherfucking safe space!! 
In all the ways that twitter is not and refuses to be.” For most people, Peach seemed to be a place to mature, in the realest sense: It’s a place where the wider world doesn’t interfere, somewhere away from the seriousness of the internet at large. A woman named Helena wrote: “My time on peach roughly matches up to my first serious adult relationship (I’ve been with peach slightly longer than my partner, which is wild to think about) and I think in both my relationship with them and my relationship with my Peach friends I’ve realised how much real intimacy is based on those two things — being able to be honest about the hard stuff, and being able to be listened to on the things you don’t think are worth mentioning.” “People are (half-)joking a lot about Peach being a therapist but the time I had access to actual therapy I struggled so much with the idea of telling the truth about how I felt to a stranger, whereas I had a handful of friends I made on Peach (we had mutual friends, not total randos) who I immediately felt like there was this bond of trust with because we were both in the same boat, being honest with each other on a dusty abandoned mobile app,” she went on. That was the magic. There aren’t many open spaces left online, and there aren’t too many playgrounds left. Peach is a 1.0 app in a web 2.0 world; its architecture is nostalgic for a time we’ve left behind. Though not irrevocably, at least not yet, because it’s still around. Source
  11. Social media companies like Facebook and Google have been slammed in the wake of the Christchurch massacre for failing to stop the spread of violent footage posted by the shooter. Pressure is mounting on them to do more after the terrorist’s video quickly spread across the internet on Friday, but former tech employees say it’s not going to get any better. Yesterday, Facebook said it removed 1.5 million videos of the New Zealand shootings, including 1.2 million that were blocked from being posted. That implies 300,000 versions of the video were available to watch for at least short periods of time before Facebook managed to pull them down. For hours after the attack, the video circulated on the other popular content-sharing sites YouTube and Twitter, as well as lesser-known video streaming sites. Prime Minister Scott Morrison has taken aim at social media companies for not doing enough to prevent the spread of Friday’s live-streamed attack. He demanded that tech giants provide assurances that they would prevent attacks from being shown online, suggesting live streaming services could be suspended. Opposition leader Bill Shorten also took aim at social media sites for hosting hate speech and not being accountable for the spread of anti-social content. Criticism has come from all corners, but serious questions remain about whether these sites can reliably be tasked with preventing another horrific live-streamed video from being so widely circulated again. ‘IT ISN’T GOING TO GET A LOT BETTER’ These companies use a combination of algorithms, human workers and user reporting to police content. But given the huge volume of postings during an event like Christchurch, it is currently an impossible task to block everything in real time. Alex Stamos is a computer scientist and the former chief security officer at Facebook. 
The day after the massacre he took to Twitter to lament the immense difficulty faced by a company like Facebook when so many users willingly post the violating footage. “Millions of people are being told online and on TV that there is a video and a document that are too dangerous for them to see, so they are looking for it in all the normal places,” he said, sharing a picture which showed a spike in Google searches for “New Zealand shooting” on Friday. “So now we have tens of millions of consumers wanting something and tens of thousands of people willing to supply it, with the tech companies in between.” Even if the company’s filtering systems were bulletproof, questions still remain about what should be allowed for legitimate reporting purposes and how to differentiate, he wrote. In short, “It isn’t going to get a lot better than this.” In fact, it will likely get worse. When it comes to Facebook, others were quick to point out that recent changes announced by CEO Mark Zuckerberg to introduce encrypted messaging and ostensibly boost privacy on the platform will limit the company’s ability to pull down infringing content. “End-to-end encryption prevents anyone — including us — from seeing what people share on our services,” Zuckerberg said earlier this month. According to former Facebook exec Antonio Garcia Martinez, a cynic might see this as a way for Facebook to protect itself against this kind of criticism. “Zuck’s recent statements about encryption, interpreted uncharitably, are a way to get out from under this content moderation curse forever, with an idealistic sheen,” he wrote on Twitter this morning. “By the way, I’m told the video is still circulating on WhatsApp, and there’s nothing FB can do about it due to e2e (end-to-end encryption),” he added. Tech firms have long struggled to balance their ethos of supporting free speech with the need to remove and prevent the spread of terrorist content. 
In 2016, Google, Facebook, Twitter and Microsoft announced they had teamed up to create a database of unique digital fingerprints known as “hashes” for videos and images that promote terrorism. The approach, known as perceptual hashing, means that when one company takes down a piece of violating content, the other companies can use the shared hash to identify and remove the same content. But like other systems designed to improve content moderation, it is imperfect and locked in a never-ending game of cat and mouse with users who are intent on sharing content. And it’s a problem that doesn’t look likely to go away any time soon. Facebook CEO Mark Zuckerberg has signalled major changes to come to Facebook. Source
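The shared-hash scheme the article describes can be illustrated with a toy sketch: each service computes a perceptual hash of known violating media, and a re-upload is flagged when its hash lies within a small Hamming distance of a database entry. This is a minimal "average hash" on an 8x8 grayscale grid for illustration only; production systems (such as Microsoft's PhotoDNA) use far more robust fingerprints, and the thresholds and function names here are assumptions, not the consortium's actual implementation.

```python
def average_hash(pixels):
    """pixels: 8x8 list of grayscale values (0-255). Returns a 64-bit int
    where each bit records whether that pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_known_violation(candidate_hash, shared_database, threshold=5):
    """Match a new upload's hash against the industry-shared hash list;
    a small Hamming distance tolerates minor re-encoding changes."""
    return any(hamming(candidate_hash, h) <= threshold for h in shared_database)

# Toy demo: a known image, a slightly brightened copy, and an unrelated image.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
near_copy = [[min(255, v + 3) for v in row] for row in original]
unrelated = [[255 if (r + c) % 2 else 0 for c in range(8)] for r in range(8)]

database = [average_hash(original)]
print(is_known_violation(average_hash(near_copy), database))   # → True
print(is_known_violation(average_hash(unrelated), database))   # → False
```

The design choice worth noting is the distance threshold: an exact-match hash (like SHA-256) breaks under any re-encoding, which is precisely the cat-and-mouse problem the article mentions, so perceptual schemes trade exactness for tolerance to small perturbations.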
  12. JAKARTA: Indonesia will meet social media companies to discuss its plans to impose fines of up to around US$36,000 (RM148,914) if they allow pornography, violence or other "negative" content on their platforms, a communications ministry official said. The South-East Asian country aims to push firms to better monitor and delete content the authorities deem obscene, Semuel Abrijani Pangerapan, the ministry's director of information applications, said late Nov 5. He told Reuters the ministry would issue a regulation governing the mechanism for fines following discussions with the companies. The fines could go into effect in 2021. "The point of this is that control of content will no longer be the job of the government," Pangerapan said by telephone, adding that he would invite companies including Google, Facebook and Twitter. Representatives of Twitter and Facebook did not immediately respond to requests for comment. At an earlier press conference, he said "negative" content could include pornography or radicalism, and fines could range from 100mil rupiah (RM29,500) to 500mil rupiah (RM147,504). The move comes amid wider regional efforts by South-East Asian governments to demand action from global tech giants on content regulation and tax policy. The stakes are high for governments, which are counting on the digital economy to drive growth amid domestic political tensions, and Internet companies, which view South-East Asia's social-media-loving population of 641 million as a key growth market. Indonesia is a top-five market globally for US tech giants Facebook and Twitter. Authorities have succeeded in getting social media companies Telegram and TikTok to establish content monitoring teams in Indonesia after briefly banning them over "negative content". 
Communications ministry officials told Reuters in August they were working on a "three-letter system," meaning that if a platform fails to respond to three government requests to engage on an issue, then it would be banned from Indonesia. Indonesia has already blocked more than 70,000 websites displaying "negative content" such as pornography or extremist ideology in 2018 using a so-called "crawling system" that automatically searches internet content and issues alerts when inappropriate material is found. The country's Internet economy is the largest and fastest-growing in the region, on track to cross the US$130bil (RM537.74bil) mark by 2025, according to a report by Google, Singapore state investor Temasek Holdings and global business consultants Bain & Company. Source: Indonesia to meet social media firms as it eyes ‘negative content’ fines (via The Star Online)
  13. Hey guys, I just wanna share something with you to see if someone can help me, because I am frustrated. My Facebook and Instagram accounts were hacked for the 4th time this month. When it first happened, I created a new Hotmail account and linked it with both my Facebook and Instagram profiles because my original Hotmail was also hacked. 2-step verification was enabled for both social media accounts but still didn't protect me. After the second and third times, I became convinced that someone had access to my phone or laptop, so I reset both of them. After each hack I recovered my accounts using my phone number. Last night was the fourth time, so instead of recovering and re-using them, I recovered and deleted both social media accounts. Can someone please give me some advice on what I can do to protect myself if I create new social media accounts or reactivate my existing ones? Thank you
  14. "SOCIAL MEDIA PLATFORMS should advance FREEDOM OF SPEECH," White House says. Share on Facebook Share on Twitter Donald Trump has long accused social media platforms like Facebook, Twitter, and YouTube of political bias. On Wednesday, his White House launched a new online form that allows members of the public to report political bias in their content moderation decisions. "SOCIAL MEDIA PLATFORMS should advance FREEDOM OF SPEECH," the form says (capitalization in the original, of course). "Yet too many Americans have seen their accounts suspended, banned, or fraudulently reported for unclear 'violations' of user policies. No matter your views, if you suspect political bias caused such an action to be taken against you, share your story with President Trump." The form asks users to provide their name and basic demographic and contact information. Users then provide details about the content that was censored and can provide screenshots of messages from social media companies about moderation decisions. The form also collects respondents' email addresses and asks for permission to add users to White House newsletters. Respondents are also asked to accept a user agreement that gives the Trump Administration a broad license to use the information, including publishing it. The form singles out four social media platforms by name: Facebook, Instagram, Twitter, and YouTube. Users can also choose "other" and type in another platform. As these platforms have become more prominent, they have faced harsh criticism from both sides of the political spectrum. Liberals have attacked them for being too slow to block online harassment and hate speech. Social media companies have responded by beefing up their moderation efforts—but that has caused conservatives to worry about mainstream conservative content getting swept up in the dragnet. Last month Vice reported on a recent internal discussion at Twitter addressing this very issue. 
During an all-hands meeting, someone asked why the platform doesn't use automated tools to remove white supremacist content the way it has for ISIS propaganda. A Twitter employee who works on the issue reportedly said that one reason was that filters designed to identify white supremacist accounts could also catch the accounts of some Republican politicians. The latest White House initiative ratchets up the pressure on social media companies from the right, encouraging them to tread lightly as they consider more aggressive moderation of far-right content. Source: White House unveils new tool to report censorship by social media giants (Ars Technica)
  15. The Problem With Verification Travis Hawley was scrolling through the comments sections of LeBron James’s Instagram posts recently when he noticed some postings from other athletes. “Dm me to buy verification badge,” wrote Dmitry Orlov, a player for the Washington Capitals. “Dm me to buy verification badge! Paypal, Zelle, Cashapp,” Malcolm Grant, an American professional basketball player in Lithuania, commented repeatedly. Other athletes joined in. The Cleveland Browns defensive tackle Sheldon Richardson, the Cincinnati Reds third baseman Jonathan India, the San Diego Padres shortstop Fernando Tatis, and the Tampa Bay Buccaneers wide receiver Bobo Wilson all claimed in the comments sections of major sports stars and celebrities that they could sell blue check marks to anyone, for the right price. Their accounts had, of course, been hacked; all have since been restored. But while their comments were live, some people took the bait, asking what they’d need to do or how much they’d need to pay. Hawley, who worked as a government intelligence analyst before getting into social-media marketing, told me he suspected it was a scam but decided to follow the hackers down their rabbit hole, chatting with one man on WhatsApp and by telephone and, eventually, paying him several hundred dollars. When Hawley made it clear he wouldn’t be sending any more money after his initial offering, the hacker blocked him on WhatsApp and Instagram. Hawley remains unverified. The network of Instagram verification-peddling scammers that Hawley encountered is just one of many groups of people seeking to exploit the company’s mysterious verification process for personal gain. Some hackers create fake accounts claiming to offer blue check marks only to steal users’ personal data. Others cheat influencers with fake verification services before commandeering their accounts. Hundreds of people online advertise verification services. 
And some users have even been able to obtain a check mark after paying thousands of dollars. But the prevalence and longevity of these verification ploys reveal more about the system than the scammers: It’s only because verification is so opaque, and so seemingly arbitrary, that they seem plausible in the first place. Being verified comes with perks: Your comments are typically featured higher, it’s harder to impersonate you, and you get more robust insights on your personal account. Ten years ago, Twitter pioneered the blue check mark; the initial rollout was messy. Five years later, at the end of 2014, Instagram introduced verification badges, but initially only verified users through its partnerships team, which works with top-tier celebrities and brands like Beyoncé and Madonna. Over the years, however, the threshold for verification on all platforms has been lowered significantly. Instagram began giving out more check marks to journalists and business figures, and Twitter introduced a public verification-request form in 2016. In 2017, the company claimed to put verification on pause, but continued to verify users at a steady rate behind the scenes. In 2018 Instagram also introduced a public verification-request form, and the number of verified users ballooned even more. Instagram doesn’t release statistics on how many users it verifies per year, but both platforms almost never revoke badges unless a user explicitly violates terms. This means that even if you leave an industry or stop doing the thing that you became well known for, you won’t lose your badge. I recently came across the profile of a woman who worked at a small public-relations company in Texas and had just a few hundred followers on Twitter and even fewer on Instagram, yet was verified on both platforms. I discovered through LinkedIn that she had been a journalist several years ago for hardly more than a year. 
(I myself am verified, simply because I’m a reporter who could theoretically be impersonated, not because I am a well-known figure. But if I left my profession tomorrow, I would, under the current system, retain my check mark.) The line between celebrities and average users has also blurred over the years, thanks to the proliferation of influencers. Reesa Lake, the executive vice president of brand partnerships at Digital Brand Architects, an influencer-marketing agency, told me that verification has become so arbitrary that it’s lost meaning within her industry. “I think the way brands look at it has changed,” Lake said. “It’s not as essential. They look for different qualities in influencers other than if they’re verified. Verification is a personal vanity metric at the end of the day.” She points to someone like Helena Glazer Hodne, known online as Brooklyn Blonde, as a prime example of a hyper-successful influencer who consistently nabs quality brand deals without having a check mark. All this has led to a glut of mid-tier celebrities, sports stars, and moderately known people across a wide span of industries getting blue check marks on Instagram and Twitter—while people with as many, if not more, followers don’t have them. Verification scams, like the one Hawley happened upon, are so common because users constantly encounter seemingly nonfamous people with check marks. It’s easy to assume that if this many unremarkable people have achieved verification, it must be something you can buy. “How do people get verified cuz I [swear to God] they’re just givin em out to randos so they feel better about themselves,” one person tweeted. Some people have suggested incremental changes to the verification process. Mahzad Babayan, the director of talent at Fullscreen, an entertainment company that works with social-media stars, told me that perhaps establishing a review process is in order. 
“If we’re looking at oversaturation on the platform, they probably need to implement something like a review process,” she said. “It’s like having a job. Don’t you get reviews?” Niv Dror, a venture capitalist, told me that the verification team should offer customer support for those seeking it. “They should have a human you can contact,” Dror said. But these stopgap measures still perpetuate a broken system and won’t ultimately eliminate abusers like the ones Hawley encountered. Rather than have a binary system where users are either check-marked or not—and the lucky few get access to special perks that all users would likely appreciate—Twitter, Instagram, and the like should rebuild the whole system and adopt new features to meet users’ needs. “When I ask friends how they get verified, nine times out of 10 they say they know a guy,” says Siqi Chen, the chief product officer of Sandbox VR. “It seems to be a predictable symptom of the fact that it takes knowing someone to get verified. But when Instagram and Twitter rely on personal relationships it opens the opportunity for scamming. Anyone can say ‘I know someone at Insta who can get you verified,’ and there’s no way to prove or disprove that.” One key way check marks function is as a badge of authenticity. If you’re seeking out clothing from a specific brand, or looking to see if a profile is a specific account and not an impersonator, a verified badge comes in handy. But verified badges are a poor way to deliver this information to users. Twitter and Instagram could, for instance, simply allow any user to link their offline identity or business to their profile. Eugene Wei, a former product head at Flipboard and Hulu, told me that making such a change is key for Instagram as the company moves further into e-commerce. He likens it to when Amazon rolled out verified buyer and seller badges. “You may be like, ‘Hey, is this a legit retailer that I should trust?’” he said. 
“Having a different system that’s more meaningful and not using a blue check, which is really broad and meaningless, makes sense.” Chen thinks platforms would also do well to develop a new system that clearly communicates information about how and why users are verified. “From a product standpoint, if you want to design something scalable, you need clear standards so that you can give this decision to anyone, or even a machine, to make a judgment call,” he says. “It’s the lack of clear standards that results in the state we have today.” Until Twitter and Instagram take steps to publicly delineate what exactly verification means on the platform, and offer a transparent public process for vetting people, bad actors will exploit people’s confusion. And more people will continue to dream up theories about how the process works. “I wonder if [the confusing verification process] isn’t intentional,” Chen says. “If the hidden standard is that you’re a notable person, maybe the fact that you can reach someone at Facebook or Instagram to get verified is part of the test.” Source
  16. A lack of security training for interns, and their obsession with sharing content on social media, could lead to a perfect storm for hackers looking to collect social engineering data. Researchers are warning of a new security Achilles’ heel for enterprises, and it may not be what they expect. That threat is interns. According to researchers, interns are unwittingly posting confidential and valuable company insights via social media that pose a security risk to the companies that hire them. While insider threats are nothing new and have often been linked to disgruntled employees, or hires who unintentionally click on malicious phishing emails, interns bring an entirely new threat to companies. Lax security training for company interns – coupled with the attachment of Generation Z to social media – is providing a lucrative opportunity for hackers to collect social engineering information, researchers said. More disturbingly, the level of information posted online – including details about office layout, company data, and even badge information – was enough to allow researchers with IBM X-Force Red to actually create their own spoofed badge and physically breach an office while purporting to be an employee. “From posting photos of their security badges to video blogging a ‘day in the life’ at the office, the social media habits of interns and eager young employees make them a rich source of information for hackers,” said Stephanie Carruthers, global social engineering expert with X-Force Red, in a recent post. A New Threat When it comes to collecting data for social engineering, “social media is a goldmine,” said Carruthers – and across Snapchat, Instagram, YouTube and Facebook, members of Generation Z are the most avid users of social media to date, according to a Pew Research survey. “About 75 percent of the time, a social media search turns up the information I’m seeking within just a few hours,” she said. 
“This is especially true for large companies, where these posts are most often from interns or new employees.” For instance, interns may post pictures of their office to Snapchat, Instagram or Facebook, as well as videos to YouTube, revealing internal office layouts, badge pictures, Outlook calendars and more in the background – an easy way for hackers to collect social engineering tidbits or even breach company premises physically. In fact, that’s exactly what Carruthers did – after discovering an Instagram photo of an intern revealing a new corporate badge, IBM researchers were able to produce a fake badge using photo editing: “The fake badge may not work on doors, but it could work for piggybacking when other employees enter a secure location,” she said. Other platforms, like Glassdoor, offer troves of valuable information for phishing emails – including company organizational charts, salary ranges or typical interview information. “Using this information, an attacker could develop phishing emails, preparing the subject and content according to what’s trending among employees of a given company,” Carruthers said. “Unfortunately, employees could easily fall for a well-crafted email, and they may forget to check the sender’s legitimacy.” Added to that equation is a lack of proper security awareness training for onboarding interns and new hires at many firms, she said. “For companies that don’t include security awareness training as part of onboarding, new employees may not be trained until the next round of companywide instruction, which could be up to a year away,” she said. “Excited new employees often post their #NewJob #FirstDay #CompanyName via a hash-tagged selfie, showing off their new workspace and neglecting to realize that sensitive company information may be in the background.” Protection Insider threats continue to be a top concern across the industry. 
In fact, according to the Verizon Data Breach Investigations Report from this year, “privilege misuse and error by insiders” account for 30 percent of breaches. How can organizations protect against this insider threat? Companies should rethink their social media security policies, as well as train managers and social teams to spot any risky data posted online, Carruthers said. And because photos from the office may inevitably end up online, she recommended that companies also establish a safe photo space – an area of the office where any sensitive information is banned. The top method of protection, however, is implementing security training, Carruthers stressed. “Make sure your interns and new hires are getting this as part of their onboarding process,” she said. “You can make this fun and effective by helping them to understand the ways a hacker could use the seemingly harmless info they might consider posting.” Source
  17. Next up for Firefox's Tracking Protection: Social Media tracker blocking Mozilla plans to extend the functionality of Firefox's Tracking Protection feature soon by adding Social Media tracker blocking to the list of protections. Social Media tracker blocking is not an entirely new feature, but Mozilla wants to move it into its own Tracking Protection category and improve it at the same time. Tracking Protection is a built-in feature of the Firefox web browser that has been designed specifically to mitigate tracking on the Internet. Mozilla recently enabled Tracking Protection by default in stable versions of Firefox; previously, the feature was enabled only in private browsing windows. The organization announced in mid-2018 that it had plans to push Tracking Protection, and that is exactly what it has been doing since. The next upgrade addresses another major source of tracking on today's Internet: social media tracking. Mozilla plans to introduce the feature in Firefox 70 Stable, but depending on how development progresses, the release may be postponed or even pulled. Firefox 70 Stable is scheduled for an October 23, 2019 release. Mozilla plans to block social media trackers by default in Firefox once the feature lands. Tracking Protection supports three states: Standard, Strict and Custom; Standard is the default, and social media tracker blocking is enabled in it. Mozilla maintains a list of trackers that will get blocked when the feature is enabled. The current list includes Facebook, Twitter, LinkedIn, and YouTube, and the various tracking domains that these social media services use. Firefox users may click on the Shield icon in the browser's address bar to display the types of trackers that are blocked on the active site. Social Media Trackers will be listed there if any are blocked, and users may click on the small arrow icon to display the list of trackers the site attempted to load. 
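For readers who like to manage these settings directly, the states above surface as about:config preferences that can be set in a user.js file. This is an illustrative sketch only: the pref names below are assumptions based on Firefox's content-blocking naming around version 70 and should be verified in about:config before use.

```
// user.js — illustrative only; pref names are assumptions, verify in about:config
user_pref("browser.contentblocking.category", "standard");              // "standard", "strict" or "custom"
user_pref("privacy.trackingprotection.socialtracking.enabled", true);   // block social media trackers
```

Most users will never need this, since the same states are exposed in the Privacy & Security section of Firefox's settings.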
Mozilla plans to make use of doorhangers in the future to inform users about tracking protection features. Closing Words Tracking Protection is a core feature of the Firefox web browser that has been improved significantly in the past 12 months to block not only more trackers but also other unwanted content such as cryptomining or fingerprinting scripts. There is certainly room for improvement, either by improving existing protections, e.g. fingerprinting, or by implementing new tracking protection groups. Source: Next up for Firefox's Tracking Protection: Social Media tracker blocking (gHacks - Martin Brinkmann)
  18. LONDON (Reuters) - Sixteen of the world’s biggest advertisers have joined together to push platforms such as Facebook, Twitter and Google’s YouTube to do more to tackle dangerous and fake content online. The Global Alliance for Responsible Media will also include media buying agencies from the major ad groups - WPP, IPG, Publicis, Omnicom and Dentsu - as well as the platform owners, the group said on Tuesday at the ad industry’s annual gathering in Cannes, France. Luis Di Como, executive vice president of global media at Unilever, said it was the first time that all sides of the industry had come together to tackle a problem that had far-reaching consequences for society. “When industry challenges spill into society, creating division and putting our children at risk, it’s on all of us to act,” he said. “Founding this alliance is a great step toward rebuilding trust in our industry and society.” He said the group would initially focus on content that was a danger to society, such as terrorism. Platform owners had taken steps to address the problems, he said, but their focus had been more reactive - tackling content after it appeared - than proactive. The alliance will work together to develop processes and protocols to protect people and brands, he said. Other brand owners in the alliance include Adidas, Danone, Diageo, Mondelez International, Nestle and Procter & Gamble. Source
  19. Multiple vaping companies were sent letters by federal regulators this week over posts by social media influencers that did not include necessary warnings about the vape products. The warning letters—which were sent to Artist Liquids Laboratories, Humble Juice Co., Hype City Vapors, and Solace Technologies—stated that the posts in question were reviewed by the Food and Drug Administration (FDA) and the Federal Trade Commission (FTC) and found to lack the required warning statement that the product both contains nicotine and that nicotine is an addictive chemical. According to the letters, the posts by influencers in partnership with the respective companies were shared to Facebook, Instagram, and Twitter, platforms on which some of the influencers had tens of thousands or more followers. In some cases, the letters said, posts by the companies themselves on social media or their websites failed to communicate the required warning language. “Given the significant risk of addiction, the failure to disclose the presence of and risks associated with nicotine raises concerns that the social media postings could be unfair or likely to mislead consumers,” the letters read. “The FTC urges you to review your marketing, including endorsements by your social media influencers, and ensure that necessary and appropriate disclosures are made about the health risks of nicotine.” Lorenzo De Plano, a co-founder of Solace, told Gizmodo in a statement by email that the letter his company was sent was related to a post by a single influencer who did not include necessary warnings in their post, adding that the company is no longer working with that individual. “All of Solace Vapor’s internal packaging, marketing and nicotine warnings are compliant with FDA standards,” Plano said. “Solace Vapor does not condone the use of our products by anyone who previously was not a tobacco and or cigarette user. 
We will be reviewing and terminating any and all 3rd party influencers who may not be compliant with our marketing policies. We hope that all other companies in our industry do the same.” The letters stated that the companies would be required to submit a written response within 15 working days of receipt that outlined their timeline for corrective actions. Spokespeople for Artist Liquids, Humble Juice, and Hype City Vapors did not immediately return requests for comment. The FDA and FTC said that the warning letters come as part of the FDA’s Youth Tobacco Prevention Plan, which among other initiatives is aimed at cutting off access by kids to tobacco products but also includes policing ads and marketing that may target youth. Vape giant Juul previously came under fire for its own marketing, which has been accused of attempting to lure teens to its products and contributing in large part to the widespread use of vape products among kids. The company has since folded many of its social media accounts, including Instagram. “Years of progress to combat youth use of tobacco is now threatened by an epidemic of e-cigarette use by kids, and unfortunately research shows many youth are mistaken or unaware of the risks and the presence of nicotine in e-cigarettes,” Acting FDA Commissioner Ned Sharpless said in a statement this week. “That’s why it’s critical we ensure manufacturers, retailers and others are including the required health warning about nicotine’s addictive properties on packages and advertisements—especially on social media platforms popular with kids.” Source
  20. More than 4.7 million counterfeit products seized, over 16 400 social media accounts suspended and 3 300 websites closed in the EU-wide operation Aphrodite II against trafficking of counterfeit goods. A joint investigation carried out by law enforcement authorities from 18 countries and supported by Europol, resulted in the seizure of 4.7 million counterfeit products. During the operation, 16 470 social media accounts and 3 400 websites selling counterfeit products were closed. The online fake goods marketers were selling a large variety of counterfeit items including clothes and accessories, sports equipment, illegal IPTV set-top boxes, medicines, spare car parts, mobile phones, miscellaneous electronic devices and components, perfumes and cosmetics. The operation led to the arrest of more than 30 suspects and reported 110 others to respective judicial authorities. A select number of suspects are part of two distinct criminal networks responsible for producing and trafficking counterfeit products online. Several investigations are still ongoing. Europol's Intellectual Property Crime Coordinated Coalition (IPC3) and the Italian Finance Corps (Guardia di Finanza) coordinated the joint investigation, with cooperation from the private sector. The European Union Intellectual Property Office (EUIPO) supported the activities of IPC3 with a grant. Law enforcement agencies from Austria, Belgium, Bulgaria, Croatia, Cyprus, Czechia, Greece, Hungary, Ireland, Italy, Malta, Netherlands, Portugal, Romania, Slovakia, Spain, Ukraine and the United Kingdom are all involved in the operation. Digital platforms - no safe haven for counterfeits Criminal groups continuously abuse the communication opportunities of digital platforms such as websites, social media and instant messaging to traffic and sell counterfeit products. 
The exponential growth of internet platforms has also affected the development of online marketplaces (known as e-stores) that are considered as alternative retail channels. These new markets also take advantage of social channels to perpetrate illicit activities. Law enforcement, supported by the private sector, is therefore extending its response to online trafficking of counterfeit products. To counter the threat, Europol is examining the scale of the problem, gathering evidence and monitoring social media and sales platforms. Selling fakes on social media Sellers can advertise counterfeit goods through overt social media posts - with photos of the product and price - or through hidden links to other marketplaces located outside the EU. In the latter case, details of the transaction are arranged through other communication channels such as instant messaging applications or even by telephone under different names. Couriers deliver the packages while the payment is made with prepaid cards, money transfer companies or other forms of electronic payment and web-based services. Fake products sold on social media can be extremely dangerous. Lacking any quality control and not complying with legal norms, fake toys, medicines, body care products, fake spare car parts, inks and material used to produce imitation luxury products and clothes can be harmful to consumer health. IPC3 intends to promote its recurring operation, Operation Aphrodite, to encourage more countries and private companies to get on board, contribute their expertise and explore new operational methodologies. Source
  21. A protected right? Free speech and social media

A decade ago this very month, in Cairo's Tahrir Square, social media was being praised. Its role as an organizing tool during the pro-democracy rallies had many calling the Arab Spring the "Facebook Revolution" instead. But for all its glowing promise, we quickly learned social media is only as good as how it's used.

"The major difference between now and then is, more than ever before, our experiences on social media are determined by hidden decisions made by the social media companies themselves," said Ramesh Srinivasan, who was in Tahrir Square back then researching how Twitter and Facebook were giving voice to the voiceless. "It used to be something more of an open pipe," he said.

Srinivasan is now an author, and a professor in UCLA's Department of Information Studies. "What we are seeing when we go online is likely to be that which is most sensational or inflammatory. They're predicting whatever is most likely to grab people's attention."

Keeping us engaged on social media is how Big Tech makes money, and the past four years have proven that lies and conspiracy theories are, unfortunately, more engaging than the truth.

"And the danger in that is, what?" asked correspondent Lee Cowan.

"The danger is it's gonna present us with an extremely distorted view of reality," Srinivasan replied. "The fringe becomes the new normal."

The Trump presidency was a case study in that. His reality-TV roots taught him that controversy gets ratings, and he used his social media feeds in much the same way. No one has a bigger "bully pulpit" than the President of the United States, and no one before Mr. Trump used it with such abandon online.

The attack on the Capitol changed all that. After years of defending his presence on their platforms, Facebook, Twitter, YouTube and other social media giants booted Mr. Trump, claiming he'd incited a riot.
Amazon removed an entire site from its servers: Parler, which had become the platform favored by many conservatives. "De-platformed" was a word we learned a lot about this past week.

"I think Big Tech has made a terrible mistake, and very, very bad for our country," President Trump said Tuesday. "They shouldn't be doing it. But, uhh, there's always a counter-move when they do that."

While many applauded the move, the precedent of shutting out the leader of the free world made many people uncomfortable, too. It's a huge power to wield, one that is currently held in the hands of a very few.

It's not a new argument; the CEOs of the major tech companies have been called on the carpet before. "Who the hell elected you, and put you in charge of what the media are allowed to report and what the American people are allowed to hear?" Sen. Ted Cruz asked tech executives during an October 2020 hearing on the Communications Decency Act. In 2019, Rep. Alexandria Ocasio-Cortez asked Facebook CEO Mark Zuckerberg, "So, you won't take down lies or you will take down lies? I think it's just a pretty simple yes or no."

But claiming Big Tech is running afoul of the First Amendment by de-platforming those it deems harmful may be missing the larger point. Cowan asked, "Is kicking someone off any of these social media sites an infringement of their free speech rights?"

"No, it isn't," replied Daphne Keller, who directs the Program on Platform Regulation at Stanford University's Cyber Policy Center. "They are not subject to the First Amendment. They are not the government." When it comes to digital speech and the First Amendment, she said, it gets messy pretty fast.

Cowan said, "This isn't just a free speech argument on the part of users; it's also a free speech argument on the part of the providers as well, right?"

"People who want to sue platforms, and force them to carry speech they don't want to, have a double First Amendment problem," Keller said.
"First of all, those people don't have a First Amendment claim against the platforms, and second of all, the platforms do have a First Amendment argument against being forced to carry speech they disagree with."

But what if the argument over regulation were reframed – less about speech, and more about changing how Big Tech exposes us to that speech?

Yaël Eisenstat used to work for Facebook as one of the heads of election integrity, where she saw firsthand just what these companies do with all that content. "This idea that it's just this free flow of information is false. It's a curated flow of information," she said. "It's a business model that is predicated on gathering as much of our human behavioral data as possible, to create these little boxes of who we are, to then target us with ads."

That's all fine if we're shopping for sneakers, she said. But those same algorithms apply to our politics, too. We'll willingly follow the ideas that pop up down the rabbit hole, and those who want their messages to spread know the more controversial, the better.

"I don't think that Mark Zuckerberg set out with the idea that, 'I wanna create a platform where the most outrageous, salacious, hate-filled speech wins.' I don't think that was his goal," Eisenstat said. "But instead of holding the platform responsible for what somebody posts, it's the tools that I want them held responsible for, not the actual speech on the platform, except for, of course, if the speech breaks the law."

"But it sounds like what you're saying, though, is that'd be changing the business model, pretty much," said Cowan.

"One hundred percent."

Big Tech has promised more transparency and better enforcement of its own rules when it comes to the spread of dis- and misinformation. Facebook removed more hate speech this year than ever before; Twitter, the same. Even TikTok is being more proactive. But that will likely not be enough going forward.
Cowan asked Ramesh Srinivasan, "Can we trust them to do this kind of regulation on their own, though?"

"No. We should not be trusting Twitter or any private company to magically serve the public interest. I think, if anything, the last four years have taught us that we can't do that."

A year before the end of his second term, President Bill Clinton talked about the challenges of regulating the internet: "That's sorta like trying to nail Jell-O to the wall." That was more than two decades ago. Times change, but the value of good stewardship doesn't.

"This discussion is about how we wanna live, how we wanna be, as a country and as a people," said Srinivasan. "It's a discussion about our humanity, at the end of the day. I do believe that you can force people to tie their actions to beliefs that might be a little more virtuous than their mere bottom lines."

Source: A protected right? Free speech and social media