Showing results for tags 'raid'.

Found 7 results

1. One of the alleged key members of piracy group SPARKS has pleaded not guilty to US Government charges that he was involved in a conspiracy that cost movie companies tens of millions of dollars. Jonatan Correa, aka 'Raid', has been granted bail on a $75,000 bond with a number of conditions attached.

On August 25, unofficial reports began to circulate that something big was underway in the top-tier piracy world known as The Scene. So-called topsites, the servers where masses of copyright-infringing content are stored, began shutting down globally as it became clear that law enforcement action was underway. Within hours, the US Department of Justice announced that three key members of the piracy groups known as SPARKS, GECKOS, DRONES and SPLiNTERS had been indicted, charged with a range of offenses connected to movie and TV show piracy.

As reports of wider arrests filtered in from law enforcement entities in Europe, the status of the indicted trio was clarified in the United States. George Bridi, 50, a citizen of Great Britain, had been detained in Cyprus on an INTERPOL Red Notice. Norway resident Umar Ahmad (aka 'Artist'), 39, was still at large. Jonatan Correa (aka 'Raid'), 36, had been arrested on US soil in Olathe, Kansas, and placed into custody.

The USDOJ alleges that Correa was involved in the 'Sparks Conspiracy' (the collective name for the prosecution) from around January 2011 through to August 2020. This differs from the charges against Bridi and Ahmad, whose 'conspiracy' charges run from January 2011 to January 2020. The reasons for this remain unclear. All three are charged with causing "tens of millions of dollars" in losses to film production studios.

According to a superseding indictment, Correa and the others fraudulently obtained copies of discs containing movies and TV shows in advance of their official release dates. It is further claimed that Correa remotely accessed a computer belonging to a co-conspirator in Westchester County in order to "illegally record and reproduce" copyrighted TV shows. At this stage, the identity of this alleged co-conspirator is being kept under wraps by the authorities. While Correa's co-defendants face additional charges including wire fraud and transporting stolen property, Correa faces a single charge of conspiracy to commit criminal copyright infringement.

Correa Arrested, Quickly Released, Pleads Not Guilty

Court records reveal that after being arrested in Kansas on August 25 at 07:00, Correa was presented before the Kansas district court on August 26 and released the very same day. Other documents that might offer additional information persistently return a "not found" error on records portal PACER, while access to others is simply denied. However, additional detail is available relating to Correa's status and plea.

Records indicate that Correa was released on bail with a $75,000 bond, secured by cash or property. His release comes with pre-trial supervision conditions, including that he must surrender all travel documents and submit to a ban on all new travel document applications. In addition, Correa has agreed to participate in a drug testing and treatment program and is forbidden from possessing firearms, other weapons, or any "destructive devices". He is further ordered to abstain from contact with co-defendants and witnesses, with conditions.
"Defendant shall have no contact with any co-defendant, witnesses known to Defendant, or any other members of the Sparks Group without the presence of counsel," a bail document reads, adding that Correa must not engage in any of the conduct alleged in his charging document either.

Perhaps most importantly of all, exactly one week ago on September 1, 2020, Correa appeared remotely before United States Magistrate Judge James L. Cott and was arraigned on the superseding indictment. Correa pleaded not guilty to the charges against him. Whether that position will change later down the line remains to be seen but right now, his alleged co-conspirators don't yet appear to be in US custody.

Alleged Co-Conspirators Bridi and Ahmad

According to the most recent official information, Ahmad (Artist) is reportedly still at large, perhaps in Norway, while Bridi is located some 5,500 miles away from New York on the island of Cyprus in the Mediterranean Sea. Since the unsealing of his indictment and from the filings made available to date, Ahmad hasn't been mentioned in court documents. In respect of Bridi, however, the US Government has requested and obtained multiple certified copies of his indictment and arrest warrant for the "purposes of extraditing the defendant to the United States."

When that will take place is unknown but at least as far as Correa's case is concerned, Acting United States Attorney Audrey Strauss told the court last week that the parties are currently in discussion over discovery and a pre-trial disposition. A conference has been set for October 26, 2020, for these discussions to continue.

As previously reported, the action against SPARKS has negatively affected pirated releases from The Scene, with a steep decline observed in volumes of content being made available.

Source documents here (1,2,3)

Source: TorrentFreak
2. When officers from Hungary's National Tax and Customs Administration raided a pirate IPTV provider, they were unsurprised to discover large amounts of satellite and computer equipment for capturing and distributing live TV. However, what they also found was hundreds of pounds of food that had been stockpiled by the operator, who hadn't been outside for months due to fears of catching the coronavirus.

2020 has developed into one of the most memorable years for the entire planet, but mostly for the wrong reasons. Not a day goes by without news of the coronavirus pandemic and its devastating effect on individuals, families, the economy, and health in general. In common with many industries, coronavirus has hit the entertainment sectors too, with few new films and TV shows coming out (with notable exceptions such as Mulan) as people are either forced or inclined to stay home and stay safe. Throughout all of this, however, pirate operations have remained mostly online, with notable spikes in interest reported earlier in the year.

IPTV Raid and Arrest But Authorities Didn't Expect This

As part of European efforts to crack down on the supply of IPTV, a few weeks ago officers of the National Tax and Customs Administration raided a pirate IPTV provider. What they found was extraordinary, to say the least.

Situated in what appeared to be a fenced-off barbed wire compound with CCTV surveillance, the outside of the building was perhaps not much of a surprise. Adorned with a large number of satellite dishes used to source original programming from the skies, the walls of the structure gave away what might lie inside. Indeed, the main contents of the building were as expected: an office with desks, chairs and various computers, plus a separate area containing what appear to be rows of servers used for capturing TV content from official providers and redistributing it over the Internet. In total, the authorities seized 52 computers, several decoders, TV cards, plus six servers dedicated to redistribution.

The image above suggests that the operation wasn't set up on the large budgets usually witnessed in police footage from raids elsewhere in Europe but, with at least 8,000 paying customers, it was clearly functional. However, in a video released by the authorities, it is apparent that on some of the server shelves also sit items of food, including dozens and dozens of packets of flour. A panning camera shot also reveals a large refrigerator and then a small mountain of stacked canned food. Another shot, possibly in another area, reveals little floor space due to yet more stacked cans, a significant area occupied by box upon box of dried pasta packets, plus additional shelves loaded with soft drinks, other foodstuffs, and the coronavirus pandemic staple – dozens of toilet rolls.

An Operator of the Service Was Scared of the Coronavirus

According to the National Tax and Customs Administration, the service was founded by a man from Nagykanizsa who first set out to "redirect" his mother's paid TV package to his own home for free. He teamed up with a man from Budapest to create a service that was subsequently offered to close friends too. Over time, however, they realized they could make money from the operation and began offering it on an invitation-only basis to outsiders. The network of customers grew and the service ultimately became available worldwide via the Internet.
However, earlier this year, when the coronavirus started to sweep across Europe, one of the people in charge of the operation reacted like many across the region. In fear of catching what could be a deadly virus, he stockpiled the mountains of food detailed above – hundreds of pounds/kilos – so that he could keep the service running without having to venture far outside.

"In addition to IT equipment, durable food was in the Budapest property. The young man had accumulated hundreds of kilos of flour, canned food and pasta in fear of the coronavirus epidemic, and had not ventured into the streets for months," the authorities explain.

Damage to Copyright Holders But Also Paying No Taxes

According to estimates provided by the tax authorities, the service is alleged to have generated around HUF 600 million (US$1.97m) for the pair but, for reasons that aren't explained, they "forgot" to pay the necessary duties to the state. This explains why the tax authorities were involved in the raid.

"An illegal IPTV service that is provided without payment of royalties infringes copyright or copyright-related rights, which is a criminal offense. The offender can be sentenced to up to eight years in prison," the National Tax and Customs Administration says.

Whether the self-imposed prison sentence of a few months will now be extended to a forced sentence of a few years is currently unknown.

Source: TorrentFreak
3. The raids and arrests this week targeting piracy release group SPARKS have caused chaos in The Scene, with members and groups going into hiding and new releases dropping like a stone. The targeting of just one group shouldn't have such a massive effect but it seems probable that in the weeks and months to come, we'll learn that one weak spot can be exploited to undermine a much larger infrastructure.

This Tuesday, TorrentFreak received more rapid-fire anonymous tips than at any time in recent memory. Demanding confidentiality is nothing new but tipsters and sources using anonymous mailers and obscured IP addresses, alongside repeat requests that identities aren't revealed, usually point to something particularly unusual. And indeed, something unusual was definitely underway.

Late Tuesday, documents filed under seal in the United States as early as January 2020 were suddenly unsealed, revealing one of the most important piracy-related cases of the past decade. As detailed in our report yesterday, a case brought by the US Government resulted in a Grand Jury charging at least three members of several related top-tier 'Scene' release groups – SPARKS, GECKOS, DRONES, ROVERS and SPLiNTERS – with conspiracy to commit criminal copyright infringement and other crimes. The US case has been ongoing for many months and the investigation certainly longer. Exactly how long was unknown until yesterday, when a Swedish prosecutor revealed that it had been underway "for years".

What took us by surprise, however, was the volume of reports on Tuesday, the claims of panic and fear in 'The Scene' globally, and what now appears to be a significant reduction in releases of all kinds from what is usually a prolific and cascading 'Piracy Pyramid' system.

Initial Information Proved Correct

People closely involved in The Scene are naturally secretive, or at least that's the mandate. The truth is that some are prepared to talk but everyone is so scared of being caught by the authorities or labeled by fellow members as insecure that truly verifiable sources are extremely hard to come by. As a result, reporting the finer details becomes a product of overlapping independent sources, none of whom want to be identified, which isn't ideal.

Nevertheless, during Tuesday we were told by multiple sources that topsites and warez-affiliated members and resources were being targeted by law enforcement, anti-piracy groups, or a combination of both in many regions. What they all had in common was that the entities were affiliated with SPARKS and various topsites. Another recurring theme was the focus on Nordic countries as being at the heart of the action. Many countries were mentioned, including the Netherlands, Germany, Switzerland and Poland but, again and again, the reports cited both Norway and Sweden as potentially the main 'problem' areas.

US Department of Justice Began Talking Yesterday

In an official announcement Wednesday, following the initial yet unofficial reports of raids 24 hours earlier and after the unsealing of the indictments, the USDOJ revealed the global scale of the operation against SPARKS and its affiliates.

"Thanks to the efforts of HSI, the Postal Inspection Service, Eurojust, Europol, and our law enforcement partners in 18 countries on three continents, key members of this group are in custody, and the servers that were the pipeline for wholesale theft of intellectual property are now out of service," the announcement read.
The US revealed that law enforcement authorities in many countries assisted in the investigation against SPARKS, including those in Canada, Cyprus, Czech Republic, Denmark, France, Germany, Italy, Republic of Korea, Latvia, Netherlands, Norway, Poland, Portugal, Romania, Spain, Sweden, Switzerland, and the United Kingdom.

SPARKS member George Bridi, 50, was reportedly arrested on Sunday in Cyprus on an INTERPOL Red Notice. Jonatan Correa (aka 'Raid'), 36, was arrested Tuesday in Olathe, Kansas, where he will appear in federal court. Umar Ahmad (aka 'Artist'), 39, was not arrested and as of Wednesday was reportedly still at large, according to the US Government.

The Nordic Connection

Several pieces of information received by TF during Tuesday indeed placed someone known as 'Artist' as a central and important figure in the action taking place. Umar Ahmad is now officially named as that key person but according to his indictment, the US Government is not seeking to prosecute him for SPARKS-related offenses beyond January 2020. That's also the case for George Bridi, an indicted co-defendant whose alias is currently unknown. The only SPARKS defendant charged with offenses up to August 2020 is Jonatan Correa, aka 'Raid'.

While there is room for speculation as to what may have happened here, it seems somewhat reasonable to conclude (at least given the charges) that Ahmad and Bridi stopped their alleged offending months ago. However, according to records kept by Scene-watching sites (known as pre-databases), SPARKS-related groups continued releasing content online until fairly recently.

That aside, what we can confirm today is that Norway's National Criminal Investigation Service, commonly known as Kripos, carried out raids at several premises this week and seized computer equipment on what is being described as a "large scale". In addition, three men – who are yet to be named but are in their 30s and 40s – were arrested and charged with breaches of Norway's Copyright Act. It is not currently known whether 39-year-old Oslo resident Umar Ahmad is among them. Danish authorities have also confirmed that four men, aged between 35 and 48, had their homes searched and were subsequently charged with copyright infringement offenses. Servers and other pieces of IT equipment were seized.

Some Warning Signs Were Spotted a While Ago

It's certainly possible that SPARKS members were absolutely oblivious to the US Government's investigation but according to one difficult-to-verify source, who insisted on anonymity but spoke with us at length and in considerable detail, this year and "before COVID", some Scene members were questioning why a particular SPARKS member had suddenly "retired". We are not publishing that member's name here (which we believe was provided to us in advance of the unsealing of the US indictment) but according to the same source, another possibly-connected mystery was still lingering.

The source alleges that some months earlier an individual connected to a separate yet prominent release group also "went afk" and suddenly stopped providing content. Again, we aren't publishing the name of that group or the nickname of the person involved but we can confirm that the group in question stopped releasing several months before the end of 2019. This led to rumors that one or both may have been compromised and hadn't just taken a break. The relevance is that, according to the same insider, the pair (coincidentally or not) are believed to have shared the same content sources.
Again, this is unconfirmed information but the first group has never returned to action and the second has the US Government on the attack after it uncovered where the group was obtaining its DVD and Blu-Ray discs.

Significant Legal Action in Sweden

After receiving initial information, later confirmed by the USDOJ, that significant action had taken place in Sweden, we spoke on Tuesday with Jon Karlung, the owner of ISP Bahnhof, which we were informed may have been visited by the authorities investigating SPARKS. That turned out not to be the case. Karlung told us that nobody had visited the company nor requested information. However, he said that with 400,000 households and 10,000 companies as clients, plus the company's sale of bandwidth capacity to other ISPs, he couldn't rule out that someone way down the chain, even a client of someone else, may have been visited.

Whether connected to this specific ISP or not, multiple sources informed us that at least one topsite affiliated with multiple groups utilized a high-bandwidth home link in Sweden, with another topsite connected to multiple groups also seized in the country. What we know from official sources is that there were 14 house searches carried out in Sweden on Tuesday, including in Umeå, Malmö, Gothenburg and Stockholm. No one was arrested during the raids but according to prosecutor Johanna Kolga, more servers were seized in Sweden than anywhere else.

Netherlands Action and the Existence of MLATs

Finding information about what happened in the Netherlands led us to Tim Kuik of anti-piracy group BREIN. We put it to him that if anyone in the country knows anything about the case, it must be him. Like most other people, Kuik wasn't budging on detail. But he did offer a plausible explanation for the silence.

"It is an interesting case indeed. It is entirely possible for so-called MLATs to be carried out on the request of say US law enforcement and the Dutch authorities carrying it out without informing any private stakeholders," Kuik told us. "In such cases it may be so that stakeholders abroad, who may have filed a criminal complaint for example, have been made aware and would not be at liberty to say anything about it. So nobody is likely to comment I think. But you can always try. I have no comment."

Later, however, Eurojust – the European Union Agency for Criminal Justice Cooperation – confirmed that it "helped transmit and facilitate the execution of over 30 Mutual Legal Assistance requests and Letters of Request necessary for taking down the servers and executing searches." In all, over 60 servers were taken down in North America, Europe and Asia and "several main suspects" were arrested, the agency added.

Interesting Allegations, Few New Releases, and Kevin Bacon

Over the past 48+ hours, TF has been provided with a list of topsites and related infrastructure that have either been raided or taken down as a precautionary measure. The dozen-plus platforms will therefore remain unnamed, as we simply cannot determine which of the platforms are offline voluntarily, or down because they have been seized.

This leads us to why so many sites and other key pieces of infrastructure have disappeared, apparently just because one group was targeted. The reasons, we are told, are complex but can be boiled down to the number of connections SPARKS had in The Scene. One recurring theme is that one of SPARKS' members is claimed to have become quite influential and as a result may have "extended his tentacles too far", as one source framed it.
These connections, with many other groups and activities, may go some way to explaining why The Scene all but shut down Tuesday. If we take Bacon's Law and apply it here, the response makes complete sense. Nevertheless, the scale of the shutdown is unusual, to say the least, and only time will tell if The Scene will fully recover.

For the average torrent or streaming site user, a period of reduced new content availability might be on the horizon but history shows us that it rarely lasts for long and that the cycle will probably begin again, once people have figured out who they can trust.

Source: TorrentFreak
4. The top-tier piracy world known as The Scene is in turmoil after the unsealing of US indictments targeting key members of release group SPARKS and several linked affiliate groups including GECKOS, DRONES, ROVERS and SPLiNTERS. Sources inform TF that there were several raids across Europe yesterday, mainly focused on Norway and Sweden.

Yesterday morning, TorrentFreak began receiving reports from multiple sources that something big was happening in the shadowy world of top-tier piracy known as 'The Scene'. From the volumes of information received, the majority of sources indicated that many so-called 'topsites' and their members had disappeared or gone into hiding. The word was that several major movie release groups – SPARKS, GECKOS and DRONES – had been targeted in a series of raids and as a result, people were running for cover.

Precisely where these raids or actions took place still isn't entirely clear. Multiple sources point to the Nordic region, particularly Norway and Sweden, but reports of disruption and/or action in the Netherlands and even Switzerland persisted across our confidential sources, all of whom demand anonymity. Importantly, one name kept cropping up – 'Artist' – someone who was identified by at least two individuals with inside knowledge as a central figure, not only in The Scene but also in the action that appeared to unfold yesterday. The reality, it now transpires, is that the events of Tuesday had their roots in an investigation that started months – maybe even years – ago.

US Govt Legal Action Launched in January 2020 – George Bridi

On January 8, 2020, an indictment in the case United States v. BRIDI was filed and sealed in a New York district court. Yesterday, as the action in Europe was unfolding, Magistrate Judge Debra C. Freeman ordered the indictment to be unsealed, shining significant light on the events of Tuesday.

The indictment and Grand Jury charges target George Bridi, a British national who, according to the US Government, resided on the Isle of Wight, a small island off the south coast of England. Bridi is identified, along with others "known and unknown", as a member of a criminal conspiracy, aka release group SPARKS and its affiliates. His Scene nickname, if he has one, is not listed in the available documents. Nevertheless, the US Government claims to know much about his activities.

"The primary objective of the Sparks Group was to fraudulently obtain DVDs and Blu-Ray discs for copyrighted movies and television shows prior to their retail release date, compromise the copyright protections on the discs, reproduce and upload the copyrighted content to servers controlled by the Sparks Group, and disseminate the copyrighted content on the Internet for public consumption before the DVDs and Blu-Ray discs were made available for sale by retailers to the public," Bridi's indictment reads.

The 'conspiracy', at least in the case of Bridi, is alleged to have taken place from around 2011 until January 2020. It's claimed that members of SPARKS made various "material misrepresentations and omissions" to wholesale distributors in Manhattan, Brooklyn, and New Jersey in order to obtain copies of DVDs and Blu-Ray discs prior to their official release. The indictment indicates that Bridi was also involved in obtaining and reproducing TV show content. Once obtained, members of SPARKS allegedly used specialist software to "rip" the discs in order to remove their copyright protections and then encoded the content into a format easily consumed via the Internet.
This content was then uploaded to servers controlled by SPARKS members, from where it's alleged other members further distributed the content to streaming sites, torrent networks and other servers. Adding to the information received yesterday by TF indicating that SPARKS, GECKOS and DRONES were central to Tuesday's turmoil, the indictment adds two other release groups to the list – ROVERS and SPLiNTERS – both claimed to be part of the 'Sparks Conspiracy'. Bridi is charged with conspiracy to commit copyright infringement, conspiracy to commit wire fraud, and conspiracy to transport stolen property interstate.

Superseding Indictment – Umar Ahmad, aka 'Artist'

In addition to information received early yesterday by TF, with multiple sources referencing someone called 'Artist' in a central role, unsealed court documents now put more flesh on the bones. Identified by the US Government as Oslo, Norway, resident Umar Ahmad, it's alleged that from around 2011 up to January 2020, 'Artist' was also a member of the 'Sparks Conspiracy'. In common with Bridi, he is alleged to have caused "tens of millions of dollars" in losses to film production studios by being part of the SPARKS group that obtained physical discs from wholesale distribution companies, ripped and encoded them, and placed the content on the Internet for public consumption. Along with Bridi, he faces a charge of conspiracy to commit criminal copyright infringement but the wire fraud and transporting stolen property charges are absent.

Superseding Indictment – Jonatan Correa, aka 'Raid'

Like Bridi and Ahmad, it's alleged that Correa was involved in the 'Sparks Conspiracy' from around 2011, but his involvement is said to have stretched several months beyond the January 2020 dates listed in their Grand Jury charges, to August 2020. For reasons that aren't yet entirely clear, he appears to have carried on his alleged criminal activity until this month, unlike his peers, who are only charged with offenses up to the first month of 2020.

Correa's charges read very much like Bridi's and Ahmad's, in that he stands accused of being part of the 'Sparks Conspiracy' that unlawfully obtained, ripped, encoded and uploaded video content to the Internet, including to various servers, streaming and torrent sites, all in advance of official street dates.

"On numerous occasions between in or around 2011 and in or around May 2020, a co-conspirator not named herein fraudulently arranged for discs containing copyrighted films and television shows to be picked up, mailed, or delivered from distributors located in Manhattan, Brooklyn, New Jersey, and British Columbia, Canada to other members of the Sparks Group, including JONATAN CORREA, a/k/a 'Raid,' the defendant, prior to their official release date…," Correa's indictment reads.

It's further alleged that between in or around 2011 and in or around May 2020, Correa "remotely accessed" a computer located in Westchester County and belonging to another unnamed co-conspirator, in order to "illegally record and reproduce" copyrighted TV shows. While Bridi and Ahmad's locations are revealed in court documents, Correa's currently remains unknown. Like Bridi, he faces a charge of conspiracy to commit criminal copyright infringement but in common with Ahmad, the wire fraud and transporting stolen property charges are absent.

Supporting Reports and Information

TorrentFreak is currently liaising with a number of sources who, for security reasons, are demanding anonymity.
However, it appears that the information detailed thus far is just the tip of a large iceberg that has much of The Scene in turmoil and/or in hiding. What we can say at this point is that the mention of Umar Ahmad, aka 'Artist', is causing considerable concern because of his alleged reach in The Scene, which seems to go well beyond SPARKS and its affiliated groups. We'll have more on this in a follow-up report but in the meantime, the US court documents can be found here (1,2,3 pdf).

Source: TorrentFreak
5. ZFS versus RAID: Eight Ironwolf disks, two filesystems, one winner

We exhaustively tested ZFS and RAID performance on our Storage Hot Rod server.

Image caption: Neither the stopwatch nor the denim jacket is strictly necessary, if we're being honest about it. (Aurich Lawson / Getty)

This has been a long while in the making—it's test results time. To truly understand the fundamentals of computer storage, it's important to explore the impact of various conventional RAID (Redundant Array of Inexpensive Disks) topologies on performance. It's also important to understand what ZFS is and how it works. But at some point, people (particularly computer enthusiasts on the Internet) want numbers.

First, a quick note: This testing, naturally, builds on those fundamentals. We're going to draw heavily on lessons learned as we explore ZFS topologies here. If you aren't yet entirely solid on the difference between pools and vdevs, or on what ashift and recordsize mean, we strongly recommend you revisit those explainers before diving into testing and results.

And although everybody loves to see raw numbers, we urge an additional focus on how these figures relate to one another. All of our charts relate the performance of ZFS pool topologies at sizes from two to eight disks to the performance of a single disk. If you change the model of disk, your raw numbers will change accordingly—but for the most part, their relation to a single disk's performance will not.

Equipment as tested

[Image gallery: see the source link for all 2 images.]

We used the eight empty bays in our Summer 2019 Storage Hot Rod for this test. It's got oodles of RAM and more than enough CPU horsepower to chew through these storage tests without breaking a sweat.

Specs at a glance: Summer 2019 Storage Hot Rod, as tested

  OS:              Ubuntu 18.04.4 LTS
  CPU:             AMD Ryzen 7 2700X—$250 on Amazon
  RAM:             64GB ECC DDR4 UDIMM kit—$459 at Amazon
  Storage Adapter: LSI-9300-8i 8-port Host Bus Adapter—$148 at Amazon
  Storage:         8x 12TB Seagate Ironwolf—$320 ea at Amazon
  Motherboard:     Asrock Rack X470D4U—$260 at Amazon
  PSU:             EVGA 850GQ Semi Modular PSU—$140 at Adorama
  Chassis:         Rosewill RSV-L4112—Typically $260, currently unavailable due to CV19

The Storage Hot Rod also has a dedicated LSI-9300-8i Host Bus Adapter (HBA), which isn't used for anything but the disks under test. The first four bays of the chassis hold our own backup data—but they were idle during all tests here and are attached to the motherboard's SATA controller, entirely isolated from our test arrays.

How we tested

As always, we used fio to perform all of our storage tests. We ran them locally on the Hot Rod, and we used three basic random-access test types: read, write, and sync write. Each of the tests was run with both 4K and 1M blocksizes, and we ran the tests both with a single process at iodepth=1 and with eight processes at iodepth=8.

For all tests, we're using ZFS on Linux 0.7.5, as found in the main repositories for Ubuntu 18.04 LTS. It's worth noting that ZFS on Linux 0.7.5 is two years old now—there are features and performance improvements in newer versions of OpenZFS that weren't available in 0.7.5. We tested with 0.7.5 anyway—much to the annoyance of at least one very senior OpenZFS developer—because when we ran the tests, 18.04 was the most current Ubuntu LTS and one of the most current stable distributions in general.
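The article doesn't reproduce its exact fio command lines, so here is a rough sketch only: an invocation along these lines matches the eight-process, iodepth=8 random-write case described above. The job name, target directory, file size, and ioengine choice are our own assumptions, not the author's.

  # 4KiB random writes, 8 processes at iodepth=8 (illustrative values only)
  fio --name=rand4k --directory=/tank/fio --ioengine=posixaio \
      --rw=randwrite --bs=4k --size=1g --numjobs=8 --iodepth=8 \
      --end_fsync=1 --group_reporting
  # swap --rw=randwrite for --rw=randread to test reads; adding --fsync=1
  # forces an fsync after every write, approximating the sync-write tests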
In the next article in this series—on ZFS tuning and optimization—we'll update to the brand-new Ubuntu 20.04 LTS and a much newer ZFS on Linux 0.8.3.

Initial setup: ZFS vs mdraid/ext4

When we tested mdadm and ext4, we didn't really use the entire disk—we created a 1TiB partition at the head of each disk and used those 1TiB partitions. We also had to invoke arcane arguments—mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0—to keep ext4's preallocation from contaminating our results.

Using these relatively small partitions instead of the entire disks was a practical necessity, since ext4 needs to grovel over the entire created filesystem and disperse preallocated metadata blocks throughout. If we had used the full disks, the usable space on the eight-disk RAID6 topology would have been roughly 65TiB—and it would have taken several hours to format, with similar agonizing waits for every topology tested.

ZFS, happily, doesn't need or want to preallocate metadata blocks—it creates them on the fly as they become necessary instead. So we fed ZFS each 12TB Ironwolf disk in its entirety, and we didn't need to wait through lengthy formatting procedures—each topology, even the largest, was ready for use a second or two after creation, with no special arguments needed.

ZFS vs conventional RAID

A conventional RAID array is a simple abstraction layer that sits between a filesystem and a set of disks. It presents the entire array as a virtual "disk" device that, from the filesystem's perspective, is indistinguishable from an actual, individual disk—even if it's significantly larger than the largest single disk might be.

ZFS is an entirely different animal, and it encompasses functions that normally might occupy three separate layers in a traditional Unix-like system. It's a logical volume manager, a RAID system, and a filesystem all wrapped into one. Merging traditional layers like this has caused many a senior admin to grind their teeth in outrage, but there are very good reasons for it. ZFS offers an absolute ton of features, and users unfamiliar with them are highly encouraged to take a look at our 2014 coverage of next-generation filesystems for a basic overview, as well as our recent ZFS 101 article for a much more comprehensive explanation.

Megabytes vs Mebibytes

As in the last article, our units of performance measurement here are kibibytes (KiB) and mebibytes (MiB). A kibibyte is 1,024 bytes, a mebibyte is 1,024 kibibytes, and so forth—in contrast to a kilobyte, which is 1,000 bytes, and a megabyte, which is 1,000 kilobytes.

Kibibytes and their big siblings have always been the standard units for computer storage. Prior to the 1990s, computer professionals simply referred to them as K and M—and used the inaccurate metric prefixes when they spelled them out. But any time your operating system refers to GB, MB, or KB—whether in terms of free space, network speed, or amounts of RAM—it's really referring to GiB, MiB, and KiB. Storage vendors, unfortunately, eventually seized upon the difference between the metrics as a way to more cheaply produce "gigabyte" drives and then "terabyte" drives—so a 500GB SSD is really only 465 GiB, and the 12TB hard drives we're testing today are really only 10.9TiB each.

Testing and analysis, using ZFS default settings

As we did with the mdraid performance tests, we used fio to test our Ironwolf disks under ZFS.
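To recap that setup difference at the command line, here is a sketch only, with hypothetical device names (sda through sdh) standing in for the actual Ironwolf drives; these are not commands taken from the article:

  # mdraid6 + ext4, built on the 1TiB head partitions described above
  mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[a-h]1
  mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0
  # ZFS gets each whole disk directly; the pool is usable seconds later,
  # since no metadata preallocation pass is needed
  zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh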
Once again, we're focusing entirely on random access, in two block sizes: 4KiB and 1MiB, which we test for read, write, and synchronous write on all topologies. We also have some additional variables in play when testing ZFS. We wanted to show what happens when you misconfigure the ashift value (which sets the sector size) and also what happens when you tune recordsize to better reflect your workload.

In order to focus on all this juicy and relevant data, we needed to cut some fluff—so we're only looking at multi-process operations this time, with fio set to numjobs=8 and iodepth=8. The major reason we tested single-process operations in the mdraid performance article was to very directly demonstrate that multiple-disk topologies tend not to accelerate single-threaded workloads. That's still true here—and while we did test single-threaded workloads against ZFS, there was nothing to be seen there that you don't see in the otherwise much more interesting multi-threaded, 8-process workloads. So that's what we're focusing on.

Performance scales with vdevs

[Image gallery: see the source link for all 3 images.]

One of the most common—and most pernicious—myths we encounter when talking to people about storage is the idea that performance scales well with the number of disks in a vdev, rather than with the number of vdevs in a pool. The very mistaken idea—which seems reasonable on the surface—is that as the number of data chunks in a stripe goes up, performance goes up with it. If you have an eight-disk RAIDz2, you've got six "data disks" per stripe, so six times the performance, give or take—right? Meanwhile, an 8-disk pool of mirrors only has four "data disks" per stripe, so—lower performance!

In the above charts, we show performance trends per vdev for single-disk, 2-disk mirror, and 4-disk RAIDz2 vdevs. These are shown in solid lines, and the "joker"—a single RAIDz2 vdev, becoming increasingly wide—is the dark, dashed line. So at n=4, we're looking at four single-disk vdevs, four 2-disk mirror vdevs—and a single, 6-wide RAIDz2 vdev. Remember, n for the joker isn't the total number of disks—it's the total number of disks, minus parity.

Let's take a look at n=2 on the 1M Async Write chart above. For single-disk vdevs, two-wide mirror vdevs, and four-wide RAIDz2 vdevs, we see just the scaling we'd expect—at 202 percent, 203 percent, and 207 percent of the performance of a single vdev of the same class. The RAIDz2 "joker" at n=2, on the other hand, is four disks wide—and since RAIDz2 is dual parity, that means it's got two "data disks," hence n=2. And we can see that it's underperforming badly compared to the per-vdev lines, with only 160 percent of the performance of a single "data disk." The trend only gets worse as the single RAIDz gets wider, while the trend lines for per-vdev scaling keep a clean, positive linear slope.

We see similar trends in 4K writes and 1M reads alike. The closest the increasingly wide single-RAIDz2 vdev comes to linear scale is on 1MiB reads, where at first its per-disk scale appears to keep up with per-vdev scale. But it falls off sharply after n=4—and that trend will continue to get worse as the vdev gets wider. We'll talk about how and why those reads fall off so sharply later—but for now, let's get into some simple performance tests, with raw numbers.

RAIDz2 vs RAID6—default settings

[Image gallery: see the source link for all 3 images.]
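Before the head-to-head numbers, it may help to see the vdev-count distinction from the charts above expressed as commands. This is a sketch with hypothetical device and pool names, not configurations quoted from the article:

  # n=4 on the mirror line: four 2-disk mirror vdevs in one pool
  zpool create mirrors mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh
  # the "joker" at n=6: one 8-wide RAIDz2 vdev (six data disks plus two parity)
  zpool create joker raidz2 sda sdb sdc sdd sde sdf sdg sdh
  # ZFS stripes writes across vdevs, so the first pool behaves like four
  # independent units, while the second is a single unit no matter how wide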
If you carefully test RAIDz2 versus RAID6, the first thing that stands out is just how fast the writes are. Even sync writes, which you might intuitively expect to be slower on ZFS—since they must often be "double-committed," once to the ZIL and once again to main storage—are significantly faster than they were on mdraid6.

Uncached reads, unfortunately, trend the other way—RAIDz vdevs tend to pay for their fast writes with slow reads. This disadvantage tends to be strongly offset in the real world by the ARC's higher cache hit ratio as compared to the simple LRU cache used by the kernel, but that effect is difficult or impossible to estimate in simple, synthetic tests like these. On servers with heavy, concurrent mixed read/write workloads, the effect of RAIDz's slow reads can also be offset by how much faster the writes are going—remember, storage is effectively half-duplex; you can't read and write at the same time. Any decrease in utilization on the write side will show up as increased availability on the read side. Storage is a complex beast!

Caveats, hedges, and weasel words aside, let's make this clear—RAIDz is not a strong performer for uncached, pure read workloads.

[Image gallery: see the source link for all 3 images.]

For the most part, we're seeing the same phenomenon at the 4KiB scale that we did at 1MiB—RAIDz2 handily beats RAID6 at writes, but it gets its butt handed to it on reads. What might not be quite so obvious is that RAIDz2 is suffering badly from a misconfiguration here—most users' real-world experience of 4KiB I/O comes from small files, such as the dotfiles in a Linux user's home directory, the similar .INIs and what have you in a Windows user's home directory, and the hordes of small .INI or .conf files most systems are plagued with. But the 4KiB RAIDz2 performance you're seeing here is not the RAIDz2 performance you'd see with 4KiB files.

You see, fio writes single, very large files and seeks inside them. Since we're testing at the default recordsize of 128KiB, each RAIDz2 4KiB read is forced to pull in an additional 124KiB of useless, unwanted data. Later, we'll see what happens when we properly tune our ZFS system with an appropriate recordsize—which much more closely approximates real-world experience with 4KiB I/O in small files, even without tuning. But for now, let's keep everything on defaults and move ahead to performance when using two-wide mirror vdevs.

ZFS mirror vdevs vs RAID10—default settings

[Image gallery: see the source link for all 3 images.]

The comparison between mirror vdevs and RAID10 is a fun one, because these are easily the highest-performing topologies. Who doesn't like big numbers? At first blush, the two seem pretty evenly matched. In 1MiB write, sync write, and uncached read, both systems exhibit near-linear scaling with positive slope, and they tend to be pretty close. RAID10 clearly has the upper hand when it comes to pure uncached reads—but unlike RAIDz2 vs RAID6, the lead isn't enormous. We're leaving a lot of performance on the table, though, by sticking to default settings. We'll revisit that, but for now let's move on to see how mirrors and RAID10 fare with 4KiB random I/O.

[Image gallery: see the source link for all 3 images.]

Although the curve is a little tweaky, we see Linux RAID10 clearly outperform ZFS mirrors on 4KiB writes—a first, for mdraid and ext4.
But when we move on to sync 4KiB writes, the trend reverses. RAID10 is unable to keep up with even a single ext4 disk, while the pool of mirrors soars to better than 500 percent of a single ext4 disk's performance.

Moving on to 4KiB reads, we once again see the ZFS topology suffering due to misconfiguration. Having left our recordsize at the default 128KiB, we're reading in an extra 124KiB with every 4KiB we actually want. In some cases, we can get some use out of that unnecessary data later—by caching it and servicing future read requests from the cache instead of from the metal. But we're working with a large dataset, so very few of those cached "extras" are ever of any use.

Retesting ZFS with recordsize set correctly

We believe it's important to test things the way they come out of the box. The shipping defaults should be sane defaults, and that's a good place for everyone to start from. While the ZFS defaults are reasonably sane, fio doesn't interact with disks in quite the same way most users normally do. Most user interaction with storage can be characterized by reading and writing files in their entirety—and that's not what fio does. When you ask fio to show you random read and write behavior, it creates one very large file for each testing process (eight of them, in today's tests), and each process seeks within its own large file.

With the default recordsize=128K, ZFS will store a 4KiB file in an undersized record, which occupies only a single 4KiB sector—and reads of that file later will also only need to light up a single 4KiB sector on disk. But when performing 4KiB random I/O with fio, since the 4KiB requests are pieces of a very large file, ZFS must read (and write) the requests in full-sized 128KiB increments.

Although the impact is somewhat smaller, the default 128KiB recordsize also penalizes large-file access. After all, it's not exactly optimal to store and retrieve an 8MiB digital photo in 64 128KiB blocks rather than only 8 1MiB blocks. In this section, we're going to set recordsize=4K on the test dataset for the 4KiB random I/O tests, and recordsize=1M for the 1MiB random I/O tests.

Is ZFS getting "special treatment" here?

An experienced sysadmin might reasonably object to ZFS being given special treatment while mdraid is left at its default settings. But there's a reason for that, and it's not just "we really like ZFS." While you can certainly tune chunk size on a kernel RAID array, any such tuning affects the entire device globally. If you tune a 48TB mdraid10 for 4KiB I/O, it's going to absolutely suck at 1MiB I/O—and similarly, a 48TB mdraid10 tuned for 1MiB I/O will perform horribly at 4KiB I/O. To fix that, you must destroy the entire array and any filesystems and data on it, recreate everything from scratch, and restore your data from backup—and it can still only be tuned for one performance use case.

In sharp contrast, if you've got a 48TB ZFS pool, you can set recordsize per dataset—and datasets can be created and destroyed as easily as folders. If your ZFS server has 20TiB of random user-saved files (most of which are several MiB, such as photos, movies, and office documents) along with a 2TiB MySQL database, each can coexist peacefully and simply:

  # zfs create pool/samba
  # zfs set recordsize=1M pool/samba
  # zfs create pool/mysql
  # zfs set recordsize=16K pool/mysql

Just like that, you've created what look like "folders" on the server, each optimized for the workloads to be found within.
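To confirm that the per-dataset tuning took effect, you can query the property directly. A quick sketch, reusing the dataset names from the example above:

  # verify the recordsize values that were just set
  zfs get recordsize pool/samba pool/mysql

Because recordsize is an ordinary ZFS property, the same zfs get/zfs set pair works for any dataset at any time, which is exactly the flexibility being contrasted with mdraid's global, destroy-and-rebuild tuning.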
If your users create a bunch of 4KiB files, that's fine—the 4KiB files will still only occupy one sector apiece, while the larger files reap the benefit of similarly large logical block sizes. Meanwhile, the MySQL database gets a recordsize that perfectly matches its own internal 16KiB page size, optimizing performance there without hurting it on the rest of the server. If you install a PostgreSQL instance later, you can tune for its default 8KiB page size just as easily:

  # zfs create pool/postgres
  # zfs set recordsize=8K pool/postgres

And if you later re-tune your MySQL instance to use a larger or smaller page size, you can re-tune your ZFS dataset to match. If all you do is change recordsize, the already-written data won't change, but any new writes to the database will follow the dataset's new recordsize parameter. (If you want to re-write the existing data structure, you also need to do a block-for-block copy of it, e.g. with the mv command.)

ZFS recordsize=1M—large blocks for large files

We know everybody loves to see big performance numbers, so let's look at some. In this section, we're going to re-run our earlier 1MiB read, write, and sync write workloads against ZFS datasets with recordsize=1M set. We want to reiterate that this is a pretty friendly configuration for any normal "directory full of files" type of situation—ZFS will write smaller files in smaller blocks automatically. You really only need smaller recordsize settings in special cases with a lot of random access inside large files, such as database binaries and VM images.

RAIDz2 vs RAID6—1MiB random I/O, recordsize=1M

[Image gallery: see the source link for all 3 images.]

RAIDz2 writes really take off when recordsize is tuned to fio's workload. Its 1MiB asynchronous writes leap from 568MiB/sec to 950MiB/sec, sometimes higher. These are rewrites of an existing fio workload file: the first fio write test always goes significantly faster on ZFS storage than additional test runs re-using the same file do. This effect doesn't compound, however—the second, 20th, and 200th runs will always be the same. In the interests of fairness, we throw away that first, significantly higher test run for ZFS. In this case, that first "throwaway" async write run came in at 1,238MiB/sec.

Sync writes are similarly boosted, with RAIDz2 turning in an additional 54MiB/sec over its untuned results, more than doubling RAID6's already-lagging performance. Unfortunately, tuning recordsize didn't help our 1MiB uncached reads—although the eight-wide RAIDz2 vdev improved by just under 100MiB/sec, it still lags significantly behind mdraid6 with five or more disks in the vdev. A pool with two four-wide RAIDz2 vdevs (not shown above) comes much closer, pulling 406MiB/sec to eight-wide mdraid6's 485MiB/sec.

RAIDz2 vs RAID6—4KiB random I/O, recordsize=4K

[Image gallery: see the source link for all 3 images.]

Moving along to 4KiB random-access I/O, we see the same broad trends observed above. In both sync and async writes, RAIDz2 drastically outperforms RAID6. When committing small writes, RAID6 arrays fail to keep up with even a single ext4 disk. This trend reverses, again, when shifting to uncached reads. Wide RAIDz2 vdevs do at least manage to outperform a single ext4 disk, but they lag significantly behind mdraid and ext4, with an eight-wide RAIDz2 vdev being outperformed roughly 4:1.
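For reference, the two-by-four-wide pool mentioned above could be created like this. Hypothetical device names again; the article doesn't show this command:

  # two 4-wide RAIDz2 vdevs in one pool: the same eight disks deliver two
  # vdevs' worth of read throughput, at the cost of two extra parity disks
  # compared to a single 8-wide RAIDz2
  zpool create tank raidz2 sda sdb sdc sdd raidz2 sde sdf sdg sdh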
When deciding between these two topologies on a performance basis, the question becomes whether you'd prefer a 20:1 increase in write performance at the expense of a 4:1 decrease in reads, or vice versa. On the surface of it, this sounds like a no-brainer—but different workloads are different. A cautious—and wise!—admin would be well advised to do workload-specific testing both ways before making a final decision, if performance is the only metric that matters.

ZFS mirror vdevs vs RAID10—1MiB random I/O, recordsize=1M

[Image gallery: see the source link for all 3 images.]

If you were looking for an unvarnished performance victory for team ZFS, here's where you'll find it. RAID10 is the highest-performing conventional RAID topology on every metric we test, and a properly-tuned ZFS pool of 2-wide mirror vdevs outperforms it everywhere. We can already hear conventional RAID fans grumbling that their hardware RAID would outrun that silly ZFS, since it's got a battery- or supercapacitor-backed cache and will therefore handle sync writes about as rapidly as standard asynchronous writes—but the governor's not entirely off on the ZFS side of that race, either. (We'll cover the use of a LOG vdev to accelerate sync writes in a different article.)

ZFS mirror vdevs vs RAID10—4KiB random I/O, recordsize=4K

[Image gallery: see the source link for all 3 images.]

Down in the weeds at 4KiB blocksize, our pool of mirrors strongly outperforms RAID10 on both synchronous and asynchronous writes. It does, however, lose to RAID10 on uncached 4KiB reads. Much like the comparison of RAIDz2 vs RAID6, however, it's important to look at the ratios. Although a pure 4KiB uncached random read workload performs not quite twice as well on RAID10 as on ZFS mirrors, such a workload is probably fairly rare—and the write performance advantage swings 5:1 in the other direction (or 12:1, for sync writes). Most 4KiB-heavy workloads will also be constantly saturated workloads, with the disks constantly thrashing to keep up with demand—meaning that it wouldn't take many write operations to overwhelm RAID10's 4KiB read performance benefits with its write performance discrepancies. 4KiB random read workloads also tend to heavily favor better cache algorithms. We did not attempt to test cache efficiency here, but an ARC can safely be assumed to strongly outperform a simple LRU on nearly any workload.

Why are RAIDz2 reads so slow?

[Image gallery: see the source link for all 5 images.]

There's one burning question that needs to be answered after all this testing: why are RAIDz reads so much slower than conventional RAID6 reads? With recordsize tuned appropriately for the workload, RAIDz2 outperforms RAID6 on writes by as much as 20:1—which makes it that much more confusing why reads would be slower. The answer is fairly simple, and it largely amounts to the flip side of the same coin. Remember the RAID hole? Conventional RAID6 is not only willing but effectively forced to pack multiple blocks/files into the same stripe, since it doesn't have a variable stripe width. In addition to opening up the potential for corruption due to partial stripe writes, this subjects RAID6 arrays to punishing read-modify-write performance penalties when writing partial stripes. RAIDz2, on the other hand, writes every block or file as a full stripe—even very small ones—by adjusting the width of the stripe.
We captured the difference between the two topologies' 1MiB reads in the series of screenshots above. When recordsize is set to 1M, a 1MiB block gets carved into roughly 176KiB chunks and distributed among six of the eight disks in an eight-wide RAIDz2, with the other two disks each carrying a roughly 176KiB parity chunk. So when we read a 1MiB block from our eight-wide RAIDz2, we light up six of the eight disks to do so.

By contrast, the same disks in an eight-wide RAID6 default to a 512KiB chunk size—which means 512KiB of data (or parity) is written to each disk during a RAID6 write. When we go back to read that data from the RAID6, we only need to light up two of our eight disks for each block, as compared to RAIDz2's six. In addition, the two disks we light up on RAID6 are performing a larger, higher-efficiency operation: they're reading 128 contiguous 4KiB on-disk sectors, as compared to RAIDz2's six disks reading only 44 contiguous 4KiB on-disk sectors each.

If we want to get deep into the weeds, we could more extensively tune the RAIDz2 to work around this penalty: we could set zfs_max_recordsize=4194304 in /etc/modprobe.d/zfs.conf, export all pools and reload the ZFS kernel module, then zfs set recordsize=3M on the dataset. Setting a 3MiB recordsize would mean that each disk gets 512KiB chunks, just like RAID6 does, and performance would go up accordingly—if we write a 1MiB record, it gets stored on two disks in 512KiB chunks. And when we read it back, we read that 1MiB record by lighting up only those two disks, just as we did on RAID6.

Unfortunately, that also means storage efficiency goes down—because that 1MiB record was written as an undersized stripe, with two chunks of data and two chunks of parity. So now we're performing as well as or better than RAID6, but we're at 50 percent storage efficiency while it's still at 75 percent (six chunks out of every eight are data).

To be fair, this is a problem for the eight-process, iodepth=8 reads we tested—but not for single-process, iodepth=1 reads, which we tested but did not graph. For single-process reads, RAIDz2 significantly outperforms RAID6 (at 129MiB/sec to 47MiB/sec), and for the exact same reason: it lights up three times as many disks to read the same 1MiB of data. TANSTAAFL—There Ain't No Such Thing As A Free Lunch.

Conclusions

If you're looking for raw, unbridled performance, it's hard to argue against a properly-tuned pool of ZFS mirrors. RAID10 is the fastest per-disk conventional RAID topology in all metrics, and ZFS mirrors beat it resoundingly—sometimes by an order of magnitude—in every category tested, with the sole exception of 4KiB uncached reads.

ZFS' implementation of striped parity arrays—the RAIDz vdev type—is a bit more of a mixed bag. Although RAIDz2 decisively outperforms RAID6 on writes, it underperforms significantly on 1MiB reads. If you're implementing a striped parity array, 1MiB is hopefully the blocksize you're targeting in the first place, since those arrays are particularly awful with small blocksizes.

When you add in the wealth of additional features ZFS offers—incredibly fast replication, per-dataset tuning, automatic data healing, high-performance inline compression, instant formatting, dynamic quota application, and more—we think it's difficult to justify any other choice for most general-purpose server applications. ZFS still has more performance options to offer—we haven't yet covered the support vdev classes: LOG, CACHE, and SPECIAL.
We'll cover those—and perhaps experiment with recordsize larger than 1MiB—in another fundamentals-of-storage chapter soon.

Source: ZFS versus RAID: Eight Ironwolf disks, two filesystems, one winner (Ars Technica)

(To view the article's image galleries, please visit the above link)
6. The Malaysian Communications and Multimedia Commission (MCMC) confiscated 58 uncertified communication devices in a series of raids around the country. In a statement, it said the haul was the result of a joint operation with the Royal Malaysia Police (PDRM), with raids in 12 locations in Kuala Lumpur, Kedah, Perak and Pulau Pinang between July and October.

In a press statement, MCMC said the seized devices include antennas, signal boosters and repeaters, wireless TV transmitters, AV transmitters, Android TV boxes, walkie-talkies and RF repeaters. The cases are being investigated under Rule 16 of the Communications and Multimedia Act 1998 and the Communications and Multimedia (Technical Standards) Regulations 2000, which carries a jail term of up to six months, a fine of up to RM100,000, or both.

MCMC said it is taking a hard line against the possession and sale of uncertified communication devices, having tackled 19 cases involving 66 charges up to September this year and issuing RM432,000 in fines in total. This is up from RM55,500 in fines from 10 cases involving 15 charges for the whole of last year.

"Efforts to crack down on the use or sale of uncertified communication devices will be ramped up by the MCMC. This means more people involved with these devices will be facing justice," it stated.

The regulatory body encouraged members of the public to report such activities to the authorities and to only buy communication devices that display the MCMC label. Consumers can use the Check Your Label app on Google Play or the Apple App Store to verify that their device is certified.

Source: MCMC seizes 58 uncertified communication devices in latest crackdown (via The Star Online)
7. MANILA (Reuters) - Philippine authorities said on Friday they arrested 342 Chinese workers in a raid on an unlicensed gambling operation, part of a crackdown on illegal migration and an illicit gaming industry that is being fueled by mainland China's appetite for betting.

Image: Chinese workers illegally working at an unlicensed online gambling firm are arrested by Philippine authorities at an office building in Quezon City, Metro Manila, Philippines, December 19, 2019.

Licensed online gaming operations introduced in 2016 have been a boon in bringing money to the Philippine economy, but illegal businesses attracting massive numbers of Chinese migrants have also mushroomed, due largely to vested interests, corruption and weak law enforcement. Illegal operations far outnumber those being regulated, and they pay no tax. Law enforcement bodies and the Chinese government suspect some are fronts for crime, including money laundering.

The Chinese arrested late on Thursday were at a registered gambling firm that had yet to secure a license from the state gaming regulator. "We had reason to suspect that the company is a front for illegal cyber activities and investment scams," said Fortunato Manahan, chief of the Bureau of Immigration's intelligence division.

Though Philippine President Rodrigo Duterte has good relations with China, where gambling is prohibited, he has refused its request to ban gaming operations that cater to mainland Chinese. Chinese gaming companies and their employees have been blamed for driving up office and residential rents, so much so that Makati City, Manila's main business hub, has banned the issuing of new licenses to gaming firms. In August, Cambodia heeded Chinese pressure to ban online gambling, which had drawn both Chinese investment and crime to the country.

Source